
Journal articles on the topic 'Documental database model'



Consult the top 29 journal articles for your research on the topic 'Documental database model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Shichkina, Yulia, and Van Muon Ha. "Method for Creating Collections with Embedded Documents for Document-oriented Databases Taking into Account Executable Queries." SPIIRAS Proceedings 19, no. 4 (September 7, 2020): 829–54. http://dx.doi.org/10.15622/sp.2020.19.4.5.

Abstract:
In recent decades, NoSQL databases have become increasingly popular, and developers and database administrators often have to migrate databases from a relational model to a NoSQL model such as the document-oriented database MongoDB. This article discusses an approach to this data migration based on set theory. A new formal method is proposed for determining collections with embedded documents in document-oriented NoSQL databases that are optimal with respect to the run time of search queries. The attributes of database objects are taken into account when optimizing the number of collections and their structure for search queries. The initial data are the object properties (attributes and relationships between attributes) about which information is stored in the database, together with the properties of the queries that are executed most often or whose speed should be maximal. The article discusses the basic types of relationships (1-1, 1-M, M-M) typical of the relational model. The proposed method is the next step after the method of creating collections without embedded documents. The article also provides a method for determining which approach should be used in which cases to make work with the database more effective. Finally, the article shows the results of testing the proposed method on databases with different initial schemas. Experimental results show that the proposed method helps significantly reduce both the execution time of queries and the amount of memory required to store the data in the new database.
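The trade-off the method optimizes can be illustrated with a minimal sketch. Plain Python dictionaries stand in for MongoDB documents here, and the customer/order schema is an invented example, not taken from the paper:

```python
# Illustrative sketch (not the paper's formal method): the same 1-M
# relationship stored relationally vs. as a collection with embedded documents.

# Relational style: two "tables" joined by a foreign key.
customers = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Lin"}]
orders = [
    {"id": 10, "customer_id": 1, "total": 25.0},
    {"id": 11, "customer_id": 1, "total": 40.0},
    {"id": 12, "customer_id": 2, "total": 15.0},
]

def totals_relational(name):
    """Requires a join: look up the customer, then scan all orders."""
    cust = next(c for c in customers if c["name"] == name)
    return [o["total"] for o in orders if o["customer_id"] == cust["id"]]

# Document style: orders embedded in the customer document, so the frequent
# query "orders of a customer" needs a single lookup and no join.
customer_docs = [
    {"id": 1, "name": "Ada", "orders": [{"id": 10, "total": 25.0},
                                        {"id": 11, "total": 40.0}]},
    {"id": 2, "name": "Lin", "orders": [{"id": 12, "total": 15.0}]},
]

def totals_embedded(name):
    cust = next(c for c in customer_docs if c["name"] == name)
    return [o["total"] for o in cust["orders"]]
```

Embedding favors queries that read a parent together with its children, at the cost of duplicating or restructuring data; the paper's contribution is a formal way to decide which attributes to embed given the executable queries.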
2

Essin, D. J. "Intelligent Processing of Loosely Structured Documents as a Strategy for Organizing Electronic Health Care Records." Methods of Information in Medicine 32, no. 04 (1993): 265–68. http://dx.doi.org/10.1055/s-0038-1634938.

Abstract:
Loosely structured documents can capture more relevant information about medical events than is possible using today’s popular databases. In order to realize the full potential of this increased information content, techniques will be required that go beyond the static mapping of stored data into a single, rigid data model. Through intelligent processing, loosely structured documents can become a rich source of detailed data about actual events that can support the wide variety of applications needed to run a health-care organization, document medical care or conduct research. Abstraction and indirection are the means by which dynamic data models and intelligent processing are introduced into database systems. A system designed around loosely structured documents can evolve gracefully while preserving the integrity of the stored data. The ability to identify and locate the information contained within documents offers new opportunities to exchange data that can replace more rigid standards of data interchange.
3

Hirzalla, Naél, and Ahmed Karmouch. "A data model and a query language for multimedia documents databases." Multimedia Systems 7, no. 4 (July 1, 1999): 338–48. http://dx.doi.org/10.1007/s005300050135.

4

Kato, Hiroyuki, and Masatoshi Yoshikawa. "A model and queries for databases managing structured documents with object links." Systems and Computers in Japan 31, no. 6 (June 2000): 29–44. http://dx.doi.org/10.1002/(sici)1520-684x(200006)31:6<29::aid-scj4>3.0.co;2-0.

5

FONG, JOSEPH, HERBERT SHIU, and JENNY WONG. "METHODOLOGY FOR DATA CONVERSION FROM XML DOCUMENTS TO RELATIONS USING EXTENSIBLE STYLESHEET LANGUAGE TRANSFORMATION." International Journal of Software Engineering and Knowledge Engineering 19, no. 02 (March 2009): 249–81. http://dx.doi.org/10.1142/s0218194009004131.

Abstract:
Extensible Markup Language (XML) has been used for data transport and data transformation, while the business sector continues to store critical business data in relational databases. Extracting relational data and formatting it into XML documents, and then converting XML documents back to relational structures, has become a major daily activity, so it is important to have an efficient methodology to handle this conversion between XML documents and relational data. This paper performs data conversion from XML documents into relational databases and proposes a prototype and algorithms for this conversion process. The pre-process is schema translation using an XML schema definition. The proposed approach is based on the needs of an Order Information System to suggest a methodology that gains the benefits provided by XML technology and relational database management systems. The methodology is a stepwise procedure using an XML schema definition and Extensible Stylesheet Language Transformations (XSLT) to ensure that the data constraints are not sacrificed after data conversion. The data conversion is implemented by decomposing the XML document of a hierarchical tree model into normalized relations interrelated by their artifact primary keys and foreign keys. The transformation process is performed by XSLT. The paper also demonstrates the entire conversion process through a detailed case study.
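The decomposition step can be sketched as follows. This is a hedged illustration in Python rather than the paper's XSLT pipeline, and the order/item schema is a hypothetical example:

```python
import xml.etree.ElementTree as ET

# Hypothetical order document; the paper performs the equivalent
# decomposition with XSLT driven by an XML schema definition.
xml_doc = """
<orders>
  <order id="1" customer="Ada">
    <item sku="A1" qty="2"/>
    <item sku="B2" qty="1"/>
  </order>
</orders>
"""

def decompose(xml_text):
    """Flatten a hierarchical XML tree into two normalized relations,
    linked by a primary key / foreign key pair."""
    root = ET.fromstring(xml_text)
    order_rows, item_rows = [], []
    for order in root.findall("order"):
        pk = order.get("id")
        order_rows.append({"order_id": pk, "customer": order.get("customer")})
        for item in order.findall("item"):
            # the foreign key order_id ties each child row to its parent
            item_rows.append({"order_id": pk, "sku": item.get("sku"),
                              "qty": int(item.get("qty"))})
    return order_rows, item_rows
```

Each nesting level in the tree becomes its own relation, which is the essence of mapping a hierarchical XML model onto normalized tables.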
6

LLADÓS, JOSEP, MARÇAL RUSIÑOL, ALICIA FORNÉS, DAVID FERNÁNDEZ, and ANJAN DUTTA. "ON THE INFLUENCE OF WORD REPRESENTATIONS FOR HANDWRITTEN WORD SPOTTING IN HISTORICAL DOCUMENTS." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 05 (August 2012): 1263002. http://dx.doi.org/10.1142/s0218001412630025.

Abstract:
Word spotting is the process of retrieving all instances of a queried keyword from a digital library of document images. In this paper we evaluate the performance of different word descriptors to assess the advantages and disadvantages of statistical and structural models in a framework of query-by-example word spotting in historical documents. We compare four word representation models: sequence alignment using DTW as a baseline reference, a bag-of-visual-words approach as a statistical model, a pseudo-structural model based on a Loci features representation, and a structural approach where words are represented by graphs. The four approaches have been tested on two collections of historical data: the George Washington database and the marriage records from the Barcelona Cathedral. We experimentally demonstrate that statistical representations generally give better performance; however, it cannot be neglected that large descriptors are difficult to implement in a retrieval scenario where word spotting requires indexing millions of word images.
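The DTW baseline mentioned above can be sketched in a few lines. Real word-spotting systems align sequences of feature vectors extracted from word-image columns; for brevity this sketch aligns scalar sequences, which is an illustrative simplification:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of the insertion, deletion and match steps
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def spot(query, candidates, k=1):
    """Query-by-example: rank candidate word sequences by DTW distance."""
    return sorted(candidates, key=lambda w: dtw(query, w))[:k]
```

Because DTW tolerates local stretching and compression, two renderings of the same word score close even when written at different speeds or widths.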
7

ZHANG, HENG, DA-HAN WANG, CHENG-LIN LIU, and HORST BUNKE. "KEYWORD SPOTTING FROM ONLINE CHINESE HANDWRITTEN DOCUMENTS USING ONE-VERSUS-ALL CHARACTER CLASSIFICATION MODEL." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 03 (May 2013): 1353001. http://dx.doi.org/10.1142/s0218001413530017.

Abstract:
In this paper, we propose a method for text-query-based keyword spotting from online Chinese handwritten documents using a character classification model. The similarity between the query word and the handwriting is obtained by combining the character classification scores. The classifier is trained with a one-versus-all strategy so that it gives high similarity to the target class and low scores to the others. Using character-classification-based word similarity also helps overcome the out-of-vocabulary (OOV) problem. We use a character-synchronous dynamic search algorithm to efficiently spot the query word in a large database. The retrieval performance is further improved by using competing character confusion and writer-adaptive thresholds. Our experimental results on the large handwriting database CASIA-OLHWDB justify the superiority of one-versus-all trained classifiers and the benefits of confidence transformation, character confusion and adaptive thresholds. In particular, a one-versus-all trained prototype classifier performs as well as a linear support vector machine (SVM) classifier but requires much less index file storage. An experimental comparison with keyword spotting based on handwritten text recognition also demonstrates the effectiveness of the proposed method.
8

Mohebi, Azadeh, Mehri Sedighi, and Zahra Zargaran. "Subject-based retrieval of scientific documents, case study." Library Review 66, no. 6/7 (September 5, 2017): 549–69. http://dx.doi.org/10.1108/lr-10-2016-0090.

Abstract:
Purpose: The purpose of this paper is to introduce an approach for retrieving a set of scientific articles in the field of Information Technology (IT) from a scientific database such as Web of Science (WoS), to apply scientometric indices and compare them with other fields. Design/methodology/approach: The authors propose a statistical classification-based approach for extracting IT-related articles. In this approach, first, a probabilistic model of the subject IT is built using keyphrase extraction techniques. Then, IT-related articles are retrieved from all Iranian papers in WoS based on a Bayesian classification scheme: using the probabilistic IT model, an IT membership probability is assigned to each article in the database, and the articles with the highest probabilities are retrieved. Findings: The authors extracted a set of 1,497 IT keyphrases through the keyphrase extraction process for the probabilistic model. They evaluated the proposed retrieval approach against two alternatives: a query-based approach, in which articles are retrieved from WoS using a set of queries composed of a limited set of IT keywords, and a research-area-based approach, which retrieves articles using WoS categorizations and research areas. The evaluation and comparison results show that the proposed approach generates more accurate results while retrieving more articles related to IT. Research limitations/implications: Although this research is limited to the IT subject, it can be generalized to any subject. However, for multidisciplinary topics such as IT, special attention should be given to the keyphrase extraction phase. This research uses a bigram model; it could be extended to trigrams as well. Originality/value: This paper introduces an integrated approach for retrieving IT-related documents from a collection of scientific documents.
The approach has two main phases: building a model representing the topic IT, and retrieving documents based on that model. The model is based on a set of keyphrases extracted from a collection of IT articles. However, the extraction technique does not rely on Term Frequency-Inverse Document Frequency, since almost all of the articles in the collection share a set of the same keyphrases. In addition, a probabilistic membership score is defined to retrieve the IT articles from a collection of scientific articles.
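The probabilistic membership score can be sketched roughly as follows. The keyphrase probabilities, the prior, and the background model below are invented toy values, not the paper's learned model of 1,497 keyphrases:

```python
import math

# Toy topic "model": keyphrase probabilities under the IT topic and under a
# background (non-IT) model. All numbers are illustrative assumptions.
it_model = {"cloud computing": 0.08, "machine learning": 0.07, "database": 0.05}
background = {"cloud computing": 0.01, "machine learning": 0.01, "database": 0.02}
prior_it = 0.2  # assumed prior probability that a paper is IT-related

def it_membership(text):
    """Posterior P(IT | document) from the keyphrases present in the text,
    via a naive Bayes combination of per-phrase likelihoods."""
    text = text.lower()
    log_it = math.log(prior_it)
    log_bg = math.log(1.0 - prior_it)
    for phrase, p in it_model.items():
        if phrase in text:
            log_it += math.log(p)
            log_bg += math.log(background[phrase])
    return 1.0 / (1.0 + math.exp(log_bg - log_it))

def retrieve_it(texts, threshold=0.5):
    """Keep the documents whose IT membership probability is high enough."""
    return [t for t in texts if it_membership(t) >= threshold]
```

A document matching several IT keyphrases accumulates likelihood ratio in favor of the IT class; a document matching none keeps just the prior.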
9

Pireva, Krenare, and Petros Kefalas. "An innovative web application for managing academic documents." International Journal of Business & Technology 1, no. 2 (May 2013): 39–46. http://dx.doi.org/10.33107/ijbte.2013.1.2.04.

Abstract:
Archiving versions of academic documents, with green thinking in mind, was the motivation to develop an innovative tool that organizes academic documents in a centralized database. This paper presents a new web application that aims to move towards a paperless University model for managing the academic documents used within an educational institution, such as course proposals, syllabuses, etc. AcaDocMan, the developed application, can be used by academic staff and quality assurance officers of institutions, who are able not only to manage their course syllabuses but also to generate different consistent document formats for various purposes.
10

OLIVEIRA, Elias, and Delermando BRANQUINHO FILHO. "Automatic classification of journalistic documents on the Internet." Transinformação 29, no. 3 (December 2017): 245–55. http://dx.doi.org/10.1590/2318-08892017000300003.

Abstract:
Online journalism is increasing every day. There are many news agencies, newspapers, and magazines using digital publication in the global network. Documents published online are available to users, who use search engines to find them. In order to deliver documents that are relevant to the search, they must be indexed and classified. Due to the vast number of documents published online every day, a lot of research has been carried out to find ways to facilitate automatic document classification. The objective of the present study is to describe an experimental approach for the automatic classification of journalistic documents published on the Internet using the Vector Space Model for document representation. The model was tested based on a real journalism database, using algorithms that have been widely reported in the literature. This article also describes the metrics used to assess the performance of these algorithms and their required configurations. The results obtained show the efficiency of the method used and justify further research to find ways to facilitate the automatic classification of documents.
11

WESTFECHTEL, BERNHARD. "A GRAPH-BASED SYSTEM FOR MANAGING CONFIGURATIONS OF ENGINEERING DESIGN DOCUMENTS." International Journal of Software Engineering and Knowledge Engineering 06, no. 04 (December 1996): 549–83. http://dx.doi.org/10.1142/s0218194096000235.

Abstract:
Due to increasing complexity of hardware and software systems, configuration management has been receiving more and more attention in nearly all engineering domains (e.g. electrical, mechanical, and software engineering). This observation has driven us to develop a domain-independent and adaptable configuration management model (called CoMa) for managing systems of engineering design documents. The CoMa model integrates composition hierarchies, dependencies, and versions into a coherent framework based on a sparse set of essential configuration management concepts. In order to give a clear and comprehensible specification, the CoMa model is defined in a high-level, multi-paradigm specification language (PROGRES) which combines concepts from various disciplines (database systems, knowledge-based systems, graph rewriting systems, programming languages). Finally, we also present an implementation which conforms to the formal specification and provides graphical, structure-oriented tools offering a bunch of sophisticated commands and operating in a heterogeneous environment.
12

Sorokin, Dmitry Igorevich, Anton Sergeevich Nuzhny, and Elena Alexandrovna Saveleva. "Hierarchical Rubrication of Text Documents." Proceedings of the Institute for System Programming of the RAS 32, no. 6 (2020): 127–36. http://dx.doi.org/10.15514/ispras-2020-32(6)-10.

Abstract:
Topic modeling is an important and widely used method in the analysis of large collections of documents. It allows us to digest the content of documents by examining the selected topics. It has drawbacks, such as the need to specify the number of topics: depending on that number, the topics can become too local or too global. Also, it does not provide a relation between local and global topics. Here we present an algorithm and a computer program for the hierarchical rubrication of text documents. The program solves these problems by creating a hierarchy of automatically selected topics in which local topics are connected to global topics. The program processes PDF documents, splits them into text segments, builds vector representations using the word2vec model, and stores them in a database. The vector embeddings are structured in the form of a hierarchy of automatically constructed categories. Each category is identified by automatically selected keywords. The result is visualized in an interactive map, and traversing the hierarchy of topics is done by zooming the map. An analysis of the constructed hierarchy of categories allows us to evaluate the minimum and maximum depth of the hierarchy, corresponding to the minimum and maximum number of different topics contained in the collection of documents. The program was tested on documents on deep nuclear waste disposal.
13

Yogish, Deepa, T. N. Manjunath, H. K. Yogish, and Ravindra S. Hegadi. "Ranking Top Similar Documents for User Query Based on Normalized Vector Cosine Similarity Model." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4531–34. http://dx.doi.org/10.1166/jctn.2020.9330.

Abstract:
As technology develops, information in fields such as literature, technology, science, and medicine is also growing at a rapid pace. Extracting related documents from a huge collection of documents based on a user query in the digital world is an interesting problem. Document similarity techniques are used in many applications such as text categorization, plagiarism detection, document clustering, information retrieval, machine translation, and question answering systems. Many algorithms have been developed for this purpose that take a document or input query and match it against document databases. This paper proposes a novel approach that vectorizes each document and query with the normalized TF-IDF method and applies the cosine similarity function to extract the top 3 documents for a user query.
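The pipeline described above (normalized TF-IDF vectors plus cosine similarity) can be sketched as follows, using a toy corpus invented for illustration:

```python
import math
from collections import Counter

# Toy corpus; a real system would index a large document database.
docs = [
    "databases store structured data",
    "cosine similarity ranks documents",
    "cats sleep all day",
    "query matching with cosine similarity",
]

def tfidf_vectors(texts):
    """Build length-normalized TF-IDF vectors for a list of texts."""
    tokenized = [t.lower().split() for t in texts]
    n = len(tokenized)
    df = Counter(w for toks in tokenized for w in set(toks))
    idf = {w: math.log(n / df[w]) for w in df}
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        v = {w: (tf[w] / len(toks)) * idf[w] for w in tf}
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        vecs.append({w: x / norm for w, x in v.items()})
    return vecs, idf

def top_k(query, texts, k=3):
    """Return the indices of the k documents most similar to the query."""
    vecs, idf = tfidf_vectors(texts)
    qtf = Counter(query.lower().split())
    qlen = sum(qtf.values())
    q = {w: (c / qlen) * idf.get(w, 0.0) for w, c in qtf.items()}
    qnorm = math.sqrt(sum(x * x for x in q.values())) or 1.0
    q = {w: x / qnorm for w, x in q.items()}
    sims = [sum(q[w] * v.get(w, 0.0) for w in q) for v in vecs]
    return sorted(range(len(texts)), key=lambda i: sims[i], reverse=True)[:k]
```

Because both vectors are unit-normalized, the dot product equals the cosine of the angle between them, so ranking by dot product ranks by cosine similarity.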
14

Haw, Su-Cheng, and Emyliana Soong. "Performance evaluation on structural mapping choices for data-centric XML documents." Indonesian Journal of Electrical Engineering and Computer Science 18, no. 3 (June 1, 2020): 1539. http://dx.doi.org/10.11591/ijeecs.v18.i3.pp1539-1550.

Abstract:
eXtensible Markup Language (XML) has been widely used as the de facto standard for data exchange over the Web. It is crucial to ensure that the data can be mapped correctly into the underlying data storage format, that is, without any loss of information. The two mapping strategies are structural-based and model-based. The structural-based mapping requires the presence of a Document Type Definition (DTD) for schema mapping, while the model-based mapping does not require a DTD or any schema for the mapping purpose. The structural-based mapping is good especially for data-centric data, i.e., data which is structured and can be bound to a certain schema. As such, this paper evaluates and compares the performance of two selected existing structural-based mappings via simulation. The two main evaluations are: (i) storing the XML data into a relational database (RDB), and (ii) querying the XML data from the RDB. The time taken for each respective process is recorded and compared. From the experimental results, it is observed that the s-XML approach outperformed the SAX approach in terms of storage and query evaluations for most of the test cases conducted.
15

Kluska-Nawarecka, S., K. Regulski, M. Krzyżak, G. Leśniak, and M. Gurda. "System of Semantic Integration of Non-Structuralized Documents in Natural Language in the Domain of Metallurgy." Archives of Metallurgy and Materials 58, no. 3 (September 1, 2013): 927–30. http://dx.doi.org/10.2478/amm-2013-0103.

Abstract:
This paper presents assumptions for a system of automatic cataloging and semantic searching of text documents. As an example, a document repository for metals processing technology was used. By using an ontological model, the system provides the user with a new approach to the exploration of database resources: easier and more intuitive information search. In current document storage systems, searching is often based only on keywords and descriptions created manually by the system administrator. The use of text mining methods, especially latent semantic indexing, allows automatic clustering of documents with respect to their content. The result of this clustering is integrated with the ontological model, making navigation through document resources intuitive without requiring the manual creation of directories. Such an approach seems particularly useful when dealing with large repositories of unstructured documents from sources such as the Internet. This situation is very typical when searching for information and knowledge in the area of metallurgy, for example with regard to innovation and non-traditional suppliers of materials and equipment.
16

Palakal, Mathew, Matthew Stephens, Snehasis Mukhopadhyay, Rajeev Raje, and Simon Rhodes. "Identification of Biological Relationships from Text Documents Using Efficient Computational Methods." Journal of Bioinformatics and Computational Biology 01, no. 02 (July 2003): 307–42. http://dx.doi.org/10.1142/s0219720003000137.

Abstract:
The biological literature databases continue to grow rapidly with vital information that is important for conducting sound biomedical research and development. The current practices of manually searching for information and extracting pertinent knowledge are tedious, time-consuming tasks even for motivated biological researchers. Accurate and computationally efficient approaches in discovering relationships between biological objects from text documents are important for biologists to develop biological models. The term "object" refers to any biological entity such as a protein, gene, cell cycle, etc. and relationship refers to any dynamic action one object has on another, e.g. protein inhibiting another protein or one object belonging to another object such as, the cells composing an organ. This paper presents a novel approach to extract relationships between multiple biological objects that are present in a text document. The approach involves object identification, reference resolution, ontology and synonym discovery, and extracting object-object relationships. Hidden Markov Models (HMMs), dictionaries, and N-Gram models are used to set the framework to tackle the complex task of extracting object-object relationships. Experiments were carried out using a corpus of one thousand Medline abstracts. Intermediate results were obtained for the object identification process, synonym discovery, and finally the relationship extraction. For the thousand abstracts, 53 relationships were extracted of which 43 were correct, giving a specificity of 81 percent. These results are promising for multi-object identification and relationship finding from biological documents.
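The dictionary-based flavor of this extraction can be illustrated with a deliberately simplified sketch; the object and verb dictionaries below are toy stand-ins for the paper's HMM, dictionary, and N-gram machinery:

```python
import re

# Toy dictionaries of biological objects and relationship verbs; the paper
# discovers and resolves these with far richer models.
OBJECTS = {"p53", "MDM2", "BRCA1", "cyclin"}
RELATION_VERBS = {"inhibits", "activates", "binds"}

def extract_relations(sentence):
    """Extract (object, relation, object) triples: for each relation verb,
    pair the nearest known object on its left with the first on its right."""
    tokens = re.findall(r"\w+", sentence)
    triples = []
    for i, tok in enumerate(tokens):
        if tok in RELATION_VERBS:
            left = [t for t in tokens[:i] if t in OBJECTS]
            right = [t for t in tokens[i + 1:] if t in OBJECTS]
            if left and right:
                triples.append((left[-1], tok, right[0]))
    return triples
```

Real systems must also handle synonyms, coreference ("it inhibits..."), and ambiguous names, which is where the paper's reference resolution and synonym discovery come in.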
17

Yazidi Alaoui, O., S. Hamdoune, H. Zili, H. Boulassal, M. Wahbi, and O. El Kharki. "CREATING STRATEGIC BUSINESS VALUE FROM BIG DATA ANALYSIS – APPLICATION TELECOM NETWORK DATA AND PLANNING DOCUMENTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W16 (October 1, 2019): 691–95. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w16-691-2019.

Abstract:
Mobile network carriers gather and accumulate a considerable volume of data in their database systems, data that carries geographic information crucial to the growth of the company. This work aimed to develop a prototype called Spatial On-Line Analytic Processing (SOLAP) to carry out multidimensional analysis and to anticipate the extension of the coverage area of radio antennas. To this end, the researchers started by creating a data warehouse that stores the Big Data received from the radio antennas, then performed OLAP (online analytic processing) for multidimensional analysis, represented through a GIS at different scales with a satellite image as topographic background. As a result, this prototype enables carriers to receive continuous reports at different scales (town, city, country) and to identify which BTSs work and perform well, the rate at which they work, and their pitfalls. In the end, it gives a clear image of the future working strategy, respecting urban planning and the digital terrain model (DTM).
18

MOCHIDA, KEISUKE, and MASAKI NAKAGAWA. "SEPARATING FIGURES, MATHEMATICAL FORMULAS AND JAPANESE TEXT FROM FREE HANDWRITING IN MIXED ONLINE DOCUMENTS." International Journal of Pattern Recognition and Artificial Intelligence 18, no. 07 (November 2004): 1173–87. http://dx.doi.org/10.1142/s0218001404003708.

Abstract:
This paper describes a method for separating online handwritten patterns into Japanese text, figures and mathematical formulas. Today, Tablet PCs and electronic whiteboards provide a much larger writing area for pen interfaces than PDAs (Personal Digital Assistants), through which users can easily input text, write mathematical formulas and draw figures on the screen. The fact that these objects can be written with a single pen (marker) without switching the device, mode, software or anything else, and without any writing restrictions such as grids or boxes, is one of the most important benefits of pen interfaces. However, the task of segmenting these objects is challenging. To address this issue, we have applied a probabilistic model employing stroke features, stroke crossings and stroke densities. Further, we partially apply the approach of segmentation by recognition. Although the current recognizer for formulas is not a true recognizer, we have achieved about 81% correct segmentation for all the strokes when applied to our newly prepared database of mixed patterns. The method has been compared with a neural network; the results show that our method is generally better but less effective in distinguishing figures from other components.
19

van der Haak, M., M. Hartmann, R. Haux, P. Schmücker, and R. Brandner. "Electronic Signature for Medical Documents – Integration and Evaluation of a Public Key Infrastructure in Hospitals." Methods of Information in Medicine 41, no. 04 (2002): 321–30. http://dx.doi.org/10.1055/s-0038-1634389.

Abstract:
Objectives: Our objectives were to determine the user-oriented and legal requirements for a Public Key Infrastructure (PKI) for electronic signatures for medical documents, and to translate these requirements into a general model for a signature system. A prototype of this model was then implemented and evaluated in routine clinical use. Methods: Analyses of documents, processes, interviews, observations, and the available literature supplied the foundations for the development of the signature system model. Eight participants of the Department of Dermatology of the Heidelberg University Medical Center evaluated the implemented prototype from December 2000 to January 2001 during the course of an intervention study. By means of questionnaires, interviews, observations and database analyses, the usefulness and user acceptance of the electronic signature and its integration into electronic discharge letters were established. Results: Since the major part of medical documents generated in a hospital are signature-relevant, they will require electronic signatures in the future. A PKI must meet the multitude of responsibilities and security needs required in a hospital. Also, the signature functionality must be integrated directly into the workflow surrounding document creation. The developed signature model, fulfilling user-oriented and legal requirements, was implemented using hardware and software components that conform to the German Signature Law. It was integrated into the existing hospital information system of the Heidelberg University Medical Center. At the end of the intervention study, the electronic signature procedure achieved an average acceptance score of 3.90 (SD = 0.42) on a scale of 1 (very negative attitude) to 5 (very positive attitude). Acceptance of the integration into computer-supported discharge letter writing reached 3.91 (SD = 0.47). On average, the discharge letters were completed 7.18 days earlier.
Conclusion: The electronic signature is indispensable for the further development of electronic patient records. Application-independent hardware and software components, in accordance with the signature law, must be integrated into electronic patient records and provided to certification services using standardized interfaces. Signature-oriented workflow and document management components are essential for user acceptance in routine clinical use.
20

Korach, Zfania Tom, Kenrick D. Cato, Sarah A. Collins, Min Jeoung Kang, Christopher Knaplund, Patricia C. Dykes, Liqin Wang, et al. "Unsupervised Machine Learning of Topics Documented by Nurses about Hospitalized Patients Prior to a Rapid-Response Event." Applied Clinical Informatics 10, no. 05 (October 2019): 952–63. http://dx.doi.org/10.1055/s-0039-3401814.

Abstract:
Background: In the hospital setting, it is crucial to identify patients at risk for deterioration before it fully develops, so providers can respond rapidly to reverse the deterioration. Rapid response (RR) activation criteria include a subjective component ("worried about the patient") that is often documented in nurses' notes and is hard to capture and quantify, hindering active screening for deteriorating patients. Objectives: We used unsupervised machine learning to automatically discover RR event risk/protective factors from unstructured nursing notes. Methods: In this retrospective cohort study, we obtained nursing notes of hospitalized, nonintensive care unit patients, documented from 2015 through 2018, from Partners HealthCare databases. We applied topic modeling to those notes to reveal topics (clusters of associated words) documented by nurses. Two nursing experts named each topic with a representative Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT) concept. We used the concepts along with vital signs and demographics in a time-dependent covariates extended Cox model to identify risk/protective factors for RR event risk. Results: From a total of 776,849 notes of 45,299 patients, we generated 95 stable topics, of which 80 were mapped to 72 distinct SNOMED CT concepts. Compared with a model containing only demographics and vital signs, the latent topics improved the model's predictive ability from a concordance index of 0.657 to 0.720. Thirty topics were found significantly associated with RR event risk at the 0.05 level, and 11 remained significant after Bonferroni correction of the significance level to 6.94E-04, including physical examination (hazard ratio [HR] = 1.07; 95% confidence interval [CI], 1.03–1.12), informing doctor (HR = 1.05; 95% CI, 1.03–1.08), and seizure precautions (HR = 1.08; 95% CI, 1.04–1.12).
Conclusion: Unsupervised machine learning methods can automatically reveal interpretable and informative signals from free text and may support early identification of patients at risk for RR events.
APA, Harvard, Vancouver, ISO, and other styles
21

Arnarsson, Ivar Örn, Otto Frost, Emil Gustavsson, Daniel Stenholm, Mats Jirstrand, and Johan Malmqvist. "Supporting Knowledge Re-Use with Effective Searches of Related Engineering Documents - A Comparison of Search Engine and Natural Language Processing-Based Algorithms." Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 2597–606. http://dx.doi.org/10.1017/dsi.2019.266.

Full text
Abstract:
Abstract Product development companies are collecting data in the form of Engineering Change Requests for logged design issues and Design Guidelines to accumulate best practices. These documents are rich in unstructured data (e.g., free text), and previous research has pointed out that product developers find that current IT systems lack the capability to accurately retrieve relevant documents containing unstructured data. In this research we compare the performance of search engine and natural language processing algorithms for quickly finding related documents in two databases of Engineering Change Request and Design Guideline documents. The aim is to turn hours of manual document searching into seconds by using such algorithms to search for related engineering documents and rank them in order of significance. Domain experts evaluated the results, which show that the applied models found relevant documents with up to 90% accuracy in the cases tested, though accuracy varies with the selected algorithm and the length of the query.
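A classic search-engine-style baseline for this kind of document retrieval is TF-IDF weighting with cosine-similarity ranking. The sketch below is a minimal, hedged illustration — the toy Engineering Change Request texts and the query are invented, and the paper's actual algorithms may differ:

```python
# Rank documents against a free-text query by TF-IDF cosine similarity.
# The four "ECR" texts below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "vibration and noise reported from rear axle under load",
    "paint defects on door panels after oven curing",
    "software update required for infotainment head unit",
    "axle housing redesign to damp vibration at high torque",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

query = "rear axle vibration"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix).ravel()

# Rank documents by similarity to the query, highest first
ranking = scores.argsort()[::-1]
print([documents[i] for i in ranking[:2]])
```

On this toy data the two axle-related documents rank above the unrelated ones, which is the behavior such a baseline is meant to deliver at scale.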
APA, Harvard, Vancouver, ISO, and other styles
22

Mosini, Amanda Cristina, Marcelo Saad, Camilla Casaletti Braghetta, Roberta de Medeiros, Mario Fernando Prieto Peres, and Frederico Camelo Leão. "Neurophysiological, cognitive-behavioral and neurochemical effects in practitioners of transcendental meditation - A literature review." Revista da Associação Médica Brasileira 65, no. 5 (May 2019): 706–13. http://dx.doi.org/10.1590/1806-9282.65.5.706.

Full text
Abstract:
SUMMARY The term meditation can be used in many different ways, according to the technique to which it refers. Transcendental Meditation (TM) is one of these techniques. TM could serve as a model for research on spiritual meditation, unlike meditation techniques based on secular knowledge. The purpose of the present study is to conduct a bibliographic review to organize scientific evidence on the effects of TM on the neurophysiology, neurochemistry, and cognitive and behavioral aspects of its practitioners. To conduct this critical narrative review of the literature, we searched for scientific papers in the PubMed database of the National Center for Biotechnology Information. The keywords used in the search were Transcendental Meditation, Neuroscience of meditation, and Meditation and behavior. We selected 21 papers that analyzed different aspects that could be altered through meditation practice. We concluded that TM has positive, significant, and documentable neurochemical, neurophysiological, and cognitive-behavioral effects. Among the main effects are the reduction of anxiety and stress (due to the reduction of cortisol and norepinephrine levels), an increase in the feeling of pleasure and well-being (due to the increased synthesis and release of dopamine and serotonin), and an influence on memory recall and possible consolidation. Further studies are needed using creative and innovative methodological designs that analyze different neural circuitry and verify the clinical impact on practitioners.
APA, Harvard, Vancouver, ISO, and other styles
23

Dutra, Evelyn De Britto, and Vanessa Cabral Gomes. "Painel de Monitoramento e de Avaliação da Gestão do SUS: um mapeamento das principais fontes de informações públicas de saúde no Brasil com base no modelo sistêmico." Revista Foco 12, no. 3 (October 8, 2019): 04. http://dx.doi.org/10.28950/1981-223x_revistafocoadm/2019.v12i3.710.

Full text
Abstract:
A gestão de serviços de saúde apresenta um contexto desafiador em meio aos diferentes níveis de assistência e estruturas complexas e precisa dispor de uma prática administrativa que otimize os recursos na obtenção de melhores resultados. Essa complexidade pode ser melhor entendida com a abordagem da teoria de sistemas e aplicando técnicas de melhoria para obter os resultados esperados, que de maneira geral representam o atendimento às necessidades de saúde da população. Atualmente no Brasil, existe uma grande quantidade de informações produzidas pelo sistema de saúde que subsidia a criação de indicadores e possibilita o acompanhamento do sistema. Nesse contexto, o objetivo geral desse trabalho é mapear as principais fontes de dados públicos de saúde no Brasil, com base em indicadores Painel de Monitoramento e de Avaliação da Gestão do SUS. Para complementar o painel, buscou-se informações sobre parâmetros para avaliação desses indicadores, o que pode servir de base na compreensão dos dados coletados. Trata-se de uma pesquisa descritiva, realizada por duas etapas: mapeamento das principais fontes de dados online dos indicadores e pesquisa bibliográfica e documental sobre a existência de parâmetros para esses indicadores. O instrumento utilizado na pesquisa, o qual identificou os indicadores do sistema de saúde, é o Painel de Monitoramento e de Avaliação da Gestão do SUS. Dos 17 indicadores selecionados, nove (9) foram encontrados em bases de dados diferentes das apresentadas pelo Painel. Isso mostra a necessidade da revisão periódica da fonte dos dados disponíveis. Em relação aos parâmetros, buscou-se métricas para cada um dos 17 indicadores, sendo identificados 10 indicadores com algum parâmetro oficial para análise. Como já colocado, a inclusão desses parâmetros pode ajudar na avaliação dos indicadores ao ser uma base para comparação dos resultados. 
Health service management presents a challenging context amidst different levels of care and complex structures, and needs an administrative practice that optimizes resources for better results. This complexity can be better understood through systems theory and by applying improvement techniques to obtain the expected results, which, in general, represent meeting the health needs of the population. Currently in Brazil, there is a great amount of information produced by the health system that supports the creation of indicators and enables monitoring of the system. In this context, the aim of this study is to map the main sources of public health data in Brazil, based on indicators of the Monitoring and Evaluation Panel of SUS Management. To complement the panel, information about parameters was sought to evaluate these indicators, which may serve as a basis for understanding the data collected. This is a descriptive study carried out in two steps: mapping the main online data sources of the indicators, and bibliographic and documentary research on the existence of parameters for these indicators. The instrument used in the research, which identified the indicators of the health system, is the SUS Monitoring and Evaluation Panel. The main results show that, of the 17 selected indicators, nine (9) were found in databases other than those presented by the Panel, showing the need for frequent review of the available data sources. Regarding the parameters, metrics were sought for each of the 17 indicators, and 10 indicators were identified with an official parameter for analysis. As mentioned, these official parameters can help in evaluating the indicators by providing a baseline for comparing results.
APA, Harvard, Vancouver, ISO, and other styles
24

Camargo, Tiago Francisco de, Antônio Zanin, Geovanne Dias de Moura, Juliano Corrêa Daleaste, and Citânia Aparecida Pilatti Bortoluzzi. "Influence of organizational complexity on the measurement of the biological assets of the public listed companies of B3." REVISTA AMBIENTE CONTÁBIL - Universidade Federal do Rio Grande do Norte - ISSN 2176-9036 11, no. 1 (November 5, 2018). http://dx.doi.org/10.21680/2176-9036.2019v11n1id15889.

Full text
Abstract:
Purpose: This work aimed to analyze the influence of organizational complexity on the measurement of biological assets in the public companies listed in B3. Methodology: To do so, a descriptive, documentary and quantitative study was carried out, with data obtained through the Economática® database and from the B3 website. The population comprised all companies listed on B3; the sample, however, includes only those companies that disclosed short- or long-term biological assets in their balance sheets. Results: The results show that organizational complexity influences the measurement of biological assets. The multiple regression model indicates that 54% of the variation in the measurement records of the companies' total biological assets can be explained by the variable Log_At total, according to the predictive model (Y = 0.654Xi + 0.281x - 0.115 xi + ei). Regarding the form of measurement, 72% of the sampled companies value their biological assets using discounted cash flow criteria. Contributions of the Study: Considering the normative nature of CPC 29 and its impacts on the criteria for evaluating biological assets and agricultural products, the research produced a predictive model showing that aspects of organizational complexity influence the choice of measurement criterion for total biological assets. This model can be useful for producing information for decision making on the aspects investigated, besides offering an additional theoretical contribution to the advancement of studies on organizational complexity and the measurement criteria for biological assets.
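The regression itself cannot be reproduced from the abstract, but the general procedure it reports — fitting a multiple linear model and reading off its explanatory power (R²) — can be sketched generically. The data, coefficients, and noise level below are invented for illustration only:

```python
# Generic multiple linear regression via ordinary least squares,
# on synthetic data (not the study's variables or results).
import numpy as np

rng = np.random.default_rng(0)

n = 200
X = rng.normal(size=(n, 3))            # three explanatory variables
beta_true = np.array([0.6, 0.3, -0.1])  # arbitrary illustrative coefficients
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Add an intercept column and solve the least-squares problem
X1 = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Coefficient of determination: share of variance the model explains
resid = y - X1 @ beta_hat
r2 = 1 - resid.var() / y.var()
print(beta_hat.round(3), round(r2, 3))
```

An R² of 0.54, as the study reports, would mean a little over half of the variation in the dependent variable is accounted for by the explanatory variables in the model.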
APA, Harvard, Vancouver, ISO, and other styles
25

"Real-Time Remote Healthcare and Telemedicine Application Model using Ontology Enabled Clustering of Biomedical & Clinical Documents." International Journal of Innovative Technology and Exploring Engineering 9, no. 3 (January 10, 2020): 73–79. http://dx.doi.org/10.35940/ijitee.c8066.019320.

Full text
Abstract:
Remote health monitoring has become a hot research topic due to its multi-dimensional benefits to society. This paper is aimed at developing a novel remote health monitoring model through wireless sensor networks to ensure an efficient telemedicine process. The proposed model, Real-time Remote Healthcare and Telemedicine (RRHT), utilizes the concept of model-based design to provide low cost and time savings. First, low-power sensor nodes are placed at specified body points, with facilities to monitor and reduce power consumption at each stage of the designed model. These nodes collect the patient data and transmit them over a wireless medium through the gateway, where the data are combined to form documents/notes. An autonomous optimized routing algorithm is employed at this stage to transmit over efficient wireless paths to the processor at the hospitals or health centers. At the processor, the transmitted patient-data documents are clustered using ontology-enabled clustering models based on chicken swarm optimization (CSO) and genetic chicken swarm optimization (GCSO). The clustered results are compared against the previous patient database to determine changes in health readings. Based on these findings, suitable medication details along with advice on hospital visits are suggested by the decision module and sent to physicians or medical experts for approval and further diagnosis. The performance analysis shows that the proposed RRHT system with GCSO clustering is highly reliable and accurate, with better speed and lower cost. These results also show that RRHT significantly improves healthcare applications through better strategies for clustering patient-data documents.
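The CSO/GCSO optimizers themselves are specialized, but the general shape of the clustering step — vectorizing patient-data documents and grouping similar ones — can be illustrated with a plain k-means stand-in. The texts and cluster count below are assumptions for demonstration only, not the paper's method:

```python
# Cluster toy patient-data "documents" with TF-IDF features and k-means,
# as a simple stand-in for the swarm-optimized clustering described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

patient_docs = [
    "heart rate 120 bpm blood pressure elevated",
    "blood pressure high heart rate rising",
    "glucose level stable insulin dose unchanged",
    "insulin dose adjusted glucose trending up",
]

vec = TfidfVectorizer()
X = vec.fit_transform(patient_docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(X)

# Documents about the same vitals should land in the same cluster
print(labels)
```

In the proposed system, such cluster assignments would then be compared with earlier records to flag changes in health readings.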
APA, Harvard, Vancouver, ISO, and other styles
26

Ortega, Cristina Dotta. "Do princípio monográfico à unidade documentária: exploração dos fundamentos da Catalogação | From the monographic principle to the documentary unit: an exploration of the bases of Cataloguing." Liinc em Revista 7, no. 1 (March 30, 2011). http://dx.doi.org/10.18617/liinc.v7i1.402.

Full text
Abstract:
Resumo Discorre sobre a noção de unidade documentária – unidade informacional mínima, considerada de interesse de um grupo de usuários e passível de representação para a produção de registros de bases de dados – com o fim de explorar os fundamentos da Catalogação. Duas concepções são consideradas: o conceito de obra proposto por Panizzi como parte dos princípios para a produção de catálogos de bibliotecas, depois retomado no modelo Functional Requirements for Bibliographic Records (FRBR); e o conceito de assunto como modo de identificar a unidade intelectual (a partir da unidade física), desenvolvido pela Documentação e aplicado em sistemas de informação científica. Parte da hipótese de que estas concepções se configuram como aproximações histórico-conceituais à noção de unidade documentária constituindo-se, portanto, como pertinentes à sua problematização. Como metodologia, foi realizada abordagem histórico-conceitual das duas concepções citadas e análise sobre sua contribuição atual. Inicialmente, contextualiza-se o tema da Catalogação, tratando de seus objetivos e da terminologia existente sob o ponto de vista dos processos e instrumentos de produção e gestão de bases de dados. Em seguida, apresentam-se alguns dos princípios da Catalogação consolidados por Panizzi na metade do século XIX, e o princípio monográfico proposto por Otlet para a Documentação a partir daqueles, para então discorrer sobre a articulação entre esses princípios e suas aplicações no decorrer do século XX. Observa que o cenário desenhado por essas duas concepções, respectivamente sob o predomínio da comunidade de bibliotecas e dos serviços e redes de informação científica, vem tomando novos contornos desde as últimas décadas. 
Constata que os conceitos de obra e de assunto não se constituem como aspectos auto-exclusivos da atividade de produção e gestão de bases de dados, mas como princípios gerais para a identificação da unidade documentária a partir da qual o registro de informação é construído. Palavras-chave catalogação; representação descritiva; princípio monográfico; unidade documentária; obra; assunto
Abstract This article deals with the notion of documentary unit – a minimum informational unit, considered to be of interest to a group of users and liable to representation for the production of database records – in order to explore the foundations of Cataloguing. Two conceptions have been taken into account: the concept of work proposed by Panizzi as part of the principles for the production of library catalogs, later taken up in the Functional Requirements for Bibliographic Records (FRBR) model; and the concept of subject as a way to identify the intellectual unit (from the physical unit), developed by Documentation and applied in scientific information systems. It starts from the assumption that these conceptions constitute historical-conceptual approaches to the notion of documentary unit and are therefore pertinent to its problematization. As methodology, we carried out a historical-conceptual analysis of the two conceptions mentioned above and examined their present contribution. First, the article contextualizes Cataloguing, dealing with its aims and with the existing terminology from the point of view of the processes and tools of database production and management. Second, it presents some of the principles of Cataloguing consolidated by Panizzi in the middle of the 19th century, and the monographic principle proposed by Otlet for Documentation on the basis of those principles. It then discusses the articulation between these principles and their applications throughout the 20th century.
The article observes that the scenario portrayed by these two conceptions, under the predominance of the library community and of scientific information services and networks respectively, has taken on new contours over the last few decades. It also shows that the concepts of work and subject are not mutually exclusive aspects of database production and management, but general principles for identifying the documentary unit from which the information record is constructed.
Keywords cataloguing; descriptive representation; monographic principle; documentary unit; work; subject
APA, Harvard, Vancouver, ISO, and other styles
27

"Examination of the Efficiency of Algorithms for Increasing the Reliability of Information on Criteria of Harness and the Cost of Processing Electronic Documents." International Journal of Recent Technology and Engineering 8, no. 2S11 (November 2, 2019): 4133–39. http://dx.doi.org/10.35940/ijrte.b1526.0982s1119.

Full text
Abstract:
The task of analyzing the effectiveness of electronic document management systems (EDMS) is formulated, and a methodological basis for optimization is developed according to the criteria of reliability, complexity, and cost of information processing. A technique is proposed for estimating the time of input, transmission, storage, processing, and exchange of documents, and for increasing the reliability of information; it builds on implemented and traditional approaches that use the statistical, logical, semantic, and structural-technological relationships of document elements. Models and algorithms for optimizing the placement of electronic documents (ED) in databases and other associated information systems have been developed. Methods are proposed to minimize the time needed for searching, processing, increasing the reliability of information, and presenting the required document to the user. A computational scheme for solving the optimization problem, based on adaptive stochastic random search, truncated Markov chain modeling, and dynamic programming, has been developed and implemented. The conditions for optimizing the values of the labor-intensiveness coefficients and the cost of information processing algorithms, based on applying linear constraints to their efficiency areas, are studied. Experimental results are obtained for the gain coefficient in information reliability, measured by the maximum rating score. The software package has been implemented, and the values of the efficiency coefficient of algorithms for increasing the reliability of information, based on adaptive random search, segmentation, semantic redundancy, and lexicological synthesis of the structure of electronic documents, have been analyzed.
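The placement models are not spelled out in the abstract, but one classical formulation of "optimizing the placement of electronic documents" — choosing which documents to keep in a fast storage tier under a capacity limit so as to maximize expected retrieval time saved — reduces to a 0/1 knapsack solvable by dynamic programming. The function name and the numbers below are hypothetical illustrations, not the authors' algorithm:

```python
def pick_fast_storage(docs, capacity):
    """0/1 knapsack DP: choose documents for the fast storage tier.

    docs: list of (size_mb, expected_seconds_saved) pairs.
    Returns (total_seconds_saved, chosen_indices).
    """
    n = len(docs)
    best = [0.0] * (capacity + 1)  # best[c] = max saving within capacity c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i, (size, saved) in enumerate(docs):
        # Iterate capacity downward so each document is used at most once
        for c in range(capacity, size - 1, -1):
            if best[c - size] + saved > best[c]:
                best[c] = best[c - size] + saved
                keep[i][c] = True
    # Backtrack to recover which documents were chosen
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= docs[i][0]
    return best[capacity], sorted(chosen)

# Example: 3 documents competing for 10 MB of fast storage (invented numbers)
print(pick_fast_storage([(6, 3.0), (5, 2.5), (4, 2.2)], 10))
```

On this toy instance the DP selects documents 0 and 2 (10 MB exactly, 5.2 s saved), beating the greedier pairing of documents 1 and 2.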
APA, Harvard, Vancouver, ISO, and other styles
28

"Adaptabilidad en el sistema de producción agrícola: Una mirada desde los productos alternativos sostenibles/ Adaptability in the agricultural production system: A look from sustainable alternative products." Revista de Ciencias Sociales, 2020, 308–27. http://dx.doi.org/10.31876/rcs.v26i4.34665.

Full text
Abstract:
Resumen La humanidad desde el principio ha tomado productos agrícolas para satisfacerse sin responsabilidad, aumentando sus necesidades de alimentación y comercialización, por tal motivo la investigación caracteriza la adaptabilidad y el sistema de producción de productos alternativos agrícolas. La implementación de nuevos productos agrícolas siempre ha estado regida por diferentes categorías, en especial por la adaptabilidad y productividad de monocultivos, que son de los sistemas más organizados y productivos. La metodología es de tipo descriptiva documental, se revisaron diferentes bases de datos para sustentar la investigación. Los resultados dan cuenta de que diferentes especializaciones que forman parte de la agronomía, han propiciado en algunos casos implementación de innovaciones técnicas que dificultan elementos clave de los sistemas de producción y su adaptabilidad, como son, el producto, los procesos, el mercado, entre otros; uso de diferentes tipos de modelos agrícolas, con apoyo de la planificación e investigación, puesto que pueden utilizarse para predecir el comportamiento de una planta (manejo de cultivo). Se concluye, que la adaptabilidad de los cultivos agrícolas en zonas afectadas por diferentes factores (cambios climáticos), se basa especialmente en la reorganización de los cultivos de acuerdo a los sistemas productivos adecuados, para obtener productos agrícolas que sean rentables y sostenibles. Abstract Humanity from the beginning has taken agricultural products to satisfy itself without responsibility, increasing its needs for food and marketing, for this reason research characterizes the adaptability and production system of alternative agricultural products. The implementation of new agricultural products has always been governed by different categories, especially by the adaptability and productivity of monocultures, which are among the most organized and productive systems. 
The methodology is of a descriptive documentary type; different databases were reviewed to support the research. The results show that different specializations within agronomy have in some cases led to the implementation of technical innovations that hinder key elements of production systems and their adaptability, such as the product, the processes, and the market, among others, and to the use of different types of agricultural models, supported by planning and research, since these can be used to predict the behavior of a plant (crop management). It is concluded that the adaptability of agricultural crops in areas affected by different factors (climatic changes) is based especially on the reorganization of crops according to adequate production systems, in order to obtain agricultural products that are profitable and sustainable.
APA, Harvard, Vancouver, ISO, and other styles
29

Ibarra-Corona, Mauricio Arturo, and Alexandro Escudero-Nahón. "Metasíntesis sobre la aplicación de principios de Ingeniería de Software en el desarrollo de plataformas de tecnología educativa." Revista Interuniversitaria de Investigación en Tecnología Educativa, June 1, 2021, 62–75. http://dx.doi.org/10.6018/riite.463421.

Full text
Abstract:
Debido a la creciente presencia de la tecnología digital en los entornos educativos formales, las aplicaciones digitales que apoyan los procesos de enseñanza-aprendizaje son cada día más sofisticados y los docentes están tomando parte activa en el diseño de esas aplicaciones. Uno de los aspectos que más ha evolucionado es el desarrollo de software para el diseño de plataformas de tecnología educativa. Este aspecto ha sido motivo de diversas investigaciones científicas, pero no existen publicaciones que den cuenta de cómo han sido considerados los principios de la Ingeniería de Software por parte de los docentes en el desarrollo de software para el diseño de plataformas de tecnología educativa. Para cumplir con lo anterior, se realizó una revisión sistemática de la literatura especializada publicada en los últimos cinco años con el método de investigación documental propio de la metasíntesis. La obtención de información se realizó en las bases de datos científicos Springer Link, Science Direct, ERIC y CONRICyT con la siguiente fórmula: "Software Engineering" AND ("Instructional Design" OR "Educational Technology"). Fueron analizados en total 69 artículos escritos en inglés o español. Tras una interpretación hermenéutica de los resultados, el hallazgo más relevante sugiere que, aunque los principios de Ingeniería de Software sí son contemplados y aplicados por la mayoría de los docentes, existe una brecha entre la teoría y la práctica referente a la tecnología educativa, misma que deriva de la complejidad de empatar la pedagogía con el desarrollo tecnológico. Finalmente, se sugiere el desarrollo de un modelo que facilite aplicar los principios de la Ingeniería de Software en el proceso de diseño de las plataformas, que a su vez facilitaría el proceso de desarrollo de software educativo. 
Due to the growing presence of digital technology in formal educational environments, digital applications that support teaching-learning processes are becoming more sophisticated, and teachers are taking an active part in their design. One of the aspects that has evolved the most is the development of software for the design of educational technology platforms. This aspect has been the subject of various scientific studies, but there are no publications that account for how the principles of software engineering have been considered by teachers in the development of software for the design of educational technology platforms. To address this, a systematic review of the specialized literature published in the last five years was carried out with the documentary research method of meta-synthesis. The information was obtained from the scientific databases Springer Link, Science Direct, ERIC and CONRICyT with the following formula: "Software Engineering" AND ("Instructional Design" OR "Educational Technology"). A total of 69 articles written in English or Spanish were analyzed. After a hermeneutical interpretation of the results, the most relevant finding suggests that, although the principles of software engineering are contemplated and applied by most teachers, there is a gap between theory and practice regarding educational technology, which derives from the complexity of matching pedagogy with technological development. Finally, the development of a model that facilitates the application of software engineering principles in the platform design process is suggested, which in turn would facilitate the educational software development process.
APA, Harvard, Vancouver, ISO, and other styles