Academic literature on the topic 'Document warehouse'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Document warehouse.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Document warehouse"

1

Feki, Jamel, Ines Ben Messaoud, and Gilles Zurfluh. "Building an XML document warehouse." Journal of Decision Systems 22, no. 2 (April 2013): 122–48. http://dx.doi.org/10.1080/12460125.2013.780322.

2

Thilmany, Jean. "Ephemeral Warehouse." Mechanical Engineering 127, no. 09 (September 1, 2005): 30–33. http://dx.doi.org/10.1115/1.2005-sep-1.

Abstract:
This article highlights how companies can archive their 3D CAD files as their software races toward obsolescence. Digital designs, though, are created on software and computers that are outdated soon after they are delivered. Computer files can become hard to retrieve in as little as five years. This is a big problem for the engineering community and, of course, for corporations, government agencies, and organizations that store information digitally; in short, for everyone. Most information today, not just engineering data, is created and stored digitally on computer systems that become outdated sooner than bread goes stale. Companies may also store blueprints or CAD documents as portable document files (PDFs) or as tagged image files (TIFs). These are 2D digital files that can be accessed fairly universally from any computer. Again, much is lost, including geometry, when a 3D file is squashed as flat as a pancake.
3

Azabou, Maha, Ameen Banjar, and Jamel Omar Feki. "Enhancing the Diamond Document Warehouse Model." International Journal of Data Warehousing and Mining 16, no. 4 (October 2020): 1–25. http://dx.doi.org/10.4018/ijdwm.2020100101.

Abstract:
The data warehouse community has paid particular attention to the document warehouse (DocW) paradigm during the last two decades. However, some important issues related to semantics are still pending and therefore need deep research investigation. Indeed, the semantic exploitation of the DocW is not yet mature, even though it represents a main concern for decision-makers. This paper aims to enhance the multidimensional model called the Diamond Document Warehouse Model with semantic aspects; in particular, it suggests semantic OLAP (on-line analytical processing) operators for querying the DocW.
4

Pecoraro, Fabrizio, Daniela Luzi, and Fabrizio L. Ricci. "Developing HL7 CDA-Based Data Warehouse for the Use of Electronic Health Record Data for Secondary Purposes." ACI Open 03, no. 01 (January 2019): e44-e62. http://dx.doi.org/10.1055/s-0039-1688936.

Abstract:
Background The growing availability of clinical and administrative data collected in electronic health records (EHRs) has led researchers and policy makers to implement data warehouses to improve the reuse of EHR data for secondary purposes. This approach can take advantage of a unique source of information that collects data from providers across multiple organizations. Moreover, the development of a data warehouse benefits from the standards adopted to exchange data provided by heterogeneous systems. Objective This article aims to design and implement a conceptual framework that semiautomatically extracts information collected in Health Level 7 Clinical Document Architecture (CDA) documents stored in an EHR and transforms it to be loaded into a target data warehouse. Results The solution adopted in this article supports the integration of the EHR as an operational data store in a data warehouse infrastructure. Moreover, the data structures of EHR clinical documents and the data warehouse modeling schemas are analyzed to define a semiautomatic framework that maps the primitives of the CDA onto the concepts of the dimensional model. The case study successfully tests this approach. Conclusion The proposed solution guarantees data quality by using structured documents already integrated in a large-scale infrastructure, with a timely updated information flow. It ensures data integrity and consistency and has the advantage of being based on a sample size that covers a broad target population. Moreover, the use of CDAs simplifies the definition of extract, transform, and load tools through the adoption of a conceptual framework that loads the information stored in the CDA into the data warehouse.
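The CDA-to-dimensional mapping this abstract describes could be sketched as below. The element names, paths, and the target row layout are illustrative assumptions only; real HL7 CDA documents use the urn:hl7-org:v3 namespace and far richer structures than this toy fragment, and the paper's actual framework is not reproduced here.

```python
import xml.etree.ElementTree as ET

# Minimal CDA-like fragment; element names are simplified for illustration.
CDA_SAMPLE = """
<ClinicalDocument>
  <recordTarget><patient><id>P001</id><gender>F</gender></patient></recordTarget>
  <observation><code>8480-6</code><value unit="mmHg">120</value></observation>
</ClinicalDocument>
"""

def cda_to_fact_row(cda_xml):
    """Map CDA primitives to a flat row suitable for a star-schema load."""
    root = ET.fromstring(cda_xml)
    return {
        "patient_id": root.findtext("./recordTarget/patient/id"),
        "gender": root.findtext("./recordTarget/patient/gender"),
        "obs_code": root.findtext("./observation/code"),
        "obs_value": float(root.findtext("./observation/value")),
        "obs_unit": root.find("./observation/value").get("unit"),
    }

row = cda_to_fact_row(CDA_SAMPLE)
```

A real semiautomatic framework would derive such mappings from the CDA schema rather than hard-coding the paths, but the principle (structured document in, dimensional row out) is the same.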
5

Ben Messaoud, Ines, Abdulrahman A. Alshdadi, and Jamel Feki. "Building a Document-Oriented Warehouse Using NoSQL." International Journal of Operations Research and Information Systems 12, no. 2 (April 2021): 33–54. http://dx.doi.org/10.4018/ijoris.20210401.oa3.

Abstract:
Traditional data warehousing approaches should adapt to take into consideration novel needs and data structures. In this context, NoSQL technology is progressively gaining a place in both research and industry. This paper proposes an approach for building a NoSQL document-oriented warehouse (DocW). The approach comprises two methods, namely 1) the document warehouse builder and 2) the NoSQL converter. The first method generates the DocW schema as a galaxy model, whereas the second translates the generated galaxy into a document-oriented NoSQL model. The translation relies on two types of rules: structure rules and hierarchical rules. Furthermore, to help users understand the textual results of analytical queries on the NoSQL DocW, the authors define two semantic operators, S-Drill-Up and S-Drill-Down, to aggregate or expand the terms of a query. The implementation of the proposals uses MongoDB and Talend. The experiment uses the CLEF-2007 medical collection and two metrics, write-request latency and read-request latency, to evaluate the loading time and the query response time, respectively.
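The galaxy-to-document translation the abstract mentions could be sketched as below, representing the resulting MongoDB-style document as a plain nested dict. The schema, attribute names, and the two rule names in the comments are invented for illustration; they are not the paper's actual structure and hierarchical rules.

```python
# Sketch: translate a galaxy-style fact plus its dimensions into one nested
# document, as a document store such as MongoDB would hold it.
fact = {"doc_id": "d42", "term_count": 318}
dimensions = {
    "Dim_Author": {"name": "J. Feki", "affiliation": "Univ. Sfax"},
    "Dim_Time": {"year": 2007, "month": 6},
}
hierarchy = {"Dim_Time": ["year", "month"]}  # hierarchy levels, coarse to fine

def to_document(fact, dimensions, hierarchy):
    doc = dict(fact)                     # "structure rule": fact attrs at root
    for dim, attrs in dimensions.items():
        levels = hierarchy.get(dim)
        if levels:                       # "hierarchical rule": nest level order
            doc[dim] = {lvl: attrs[lvl] for lvl in levels}
        else:
            doc[dim] = dict(attrs)
    return doc

document = to_document(fact, dimensions, hierarchy)
```

With a real MongoDB instance, `document` would simply be passed to `collection.insert_one(...)`; the interesting work is in deriving the nesting from the galaxy schema.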
6

Kim, Jiyun, and Han-joon Kim. "Multidimensional Text Warehousing for Automated Text Classification." Journal of Information Technology Research 11, no. 2 (April 2018): 168–83. http://dx.doi.org/10.4018/jitr.2018040110.

Abstract:
This article describes how, in the era of big data, a data warehouse is an integrated multidimensional database that provides the basis for the decision making required to establish crucial business strategies. Efficient, effective analysis requires a data organization system that integrates and manages data of various dimensions. However, conventional data warehousing techniques do not consider the various data manipulation operations required for data-mining activities. With the current explosion of text data, much research has examined text (or document) repositories to support text mining and document retrieval. Therefore, this article presents a method of developing a text warehouse that provides a machine-learning-based text classification service. The document is represented as a term-by-concept matrix using a 3rd-order tensor-based textual representation model, which emphasizes the meaning of words occurring in the document. As a result, the proposed text warehouse makes it possible to develop a semantic Naïve Bayes text classifier only by executing appropriate SQL statements.
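The idea of running a Naive Bayes text classifier "only by executing appropriate SQL statements" can be illustrated with SQLite. The table layout and the toy term frequencies below are assumptions for the sketch; the paper's warehouse uses a richer term-by-concept tensor representation.

```python
import sqlite3, math

# Toy term-frequency table: (term, label, freq). Data invented for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tf (term TEXT, label TEXT, freq INTEGER)")
conn.executemany("INSERT INTO tf VALUES (?,?,?)", [
    ("warehouse", "db", 8), ("query", "db", 5), ("tensor", "ml", 6),
    ("classifier", "ml", 7), ("query", "ml", 1),
])

def nb_score(doc_terms, label):
    """Sum of log P(term|label) with add-one smoothing, uniform priors.
    Counts come entirely from SQL aggregates over the warehouse table."""
    (total,) = conn.execute(
        "SELECT SUM(freq) FROM tf WHERE label=?", (label,)).fetchone()
    (vocab,) = conn.execute("SELECT COUNT(DISTINCT term) FROM tf").fetchone()
    score = 0.0
    for t in doc_terms:
        (f,) = conn.execute(
            "SELECT COALESCE(SUM(freq),0) FROM tf WHERE label=? AND term=?",
            (label, t)).fetchone()
        score += math.log((f + 1) / (total + vocab))
    return score

best = max(["db", "ml"], key=lambda l: nb_score(["warehouse", "query"], l))
```

The point of the design choice is that the classifier needs no separate model file: the warehouse's aggregates are the model, so retraining is just reloading the table.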
7

Bouaziz, Senda, Ahlem Nabli, and Faiez Gargouri. "Design a Data Warehouse Schema from Document-Oriented database." Procedia Computer Science 159 (2019): 221–30. http://dx.doi.org/10.1016/j.procs.2019.09.177.

8

Chapoto, Tendayi, and Anthony Q. Q. Aboagye. "African innovations in harnessing farmer assets as collateral." African Journal of Economic and Management Studies 8, no. 1 (March 13, 2017): 66–75. http://dx.doi.org/10.1108/ajems-03-2017-144.

Abstract:
Purpose The purpose of this paper is to document and appraise two innovations by which nontraditional forms of collateral are being used to make smallholder crop and livestock farmers bankable in Ghana and Zimbabwe. Design/methodology/approach The setup and operations of the warehouse receipt system (WRS) in Ghana were evaluated for the extent to which the WRS was meeting crop farmers' expectations and its own objectives. Owners of the WRS, a certified warehouse operator in a big city, and two operators of certified community warehouses in farming communities were interviewed. Two focus group discussions with crop farmers were also held. Information about the setup and operations of the Tawanda Nyambirai Livestock Trust (TNLT) Private Limited in Zimbabwe, and the extent to which it serves the credit needs of livestock farmers, was obtained by telephone from the managing director. Data were gathered in April 2014 and analyzed later. Findings Due to low output, no smallholder farmer targeted by the WRS had been issued a tradable certified warehouse receipt to serve as collateral to potential lenders. Grain aggregators (non-farmers) have aggregated enough grain from farmers to be issued warehouse receipts. Grain farmers report a substantial reduction in post-harvest losses when they lodge farm proceeds with certified community warehouses. For the TNLT, more than 140 farmers had deposited 700 cattle and been issued tradable certificates of deposit within one year of the TNLT's launch, enabling them to obtain revolving credit from one bank. Other benefits and challenges are highlighted. Originality/value Both approaches have the potential to help solve farmers' liquidity constraints.
9

Plonskiy, Vladimir Yurievich, and Tamara Balabekovna Chistyakova. "SYSTEM OF DYNAMIC REDISTRIBUTION OF WAREHOUSE RESOURCES OF INDUSTRIAL ENTERPRISE." Vestnik of Astrakhan State Technical University. Series: Management, computer science and informatics 2020, no. 4 (October 31, 2020): 18–28. http://dx.doi.org/10.24143/2072-9502-2020-4-18-28.

Abstract:
The article addresses the problem of redistributing warehouse resources when working in a virtual organization or when changing the logistics structure of the enterprise. A two-stage model of providing shared access to warehouse resources for the participants of a virtual enterprise is considered. An automated system for managing the redistribution of resources was developed for the stage of disconnecting from a virtual enterprise or closing one's own warehouses. A description is given of the control object, a distributed warehouse system of an industrial enterprise specializing in the production of sheet metal. A formalized description of the technological process for the production of cold-rolled steel, including the stockpiling stages, has been developed. The resource distribution management task is posed, and the functional structure of the software complex is designed. Information, mathematical, and algorithmic support for solving the resource redistribution problem is presented. The information support architecture includes a database containing normative and reference information and the enterprise's document management. The database information is used by the algorithmic support of four functional modules: management of normative and reference information, management of receipts and sales, management of warehouse movements, and management of returnable packaging. The result of the work is a system of dynamic redistribution of warehouse resources, implemented on the 1C:Enterprise platform, which solves the problems of the operational movement of material resources between the enterprise's warehouses, including under the conditions of virtual enterprise operation. Absolute and relative distribution criteria are formed. The first group includes the absolute criteria: volume, quantity, amount, and weight. The second group consists of the relative distribution criteria: quantity/volume, amount/volume, and weight/volume.
Testing the system on criteria from each group showed the operability of all types of support, using the example of the warehouse complex of a metal-rolling enterprise. The flexible, customizable system structure allows the proposed algorithmic support to be extended to other types of resources.
10

Endrawati, Firman Surya, and Widio Putra Perta R. "Perancangan Sistem Akuntansi Persediaan Dan Kartu Gudang Berbasis Komputer Pada Konveksi Tas." Akuntansi dan Manajemen 10, no. 2 (December 1, 2015): 21–27. http://dx.doi.org/10.30630/jam.v10i2.102.

Abstract:
Bags are among the commodities produced by local industries in Indonesia in general, and in West Sumatra in particular. However, the company studied has several weaknesses related to the management and recording of inventory, such as the lack of segregation of duties and responsibilities between sections, inadequate inventory management and record-keeping procedures, and the lack of documents and records supporting inventory-related transactions. This research aims to design a computer-based inventory accounting system and warehouse card to be applied to the sales, purchasing, and material-usage systems, in accordance with the elements of the internal control system. Based on the research conducted, the inventory accounting system was designed in accordance with the elements of the internal control system, and the computer-based warehouse cards are useful for recording the transfer of inventory stored in the warehouse. Besides producing information that is far more accurate than manual recording, the computer-based warehouse card also saves document-processing time and improves the effectiveness of document use.

Dissertations / Theses on the topic "Document warehouse"

1

Kanna, Rajesh. "Managing XML data in a relational warehouse: on query translation, warehouse maintenance, and data staleness." [Gainesville, Fla.]: University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/anp4011/Thesis.PDF.

Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains x, 75 p.; also contains graphics. Vita. Includes bibliographical references (p. 71-74).
2

Bange, Carsten. "Business intelligence aus Kennzahlen und Dokumenten : Integration strukturierter und unstrukturierter Daten in entscheidungsunterstützenden Informationssystemen /." Hamburg : Kovac, 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=012863212&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

3

Hedler, Francielly. "Global warehouse management : a methodology to determine an integrated performance measurement." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAI082/document.

Abstract:
The growing complexity of warehouse operations has led companies to adopt a large number of indicators, making their management increasingly difficult. It may be hard for managers to evaluate the overall performance of logistic systems, including the warehouse, because assessing the interdependence of indicators with distinct objectives is rather complex (e.g. the level of a cost indicator shall decrease, whereas a quality indicator level shall be maximized). This fact could bias the analysis executed by the manager when evaluating global warehouse performance. In this context, this thesis develops a methodology to achieve an integrated warehouse performance measurement. It encompasses four main steps: (i) the development of an analytical model of performance indicators usually used for warehouse management; (ii) the definition of indicator relationships, analytically and statistically; (iii) the aggregation of these indicators in an integrated model; (iv) the proposition of a scale to assess the evolution of warehouse performance over time according to the integrated model results. The methodology is applied to a theoretical warehouse to demonstrate its application. The indicators used to evaluate the warehouse come from the literature, and a database is generated to run the mathematical tools. The Jacobian matrix is used to define indicator relationships analytically, and principal component analysis to aggregate the indicators statistically. The final aggregated model comprises 33 indicators assigned to six different components, which compose the global performance indicator equation by means of the components' weighted average.
A scale is developed for the global performance indicator using an optimization approach to obtain its upper and lower boundaries. The usability of the integrated model is tested in two different warehouse performance situations, and interesting insights about the final warehouse performance are discussed. We therefore conclude that the proposed methodology reaches its objective, providing a decision-support tool so that managers can be more efficient in global warehouse performance management without neglecting important information from the indicators.
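The statistical aggregation step (principal component analysis over standardized indicators, then a variance-weighted average of the retained components) could be sketched as below. The data is synthetic and the choice of two components is arbitrary; the thesis's own weighting scheme and 33-indicator model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic monthly observations of 6 warehouse indicators (cost, quality, ...).
X = rng.normal(size=(48, 6))
Xc = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each indicator

# Principal components via eigen-decomposition of the covariance of the
# standardized data (i.e. approximately the correlation matrix).
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigval)[::-1]                 # sort descending by variance
eigval, eigvec = eigval[order], eigvec[:, order]

k = 2                                            # keep the top-k components
scores = Xc @ eigvec[:, :k]                      # component scores per month
weights = eigval[:k] / eigval[:k].sum()          # variance-explained weights
global_perf = scores @ weights                   # one aggregated value per month
```

Each month then gets a single aggregated performance value, which is what makes a scale with upper and lower bounds (the thesis's final step) possible.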
4

Garcelon, Nicolas. "Problématique des entrepôts de données textuelles : dr Warehouse et la recherche translationnelle sur les maladies rares." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB257/document.

Abstract:
The repurposing of clinical data for research has become widespread with the development of clinical data warehouses. These data warehouses are modeled to integrate and explore structured data linked to thesauri. These data come mainly from devices (biology, genetics, cardiology, etc.) but also from manually filled structured data-entry forms. Care delivery also produces a large amount of textual data from hospital reports (hospitalization, surgery, imaging, anatomic pathology, etc.) and from free-text areas in electronic forms. This mass of data, little used by conventional warehouses, is an indispensable source of information in the context of rare diseases. Indeed, free text makes it possible to describe the clinical picture of a patient with more precision, expressing the absence of signs and uncertainty. Particularly for patients who are still undiagnosed, the doctor describes the patient's medical history outside any nosological framework. This wealth of information makes clinical text a valuable source for translational research. However, this requires appropriate algorithms and tools to enable optimized reuse by doctors and researchers. We present in this thesis the data warehouse centered on the clinical document, which we have modeled, implemented, and evaluated. In three use cases for translational research in the context of rare diseases, we attempted to address the problems inherent in textual data: (i) recruitment of patients through a search engine adapted to textual data (handling negation and family history), (ii) automated phenotyping from textual data, and (iii) diagnostic support by similarity between patients based on phenotyping. We were able to evaluate these methods on the data warehouse of Necker-Enfants Malades created and fed during this thesis, integrating about 490,000 patients and 4 million reports. These methods and algorithms were integrated into the software Dr Warehouse, developed during the thesis and distributed as open source since September 2017.
5

Samuel, John. "Feeding a data warehouse with data coming from web services. A mediation approach for the DaWeS prototype." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22493/document.

Abstract:
The role of the data warehouse in business analytics cannot be overstated for any enterprise, irrespective of its size. But the growing dependence on web services has resulted in a situation where enterprise data is managed by multiple autonomous and heterogeneous service providers. We present our approach and its associated prototype DaWeS [Samuel, 2014; Samuel and Rey, 2014; Samuel et al., 2014], a DAta warehouse fed with data coming from WEb Services, to extract, transform, and store enterprise data from web services and to build performance indicators from the stored enterprise data, hiding from end users the heterogeneity of the numerous underlying web services. Its ETL process is grounded on a mediation approach usually used in data integration. This enables DaWeS (i) to be fully configurable in a declarative manner only (XML, XSLT, SQL, datalog) and (ii) to make part of the warehouse schema dynamic so it can be easily updated. (i) and (ii) allow DaWeS managers to shift from development to administration when they want to connect to new web services or to update the APIs (application programming interfaces) of already connected ones. The aim is to make DaWeS scalable and adaptable, to smoothly face the ever-changing and growing web-services offer. We point out that this also enables DaWeS to be used with the vast majority of actual web service interfaces defined with basic technologies only (HTTP, REST, XML, and JSON) rather than more advanced standards (WSDL, WADL, hRESTS, or SAWSDL), since these standards are not yet widely used to describe real web services. In terms of applications, the aim is to allow a DaWeS administrator to provide small and medium companies with a service to store and query their business data coming from their usage of third-party services, without having to manage their own warehouse. In particular, DaWeS enables the easy design (as SQL queries) of personalized performance indicators.
We present in detail this mediation approach for ETL and the architecture of DaWeS. Besides its industrial purpose, building DaWeS brought forth further scientific challenges, such as the need to optimize the number of web service API operation calls or to handle incomplete information. We propose a bound on the number of calls to web services; this bound is a tool to compare future optimization techniques. We also present a heuristic to handle incomplete information.
6

Khemiri, Rym. "Vers l'OLAP collaboratif pour la recommandation des analyses en ligne personnalisées." Thesis, Lyon 2, 2015. http://www.theses.fr/2015LYO22015/document.

Abstract:
The objective of this thesis is to provide a collaborative approach to the OLAP involving several users, led by an integrated personalization process in decision-making systems in order to help the end user in their analysis process. Whether personalizing the warehouse model, recommending decision queries or recommending navigation paths within the data cubes, the user need an efficient decision-making system that assist him. We were interested in three issues falling within data warehouse and OLAP personalization offering three major contributions. Our contributions are based on a combination of datamining techniques with data warehouses and OLAP technology. Our first contribution is an approach about personalizing dimension hierarchies to obtain new analytical axes semantically richer for the user that can help him to realize new analyzes not provided by the original data warehouse model. Indeed, we relax the constraint of the fixed model of the data warehouse which allows the user to create new relevant analysis axes taking into account both his/her constraints and his/her requirements. Our approach is based on an unsupervised learning method, the constrained k-means. Our goal is then to recommend these new hierarchy levels to other users of the same user community, in the spirit of a collaborative system in which each individual brings his contribution. The second contribution is an interactive approach to help the user to formulate new decision queries to build relevant OLAP cubes based on its past decision queries, allowing it to anticipate its future analysis needs. This approach is based on the extraction of frequent itemsets from a query load associated with one or a set of users belonging to the same actors in a community organization. Our intuition is that the relevance of a decision query is strongly correlated to the usage frequency of the corresponding attributes within a given workload of a user (or group of users). 
Indeed, our approach of decision queries formulation is a collaborative approach because it allows the user to formulate relevant queries, step by step, from the most commonly used attributes by all actors of the user community. Our third contribution is a navigation paths recommendation approach within OLAP cubes. Users are often left to themselves and are not guided in their navigation process. To overcome this problem, we develop a user-centered approach that suggests the user navigation guidance. Indeed, we guide the user to go to the most interesting facts in OLAP cubes telling him the most relevant navigation paths for him. This approach is based on Markov chains that predict the next analysis query from the only current query. This work is part of a collaborative approach because transition probabilities from one query to another in the cuboids lattice (OLAP cube) is calculated by taking into account all analysis queries of all users belonging to the same community. To validate our proposals, we present a support system user-centered decision which comes in two subsystems: (1) content personalization and (2) recommendation of decision queries and navigation paths. We also conducted experiments that showed the effectiveness of our analysis online user centered approaches using quality measures such as recall and precision
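The third contribution's Markov-chain recommender lends itself to a compact illustration. The sketch below is a reconstruction under assumed names (`transition_matrix`, `recommend_next`), not the thesis's implementation: it estimates transition probabilities from the whole community's query sessions, then suggests the most probable follow-up query.

```python
from collections import defaultdict

def transition_matrix(sessions):
    """Estimate P(next query | current query) from the analysis
    sessions of every user in the community."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        probs[cur] = {q: n / total for q, n in nxts.items()}
    return probs

def recommend_next(probs, current_query, k=1):
    """Suggest the k most probable follow-up queries."""
    candidates = probs.get(current_query, {})
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

# Toy community log: each session is a sequence of OLAP query ids.
sessions = [
    ["q1", "q2", "q3"],
    ["q1", "q2", "q4"],
    ["q5", "q2", "q3"],
]
probs = transition_matrix(sessions)
print(recommend_next(probs, "q2"))  # ['q3']
```

Because the counts pool all users' sessions, a query frequently issued after `q2` by any community member is recommended to everyone, which is exactly the collaborative flavor the abstract describes.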
APA, Harvard, Vancouver, ISO, and other styles
7

Tournier, Ronan. "Analyse en ligne (OLAP) de documents." Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00348094.

Full text
Abstract:
Data warehouses and On-Line Analytical Processing (OLAP) systems provide methods and tools for analyzing data drawn from corporate information systems. However, only about 20% of the data in an information system can be analyzed by current OLAP systems. The remaining 80%, made up of documents, stays out of reach of these systems for lack of suitable tools and methods. To address this problem, we propose a multidimensional conceptual model for representing analysis concepts. The model rests on a single concept that models both the subjects and the axes of an analysis. We associate with it a function for aggregating textual data, which gives a synthetic view of the information contained in documents: it summarizes a set of keywords by a smaller, more general set. We introduce a core of elementary operations for specifying multidimensional analyses from the model's concepts and for manipulating them to refine an analysis. We also propose a method for integrating data from documents, describing the phases for designing the multidimensional conceptual schema, analyzing the data sources, and the loading process. Finally, to validate our proposal, we present a prototype.
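The textual aggregation function, which summarizes a set of keywords by a smaller, more general one, can be approximated as follows. This is a minimal sketch assuming a hand-built generalization hierarchy (`HIERARCHY` is illustrative); the thesis's actual function is more elaborate.

```python
from collections import defaultdict

# Hypothetical is-a hierarchy (child keyword -> more general term).
HIERARCHY = {
    "OLAP": "data analysis",
    "data mining": "data analysis",
    "XML": "document formats",
    "PDF": "document formats",
}

def summarize(keywords, min_group=2):
    """Replace any group of >= min_group keywords sharing a parent
    by that parent, yielding a smaller, more general keyword set."""
    groups = defaultdict(list)
    for kw in keywords:
        groups[HIERARCHY.get(kw)].append(kw)
    summary = set()
    for parent, kws in groups.items():
        if parent is not None and len(kws) >= min_group:
            summary.add(parent)   # generalize the whole group
        else:
            summary.update(kws)   # too few siblings: keep as-is
    return summary

print(sorted(summarize({"OLAP", "data mining", "XML"})))
# ['XML', 'data analysis']
```

Used as an OLAP aggregation operator, such a function plays for keywords the role that SUM plays for numeric measures: rolling up a cell produces a shorter, more abstract keyword list instead of a total.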
APA, Harvard, Vancouver, ISO, and other styles
8

Roatis, Alexandra. "Efficient Querying and Analytics of Semantic Web Data." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112218/document.

Full text
Abstract:
The utility and relevance of data lie in the information that can be extracted from them. The high rate of data publication and its increased complexity, for instance the heterogeneous, self-describing Semantic Web data, motivate the interest in efficient techniques for data manipulation. In this thesis we leverage mature relational data management technology for querying Semantic Web data. The first part focuses on query answering over data subject to RDFS constraints, stored in relational data management systems. The implicit information resulting from RDF reasoning is required to correctly answer such queries. We introduce the database fragment of RDF, going beyond the expressive power of previously studied fragments. We devise novel techniques for answering Basic Graph Pattern queries within this fragment, exploring the two established approaches for handling RDF semantics, namely graph saturation and query reformulation. In particular, we consider graph updates within each approach and propose a method for incrementally maintaining the saturation. We experimentally study the performance trade-offs of our techniques, which can be deployed on top of any relational data management engine. The second part of this thesis considers the new requirements for data analytics tools and methods emerging from the development of the Semantic Web. We redesign, from the ground up, core data analytics concepts and tools in the context of RDF data, and propose the first complete formal framework for warehouse-style RDF analytics. Notably, we define analytical schemas tailored to heterogeneous, semantically rich RDF graphs; analytical queries which (beyond relational cubes) allow flexible querying of the data and the schema; and powerful aggregation and OLAP-style operations. Experiments on a fully implemented platform demonstrate the practical interest of our approach.
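Of the two established approaches, graph saturation is the easier to picture: entailment rules are applied to the triple set until a fixpoint is reached. The sketch below handles just two RDFS rules (subclass transitivity and type propagation) with a naive fixpoint; it is illustrative only and is not the thesis's incremental maintenance algorithm.

```python
def saturate(triples):
    """Naive fixpoint over two RDFS entailment rules."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in facts:
            for s2, p2, o2 in facts:
                if s2 == o and p2 == "rdfs:subClassOf":
                    if p == "rdfs:subClassOf":
                        # (s sc o), (o sc o2) => (s sc o2)
                        new.add((s, "rdfs:subClassOf", o2))
                    elif p == "rdf:type":
                        # (s type o), (o sc o2) => (s type o2)
                        new.add((s, "rdf:type", o2))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

g = {
    (":alice", "rdf:type", ":Student"),
    (":Student", "rdfs:subClassOf", ":Person"),
    (":Person", "rdfs:subClassOf", ":Agent"),
}
print((":alice", "rdf:type", ":Agent") in saturate(g))  # True
```

Once saturated, the graph can be queried by plain pattern matching, with no reasoning at query time; the alternative, query reformulation, leaves the graph untouched and instead expands the query.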
APA, Harvard, Vancouver, ISO, and other styles
9

Pérez, Martínez Juan Manuel. "Contextualizing a Data Warehouse with Documents." Doctoral thesis, Universitat Jaume I, 2007. http://hdl.handle.net/10803/10482.

Full text
Abstract:
Current data warehouse technology and OLAP techniques let organizations analyze the structured data they collect in their databases. The circumstances surrounding these data are described in documents, which are typically text-rich. This information about the context of the data recorded in the warehouse is very valuable, since it allows us to interpret the results of historical analyses. For example, a financial crisis reported in an online economics magazine could explain a drop in sales in a given region. However, this contextual information cannot be exploited directly with traditional OLAP tools, mainly because of the unstructured, text-rich nature of the documents that carry it. This thesis presents the contextualized warehouse: a new kind of decision support system that combines data warehouse and information retrieval technologies to integrate an organization's structured and document-based information sources, and to analyze these data under different contexts.
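The core idea, retrieving the text-rich documents that best contextualize a fact, can be hinted at with a toy keyword-overlap ranking. This is a deliberately simplified stand-in for the IR machinery of a contextualized warehouse; all names and the scoring scheme are illustrative.

```python
def contextualize(fact_dims, documents):
    """Rank documents as context for a fact by counting how many of
    the fact's dimension values occur in each document's text."""
    scores = []
    for doc in documents:
        text = doc["text"].lower()
        score = sum(1 for v in fact_dims if v.lower() in text)
        scores.append((score, doc["id"]))
    # Keep only documents with at least one match, best first.
    return [d for s, d in sorted(scores, reverse=True) if s > 0]

docs = [
    {"id": "d1", "text": "Financial crisis hits retail sales in Spain"},
    {"id": "d2", "text": "New warehouse opens in Berlin"},
]
# Dimension values of the fact cell under analysis.
fact = ["Spain", "retail", "2008"]
print(contextualize(fact, docs))  # ['d1']
```

A real contextualized warehouse would use proper IR relevance models rather than substring counts, but the shape is the same: each OLAP cell is paired with a ranked list of explanatory documents.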
APA, Harvard, Vancouver, ISO, and other styles
10

El, Malki Mohammed. "Modélisation NoSQL des entrepôts de données multidimensionnelles massives." Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20139/document.

Full text
Abstract:
Decision support systems occupy a prominent place in companies and large organizations, enabling analyses dedicated to decision making. With the advent of big data, the volume of data to be analyzed reaches critical sizes, challenging conventional data warehousing approaches, whose current solutions are mainly based on R-OLAP databases. With the emergence of major Web platforms such as Google, Facebook, Twitter, and Amazon, many solutions for processing big data have been developed, known as "Not Only SQL" (NoSQL). These new approaches are an interesting avenue for building multidimensional data warehouses capable of handling large volumes of data. Questioning the R-OLAP approach requires revisiting the principles of multidimensional data warehouse modeling. In this manuscript, we propose processes for implementing multidimensional data warehouses with NoSQL models. We define four processes for each of two NoSQL models, a column-oriented model and a document-oriented model, each process favoring a specific kind of processing. Moreover, the NoSQL context complicates the efficient computation of the pre-aggregates that are typically set up in the R-OLAP context (the lattice). We extend our implementation processes to take the construction of the lattice into account in both retained models. As it is difficult to choose a single NoSQL implementation that supports all the applicable treatments effectively, we propose two translation processes: the first concerns intra-model processes, that is, rules for passing from one implementation to another implementation of the same NoSQL logical model, while the second defines the transformation rules from an implementation of one logical model to an implementation of another logical model.
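A document-oriented implantation of a star schema can be sketched by denormalizing each fact into one self-contained document with its dimension attributes nested inside. The function below is an illustrative mapping, not one of the thesis's four processes; all names are hypothetical.

```python
def to_document(fact, dimensions):
    """Nest each referenced dimension member inside the fact record,
    producing one self-contained document per fact (as a
    document store such as MongoDB would hold it)."""
    doc = dict(fact["measures"])
    for dim_name, dim_key in fact["refs"].items():
        doc[dim_name] = dimensions[dim_name][dim_key]
    return doc

# Toy star schema: two dimension tables keyed by surrogate ids.
dimensions = {
    "product": {1: {"name": "laptop", "category": "IT"}},
    "store":   {7: {"city": "Lyon", "country": "France"}},
}
fact = {"measures": {"quantity": 3, "amount": 2400.0},
        "refs": {"product": 1, "store": 7}}

print(to_document(fact, dimensions)["store"]["city"])  # Lyon
```

The trade-off this nesting embodies is the one the thesis studies: embedding dimensions duplicates their attributes across facts but lets each analytical query read a single document, whereas flatter layouts save space at the cost of joins the NoSQL engine handles poorly.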
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Document warehouse"

1

Henson, Ray D. Documents of title under the Uniform commercial code. 2nd ed. Philadelphia, Pa: American Law Institute-American Bar Association Committee on Continuing Professional Education, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Torres, Justo P. The law on negotiable instruments: With Warehouse Receipts Act, documents of title and business forms. 2nd ed. Manila, Philippines: Published & distributed by Rex Book Store, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Torres, Justo P. The law on negotiable instruments: With Warehouse Receipts Act, documents of titles and business forms. Manila, Philippines: Published & distributed by Rex Book Store, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Torres, Justo P. The law on negotiable instruments: With Warehouse Receipts Act, documents of title, and business forms. 5th ed. Sampaloc, Manila: Booksellers, Inc., 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Notes and cases on banking law and negotiable instruments law: Essentials of negotiable instruments law, warehouse receipts law, letters of credit and trust receipts law. 3rd ed. Manila, Philippines: Published and distributed by Rex Book Store, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Aquino, Timoteo B. Notes and cases on banking law and negotiable instruments law: Essentials of negotiable instruments law, warehouse receipts law, letters of credit and trust receipts law. 3rd ed. Manila, Philippines: Published & distributed by Rex Book Store, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Aquino, Timoteo B. Notes and cases on banking law and negotiable instruments law: Essentials of negotiable instruments law, warehouse receipts law, letters of credit and trust receipts law. 3rd ed. Manila, Philippines: Published & distributed by Rex Book Store, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Aquino, Timoteo B. Notes and cases on banking law and negotiable instruments law: Essentials of negotiable instruments law, warehouse receipts law, letters of credit and trust receipts law. 3rd ed. Manila, Philippines: Published and distributed by Rex Book Store, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hart, Frederick M. A student's guide to sales of goods, letters of credit, and documents of title. New York, NY: M. Bender, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Handbook on negotiable instruments: Documents of title and of maritime commerce and common carriers : (including letters of credit, articles 1 to 63, Code of commerce, guaranty and admiralty). Quezon City: AFA Publications, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Document warehouse"

1

Nassis, Vicky, Tharam S. Dillon, Rajugan Rajagopalapillai, and Wenny Rahayu. "An XML Document Warehouse Model." In Database Systems for Advanced Applications, 513–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11733836_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lo, Howard, Che-Chern Lin, Rong-Jyue Fang, Chungping Lee, and Yu-Chen Weng. "Chinese Document Clustering Using Self-Organizing Map-Based on Botanical Document Warehouse." In Lecture Notes in Electrical Engineering, 593–600. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-85437-3_61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ben Messaoud, Ines, Jamel Feki, and Gilles Zurfluh. "A Semi-automatic Approach to Build XML Document Warehouse." In Communications in Computer and Information Science, 347–63. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25840-9_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tebourski, Wafa, Wahiba Ben Abdessalem Karaa, and Henda Ben Ghezela. "Toward Modeling Semiautomatic Data Warehouses." In Mining Multimedia Documents, 35–51. Boca Raton, FL: CRC Press, 2017. http://dx.doi.org/10.1201/9781315399744-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tebourski, Wafa, Wahiba Ben Abdessalem Karaa, and Henda Ben Ghezela. "Toward Modeling Semiautomatic Data Warehouses." In Mining Multimedia Documents, 35–51. Boca Raton, FL: Chapman and Hall/CRC, 2017. http://dx.doi.org/10.1201/b21638-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nassis, Vicky, R. Rajugan, Tharam S. Dillon, and Wenny Rahayu. "Conceptual Design of XML Document Warehouses." In Data Warehousing and Knowledge Discovery, 1–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30076-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pérez, Juan M., Torben Bach Pedersen, Rafael Berlanga, and María J. Aramburu. "IR and OLAP in XML Document Warehouses." In Lecture Notes in Computer Science, 536–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-31865-1_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chevalier, Max, Mohammed El Malki, Arlind Kopliku, Olivier Teste, and Ronan Tournier. "Document-Oriented Data Warehouses: Complex Hierarchies and Summarizability." In Lecture Notes in Electrical Engineering, 671–83. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-1627-1_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nassis, Vicky, R. Rajugan, Tharam S. Dillon, and Wenny Rahayu. "A Systematic Design Approach for XML-View Driven Web Document Warehouses." In Computational Science and Its Applications – ICCSA 2005, 914–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11424826_99.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ferro, Marcio, Rinaldo Lima, and Robson Fidalgo. "Evaluating Redundancy and Partitioning of Geospatial Data in Document-Oriented Data Warehouses." In Big Data Analytics and Knowledge Discovery, 221–35. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27520-4_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Document warehouse"

1

Ben Messaoud, Ines, Refka Ben Ali, and Jamel Feki. "From Document Warehouse to Column-Oriented NoSQL Document Warehouse." In 12th International Conference on Software Technologies. SCITEPRESS - Science and Technology Publications, 2017. http://dx.doi.org/10.5220/0006423500850094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schymik, Gregory, Karen Corral, David Schuff, and Robert St Louis. "Architecting a Dimensional Document Warehouse." In Proceedings of the 40th Annual Hawaii International Conference on System Sciences. IEEE, 2007. http://dx.doi.org/10.1109/hicss.2007.85.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"UNIFICATION OF XML DOCUMENT STRUCTURES FOR DOCUMENT WAREHOUSE (DocW)." In 13th International Conference on Enterprise Information Systems. SciTePress - Science and and Technology Publications, 2011. http://dx.doi.org/10.5220/0003502100850094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

"TOWARDS A MULTI-USER DOCUMENT WAREHOUSE." In 8th International Conference on Web Information Systems and Technologies. SciTePress - Science and and Technology Publications, 2012. http://dx.doi.org/10.5220/0003937801490154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ben Messaoud, Ines, Jamel Feki, and Gilles Zurfluh. "A first step for building a document warehouse: Unification of XML documents." In 2012 Sixth International Conference on Research Challenges in Information Science (RCIS). IEEE, 2012. http://dx.doi.org/10.1109/rcis.2012.6240440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Phan Hieu Ho, Trung Hung Vo, and Ngoc Anh Thi Nguyen. "Data warehouse designing for Vietnamese textual document-based plagiarism detection system." In 2017 International Conference on System Science and Engineering (ICSSE). IEEE, 2017. http://dx.doi.org/10.1109/icsse.2017.8030873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ferro, Marcio, Rogerio Fragoso, and Robson Fidalgo. "Document-Oriented Geospatial Data Warehouse: An Experimental Evaluation of SOLAP Queries." In 2019 IEEE 21st Conference on Business Informatics (CBI). IEEE, 2019. http://dx.doi.org/10.1109/cbi.2019.00013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Challal, Zakia, Wafaa Bala, Hanifa Mokeddem, Kamel Boukhalfa, Omar Boussaid, and Elhadj Benkhelifa. "Document-oriented versus Column-oriented Data Storage for Social Graph Data Warehouse." In 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS). IEEE, 2019. http://dx.doi.org/10.1109/snams.2019.8931718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aiello, Mariateresa. "Self-Storage Cities: A New Typology of (Sub)Urban Enclave." In 2016 ACSA International Conference. ACSA Press, 2016. http://dx.doi.org/10.35483/acsa.intl.2016.23.

Full text
Abstract:
In the periphery, arrays of self-storage facilities are part of the light industrial landscape of warehouses and ex-urban alienation. Within the urban fabric, storage buildings represent both container and camouflage architecture, and are perfect examples of what Professor Crawford calls “background buildings.”1 Self-storage facilities are an architectural typology worthy of study, and not only for their growing impact on the city and suburban sprawl, or for the uncanny ability to mimic other design typologies and adapt to the target market. They can be examined in terms of building type and construction methods. From the economic point of view, storage facilities are compelling: they are a by-product of shopping/goods architecture, consumerism and planned obsolescence. They embody currently popular issues of surplus and clutter/hoarding. The issue of “material excess” becomes an (ex)urban pathology, endemic to a culture of wholesale commerce and warehouse buying experiences. The clutter culture can be mapped and becomes tangible in the form of the “country of storage facilities”, a veritable document of “stuff obesity”. The current rise of self-storage facilities is also a physical reminder of the consequence of changes in social and living conditions. How do we, as architects and urban designers, confront the typology of the self-storage facility and the new urban/exurban enclaves that these commercial containers of space have created? How can we better understand the nature of the singularly camouflaged “housing of stuff” often found in the downtowns of second-tier U.S. cities? The content of these buildings, the “user” if you wish, is constituted entirely of stuff we cannot or do not wish to fit in our homes. What is it that we store, and why?
APA, Harvard, Vancouver, ISO, and other styles
10

Omar, Nizam, Pollyana Notargiacomo Mustaro, Ismar Frango Silveira, and Daniel Arndt Alves. "Multiplatform Distributed Architecture of Learning Content Management System." In InSITE 2005: Informing Science + IT Education Conference. Informing Science Institute, 2005. http://dx.doi.org/10.28945/2911.

Full text
Abstract:
Learning objects are constructed and used by a community over an undefined period of time and formatted as digital entities in diverse document types, to be used, reused, or referenced during a technology-mediated learning process. A Learning Content Management System (LCMS) is needed for their storage and retrieval. Electronic document management, data warehouse, and data mining techniques will be presented. Effective management of a truly large repository of learning objects by a community on a large network needs a system that facilitates the right access to the right document by its content, and not only by title, author, or other usual indexing fields. Learning objects must be findable by their full content, and indexed and customized according to the needs of each user or user group. The main indexing and retrieval techniques will be discussed and a solution presented. Different learning objects can be stored in a common repository, and duplication must be avoided. To fulfill this requirement the system needs to implement smart strategies that can be constructed based on AI techniques. Considering the diversity of users, machines, and operating systems, the LCMS must be platform independent and manage portable resources, thus giving access to any user from any machine. The LCMS must be scalable enough to avoid abrupt changes from small applications to big ones without losing performance. A multiplatform distributed LCMS architecture is presented, composed of an interface server, data server, parser server, index server, and repository server. These servers can run on a single machine or on a cluster of machines according to the needs of the application.
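The full-content indexing and duplicate avoidance the abstract calls for can be sketched with an inverted index plus a content hash. The class below is a toy stand-in for the index and repository servers; all names are illustrative and the real system would be far richer.

```python
import hashlib
from collections import defaultdict

class Repository:
    """Toy learning-object store: full-content inverted index
    plus duplicate detection by content hash."""
    def __init__(self):
        self.docs = {}                    # doc_id -> text
        self.index = defaultdict(set)     # word -> {doc_ids}
        self.hashes = {}                  # content hash -> doc_id

    def add(self, doc_id, text):
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in self.hashes:         # duplicate content: reuse
            return self.hashes[digest]
        self.hashes[digest] = doc_id
        self.docs[doc_id] = text
        for word in set(text.lower().split()):
            self.index[word].add(doc_id)
        return doc_id

    def search(self, word):
        """Retrieve by content, not by title or author fields."""
        return sorted(self.index.get(word.lower(), set()))

repo = Repository()
repo.add("lo1", "Introduction to data warehouses")
repo.add("lo2", "XML document indexing techniques")
dup = repo.add("lo3", "Introduction to data warehouses")
print(repo.search("warehouses"), dup)  # ['lo1'] lo1
```

Splitting `docs`, `index`, and `hashes` across machines mirrors the repository/index/parser server separation the architecture proposes.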
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Document warehouse"

1

Liu, Bing, Ronald E. Jarnagin, Wei Jiang, and Krishnan Gowri. Technical Support Document: The Development of the Advanced Energy Design Guide for Small Warehouse and Self-Storage Buildings. Office of Scientific and Technical Information (OSTI), December 2007. http://dx.doi.org/10.2172/921429.

Full text
APA, Harvard, Vancouver, ISO, and other styles