Academic literature on the topic 'Data warehouse queries'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data warehouse queries.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data warehouse queries"

1

Gupta, Dr S. L., Dr Payal Pahwa, and Ms Sonali Mathur. "CLASSIFICATION OF DATA WAREHOUSE TESTING APPROACHES." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 3, no. 3 (December 2, 2012): 381–86. http://dx.doi.org/10.24297/ijct.v3i3a.2942.

Full text
Abstract:
A data warehouse is a collection of large amounts of data used by management for making strategic decisions. The data in a data warehouse is gathered from heterogeneous sources and then populated and queried for carrying out analysis. The data warehouse design must support the queries for which it is being used. The design is often an iterative process and must be modified a number of times before any model can be stabilized. The design life cycle of any product includes various stages, testing being among the most important. Data warehouse design has received considerable attention, whereas data warehouse testing is only now being explored by various researchers. This paper discusses various categories of testing activities carried out in a data warehouse at different levels.
APA, Harvard, Vancouver, ISO, and other styles
2

Haxhiu, Valdrin. "Decision making based on data analyses using data warehouses." International Journal of Business & Technology 6, no. 3 (May 1, 2018): 1–6. http://dx.doi.org/10.33107/ijbte.2018.6.3.04.

Full text
Abstract:
Data warehouses are collections of several databases whose goal is to help companies and corporations make important decisions about their activities. These decisions are drawn from analyses made on the data within the data warehouse, data that companies and corporations collect on a daily basis from branches that may be located in different cities, regions, states and continents. Data entered into a data warehouse are historical data, representing the part of the data that is important for making decisions. These data undergo a transformation process in order to fit the structure of the objects within the databases in the data warehouse, because the structure of relational databases is not similar to the structure of the (multidimensional) databases within the data warehouse. The former are optimized for daily transactions such as entering, changing, deleting and retrieving data through simple queries; the latter are optimized for retrieving data through multidimensional queries, which enable us to extract important information. This information helps in making important decisions by revealing the company's weak and strong points, so it can invest more in the weak points and strengthen the strong ones, increasing its profits. The goal of this paper is to treat data analyses for decision making over a data warehouse using OLAP (online analytical processing) analysis. For this treatment we used the Analysis Services of the Microsoft SQL Server 2016 platform. We analyzed the data of an IT store with branches in different cities in Kosovo and came to conclusions about some sales trends. This paper emphasizes the role of data warehouses in decision making.
APA, Harvard, Vancouver, ISO, and other styles
3

Atigui, Faten, Franck Ravat, Jiefu Song, Olivier Teste, and Gilles Zurfluh. "Facilitate Effective Decision-Making by Warehousing Reduced Data." International Journal of Decision Support System Technology 7, no. 3 (July 2015): 36–64. http://dx.doi.org/10.4018/ijdsst.2015070103.

Full text
Abstract:
The authors' aim is to provide a solution for multidimensional data warehouse reduction based on analysts' needs, one that specifies an aggregated schema applicable over a period of time and retains only data useful for decision support. Firstly, they describe a conceptual model for multidimensional data warehouses. A multidimensional data warehouse's schema is composed of a set of states, each defined as a star schema composed of one fact and its related dimensions. The derivation between states is carried out through combinations of reduction operators. Secondly, they present a meta-model which allows managing the different states of a multidimensional data warehouse; the definition of reduced and unreduced multidimensional data warehouse schemas can be carried out by instantiating the meta-model. Finally, they describe their experimental assessments and discuss the results. Evaluating their solution implies executing different queries in various contexts: an unreduced single fact table, an unreduced relational star schema, a reduced star schema and a reduced snowflake schema. The authors show that queries are computed more efficiently within a reduced star schema.
APA, Harvard, Vancouver, ISO, and other styles
4

Dehdouh, Khaled, Omar Boussaid, and Fadila Bentayeb. "Big Data Warehouse." International Journal of Decision Support System Technology 12, no. 1 (January 2020): 1–24. http://dx.doi.org/10.4018/ijdsst.2020010101.

Full text
Abstract:
In the Big Data warehouse context, a column-oriented NoSQL database system is considered the storage model best adapted to data warehouses and online analysis. Indeed, NoSQL models make data scaling easy, and the columnar store is suitable for storing and managing massive data, especially for decisional queries. However, column-oriented NoSQL DBMSs do not offer online analysis (OLAP) operators. To build the OLAP cubes corresponding to the analysis contexts, the most common way is to integrate other software such as HIVE or Kylin, which provide a CUBE operator to build data cubes. In that case, however, the cube is built according to the row-oriented approach, which does not fully deliver the benefits of a column-oriented one. In this article, the focus is to define a cube operator called MC-CUBE (MapReduce Columnar CUBE), which builds columnar NoSQL cubes according to the columnar approach while taking into account the non-relational and distributed aspects of data warehouse storage.
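The cube construction sketched in this abstract can be illustrated, in miniature, with a map/reduce-style aggregation over column arrays. This is a toy sketch of the general idea, not the MC-CUBE operator itself; all column names and data are illustrative.

```python
from collections import defaultdict

# Toy column store: each column of the fact table is a separate array,
# mirroring the columnar layout described above (data are illustrative).
city    = ["Paris", "Paris", "Lyon", "Lyon", "Paris"]
product = ["A", "B", "A", "A", "A"]
sales   = [10, 20, 5, 7, 3]

def map_phase(dim_columns, measure):
    """Emit one (dimension-key, measure-value) pair per fact row."""
    for row, value in enumerate(measure):
        yield tuple(col[row] for col in dim_columns), value

def reduce_phase(pairs):
    """Sum the values per key: each key is one cell of the cuboid."""
    cells = defaultdict(int)
    for key, value in pairs:
        cells[key] += value
    return dict(cells)

# The (city, product) group-by: one cuboid of the full CUBE.
cuboid = reduce_phase(map_phase([city, product], sales))
```

A full CUBE operator would repeat the same map/reduce pass for every subset of the dimensions (city only, product only, grand total).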
APA, Harvard, Vancouver, ISO, and other styles
5

Kumar, Amit, and T. V. Vijay Kumar. "Materialized View Selection Using Self-Adaptive Perturbation Operator-Based Particle Swarm Optimization." International Journal of Applied Evolutionary Computation 11, no. 3 (July 2020): 50–67. http://dx.doi.org/10.4018/ijaec.2020070104.

Full text
Abstract:
A data warehouse is a central repository of time-variant and non-volatile data integrated from disparate data sources with the purpose of transforming data into information to support data analysis. Decision support applications access data warehouses to derive information using online analytical processing. The response time of analytical queries against the speedily growing size of the data warehouse is substantially large. View materialization is an effective approach to decrease the response time of analytical queries and expedite the decision-making process in relational implementations of data warehouses. Selecting a suitable subset of views that decreases the response time of analytical queries and also fits within the available storage space for materialization is a crucial research concern in the context of data warehouse design. This problem, referred to as view selection, is shown to be NP-Hard. Swarm intelligence has been widely and successfully used to solve such problems. In this paper, a discrete variant of the particle swarm optimization algorithm, i.e. self-adaptive perturbation operator based particle swarm optimization (SPOPSO), has been adapted to solve the view selection problem. Accordingly, an SPOPSO-based view selection algorithm (SPOPSOVSA) is proposed. SPOPSOVSA selects the Top-K views in a multidimensional lattice framework. Further, the proposed algorithm is shown to perform better than the view selection algorithm HRUA.
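The Top-K view selection setting described here is often introduced through the classic greedy benefit heuristic in the style of HRUA, against which the paper compares. The sketch below assumes a hypothetical four-node lattice with made-up view sizes; it is not the SPOPSO algorithm of the paper.

```python
# Hypothetical 2-dimension lattice: view name -> (size, set of views whose
# queries it can answer). "AB" is the base cuboid; "()" is the grand total.
lattice = {
    "AB": (100, {"AB", "A", "B", "()"}),
    "A":  (40,  {"A", "()"}),
    "B":  (80,  {"B", "()"}),
    "()": (1,   {"()"}),
}

def cost(q, materialized):
    # Cost of answering q = size of the smallest materialized view covering it.
    return min(lattice[m][0] for m in materialized if q in lattice[m][1])

def greedy_select(root, k):
    """Greedy selection: repeatedly materialize the view with the largest
    total benefit (query-cost reduction) until k extra views are chosen."""
    chosen = {root}                       # the base cuboid is always present
    for _ in range(k):
        def benefit(v):
            size, answers = lattice[v]
            return sum(max(0, cost(q, chosen) - size) for q in answers)
        candidates = [v for v in lattice if v not in chosen]
        best = max(candidates, key=benefit)
        if benefit(best) <= 0:
            break
        chosen.add(best)
    return chosen
```

With these sizes, the first pick is "A" (it cheapens two queries by 60 each), illustrating why benefit, not size alone, drives the choice.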
APA, Harvard, Vancouver, ISO, and other styles
6

M Kirmani, Mudasir. "Dimensional Modeling Using Star Schema for Data Warehouse Creation." Oriental journal of computer science and technology 10, no. 04 (October 13, 2017): 745–54. http://dx.doi.org/10.13005/ojcst/10.04.07.

Full text
Abstract:
Data warehouse design requires a radical restructuring of tremendous amounts of information, frequently of questionable or conflicting quality, drawn from various heterogeneous sources. Data warehouse configuration assimilates business knowledge and technological know-how, and the design of the data warehouse requires a deep understanding of the business processes in detail. The principal aim of this research paper is to study and investigate the conversion model for transforming E-R diagrams into a star schema for building data warehouses. Dimensional modelling is a logical design technique used for data warehouses. This paper addresses various potential differences between the two techniques and highlights the advantages as well as the disadvantages of dimensional modelling. Dimensional modelling is one of the popular techniques for databases that are designed with the end-user's queries on a data warehouse in mind. The focus here is on the star schema, which basically comprises a fact table and dimension tables; each fact table further comprises foreign keys of the various dimensions, measures, and degenerate dimensions if any. We also discuss the possibilities of deployment and acceptance of a Conversion Model (CM) to provide the details of the fact table and dimension tables according to local needs. The paper also highlights why dimensional modelling is preferred over E-R modelling when creating a data warehouse.
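The fact-table/dimension-table layout described above can be sketched with an in-memory SQLite database: one fact table holding foreign keys and a measure, joined to its dimensions at query time. All table names and data are illustrative.

```python
import sqlite3

# In-memory star schema: one fact table with foreign keys into two
# dimension tables (names and values are illustrative).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE fact_sales  (
        product_id INTEGER REFERENCES dim_product,
        date_id    INTEGER REFERENCES dim_date,
        amount     REAL            -- the measure
    );
    INSERT INTO dim_product VALUES (1, 'Books'), (2, 'Games');
    INSERT INTO dim_date    VALUES (1, 2023), (2, 2024);
    INSERT INTO fact_sales  VALUES (1, 1, 10.0), (1, 2, 15.0), (2, 2, 7.5);
""")

# A typical star join: aggregate the measure, grouped by dimension attributes.
rows = con.execute("""
    SELECT p.category, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    JOIN dim_date    d USING (date_id)
    GROUP BY p.category, d.year
    ORDER BY p.category, d.year
""").fetchall()
```

The same query against a normalized E-R design would typically need more joins, which is the performance argument for the dimensional approach.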
APA, Harvard, Vancouver, ISO, and other styles
7

Pisano, Valentina Indelli, Michele Risi, and Genoveffa Tortora. "How reduce the View Selection Problem through the CoDe Modeling." Journal on Advances in Theoretical and Applied Informatics 2, no. 2 (December 21, 2016): 19. http://dx.doi.org/10.26729/jadi.v2i2.2090.

Full text
Abstract:
Big Data visualization is not an easy task due to the sheer amount of information contained in data warehouses; the accuracy of data relationships in a representation thus becomes one of the most crucial aspects of business knowledge discovery. CoDe is a tool that allows modeling and visualizing information relationships between data: by processing several queries on a data mart, it generates a visualization of such data. On a large data warehouse, however, the computation of these queries increases the response time with the query complexity. A common approach to speed up data warehousing is to precompute a set of materialized views, store them in the warehouse, and use them to compute the workload queries. The goal and objectives of this paper are to present a new process exploiting the CoDe modeling by determining the minimal number of required OLAP queries, and to mitigate the view selection problem, i.e., selecting the optimal set of materialized views. In particular, the proposed process determines the minimal number of required OLAP queries, creates an ad hoc lattice structure to represent them, and selects on this structure the views to be materialized, taking into account a heuristic based on processing time cost and view storage space. The results of an experiment on a real data warehouse show an improvement in the range of 36-98% with respect to an approach that does not consider materialized views, and of 7% with respect to an approach that exploits them. Moreover, we show how the results are affected by the lattice structure.
APA, Harvard, Vancouver, ISO, and other styles
8

Rado, Ratsimbazafy, and Omar Boussaid. "Multiple Decisional Query Optimization in Big Data Warehouse." International Journal of Data Warehousing and Mining 14, no. 3 (July 2018): 22–43. http://dx.doi.org/10.4018/ijdwm.2018070102.

Full text
Abstract:
The data warehousing (DW) area has always motivated a plethora of hard optimization problems that cannot be solved in polynomial time. These optimization problems become more complex and interesting when it comes to multiple OLAP queries. In this article, the authors explore the potential of a distributed environment for an established data warehouse and database optimization problem: the problem of Multiple Query Optimization (MQO). In traditional DW, materializing views is an optimization technique for solving such problems by storing pre-computed joins or frequently asked queries. In the era of big data, this kind of view materialization is not suitable due to the data size. In this article, the authors tackle the problem of MQO on a distributed DW by using multiple small, easy-to-maintain pieces of shared data. The evaluation shows that, compared to the available default execution engine, the authors' approach consumes on average 20% less memory in the Map-scan task and is 12% faster in the execution of interactive and reporting queries from TPC-DS.
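The core idea behind multiple query optimization, sharing work among queries instead of evaluating each in isolation, can be sketched as a single shared scan feeding several aggregate queries at once. The queries and rows below are made up for illustration; this is not the authors' distributed technique.

```python
# One shared scan over the fact data answering several aggregate queries
# at once, instead of one scan per query (queries and rows are made up).
facts = [("east", 10), ("west", 4), ("east", 7), ("north", 2)]

queries = {
    "total":     lambda region, amount: True,
    "east_only": lambda region, amount: region == "east",
    "small":     lambda region, amount: amount < 5,
}

results = {name: 0 for name in queries}
for region, amount in facts:              # a single pass over the data
    for name, predicate in queries.items():
        if predicate(region, amount):
            results[name] += amount
```

Three queries are answered with one scan of `facts`; done naively, each query would rescan the data.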
APA, Harvard, Vancouver, ISO, and other styles
9

Bimonte, Sandro, Omar Boussaid, Michel Schneider, and Fabien Ruelle. "Design and Implementation of Active Stream Data Warehouses." International Journal of Data Warehousing and Mining 15, no. 2 (April 2019): 1–21. http://dx.doi.org/10.4018/ijdwm.2019040101.

Full text
Abstract:
In the era of Big Data, more and more stream data is available. At the same time, Decision Support System (DSS) tools, such as data warehouses and alert systems, are becoming more and more sophisticated, and conceptual modeling tools are consequently mandatory for successful DSS projects. Formalisms such as UML and ER have been widely used in the context of classical information and data warehouse systems, but they have not yet been investigated for stream data warehouses that deal with alert systems. Therefore, in this article, the authors introduce the notion of Active Stream Data Warehouse (ASDW) and propose a UML profile for designing Active Stream Data Warehouses. Indeed, the article extends the ICSOLAP profile to take into account continuous and window OLAP queries. Moreover, it studies the duality of the stream and OLAP decision-making processes, and the authors propose a set of ECA rules to automatically trigger OLAP operators. The UML profile is implemented in a new OLAP architecture and validated using an environmental case study concerning wind monitoring.
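A window OLAP query of the kind the profile covers can be sketched as a sliding-window aggregate over a stream; the window size and data below are illustrative.

```python
from collections import deque

def windowed_sums(stream, size):
    """Emit the aggregate (here: a sum) of each full sliding window."""
    window, out, total = deque(), [], 0
    for value in stream:
        window.append(value)
        total += value
        if len(window) > size:          # slide: drop the oldest value
            total -= window.popleft()
        if len(window) == size:         # window is full: emit one result
            out.append(total)
    return out

sums = windowed_sums([1, 2, 3, 4, 5], size=3)
```

In an active system, an ECA rule would compare each emitted window aggregate against a threshold to trigger an alert or an OLAP operator.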
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Li. "The Study on Indexing Techniques in Data Warehouse." Key Engineering Materials 439-440 (June 2010): 1505–10. http://dx.doi.org/10.4028/www.scientific.net/kem.439-440.1505.

Full text
Abstract:
Nowadays, data warehouses have become a hot spot in database research. Indexes can potentially speed up a variety of operations in a data warehouse. In this paper, we present several relatively mature index techniques for data warehouses and compare their performance. The paper focuses on the performance evaluation of three data warehouse queries under three different indexing techniques, observing the impact of variable-size data with respect to time and space complexity.
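One of the mature techniques such surveys typically cover is the bitmap index. A minimal sketch, assuming a low-cardinality column, where a disjunctive predicate reduces to bitwise operations instead of a table scan:

```python
# Toy bitmap index on a low-cardinality column: one integer bitmask per
# distinct value, one bit per row (column contents are illustrative).
region = ["east", "west", "east", "north", "west", "east"]

def build_bitmap_index(column):
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)
    return index

def rows_of(mask):
    """Decode a bitmask back into a sorted list of matching row ids."""
    return [r for r in range(mask.bit_length()) if mask >> r & 1]

idx = build_bitmap_index(region)

# WHERE region = 'east' OR region = 'west' becomes a single bitwise OR.
hits = rows_of(idx["east"] | idx["west"])
```

Real systems compress these bitmaps and combine them with other structures (B-trees, join indexes), but the AND/OR-of-bitmasks principle is the same.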
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Data warehouse queries"

1

Cyrus, Sam. "Fast Computation on Processing Data Warehousing Queries on GPU Devices." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6214.

Full text
Abstract:
Current database management systems use Graphics Processing Units (GPUs) as dedicated accelerators to process each individual query, which results in underutilization of the GPU. When a single-query data warehousing workload was run on an open source GPU query engine, the utilization of the main GPU resources was found to be less than 25%. This low utilization leads to low system throughput. To resolve this problem, this paper suggests transferring all of the desired data into the global memory of the GPU and keeping it there until all queries are executed as one batch. The PCIe transfer time from CPU to GPU is minimized, which improves overall query processing performance. The execution time was improved by up to 40% when running multiple queries, compared to dedicated processing.
APA, Harvard, Vancouver, ISO, and other styles
2

Jäcksch, Bernhard [Verfasser]. "A Plan For OLAP: Optimization Of Financial Planning Queries In Data Warehouse Systems / Bernhard Jäcksch." München : Verlag Dr. Hut, 2011. http://d-nb.info/1017353700/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cao, Phuong Thao. "Approximation of OLAP queries on data warehouses." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00905292.

Full text
Abstract:
We study approximate answers to OLAP queries on data warehouses. We consider the relative answers to OLAP queries on a schema, as distributions with the L1 distance, and approximate the answers without storing the entire data warehouse. We first introduce three specific methods: uniform sampling, measure-based sampling, and a statistical model. We also introduce an edit distance between data warehouses, with edit operations adapted to data warehouses. Then, in the OLAP data exchange setting, we study how to sample each source and combine the samples to approximate any OLAP query. We next consider a streaming context, where a data warehouse is built from the streams of different sources. We show a lower bound on the size of the memory necessary to approximate queries; in this case, we approximate OLAP queries with a finite memory. We also describe a method to discover statistical dependencies, a new notion we introduce, searching for them with decision trees. We apply the method to two data warehouses. The first simulates sensor data, providing weather parameters over time and location from different sources; the second is a collection of RSS feeds from web sites on the Internet.
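The first of the three methods, uniform sampling, can be sketched in a few lines: estimate a SUM aggregate by scaling the sample total by the sampling fraction. The data and sampling fraction below are made up for illustration.

```python
import random

# Approximate a SUM aggregate from a uniform sample, scaling by the
# sampling fraction (synthetic measures; fraction is illustrative).
random.seed(0)
measure = [float(i % 97) for i in range(100_000)]   # synthetic fact measures
exact = sum(measure)

fraction = 0.05
sample = random.sample(measure, int(len(measure) * fraction))
estimate = sum(sample) / fraction                    # scale up to the population

relative_error = abs(estimate - exact) / exact
```

The estimator is unbiased, and for a 5% sample of this size the relative error is typically well under a percent; measure-based sampling refines this by weighting rows by their measure values.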
APA, Harvard, Vancouver, ISO, and other styles
4

Brito, Jaqueline Joice. "Processamento de consultas SOLAP drill-across e com junção espacial em data warehouses geográficos." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18022013-090739/.

Full text
Abstract:
A geographic data warehouse (GDW) is a special kind of multidimensional database. It is subject-oriented, integrated, historical, non-volatile and usually organized in levels of aggregation. Furthermore, a GDW also stores spatial data in one or more dimensions or in at least one numerical measure. Aiming at decision support, GDWs allow SOLAP (spatial online analytical processing) queries, i.e., multidimensional analytical queries (e.g., drill-down, roll-up, drill-across) extended with spatial predicates (e.g., intersects, contains, is contained) defined for range and spatial join queries. A challenging issue in the processing of these complex queries is how to efficiently retrieve spatial and conventional data stored in huge GDWs. In the literature, there are few access methods dedicated to indexing GDWs, and none of these methods focus on drill-across and spatial join SOLAP queries. In this master's thesis, we propose novel strategies for processing these complex queries. We introduce two strategies for processing SOLAP drill-across queries (namely, Divide and Unique), define a set of guidelines for the design of a GDW schema that enables the execution of these queries, and determine a set of classes of these queries to be issued over a GDW schema that follows the proposed guidelines. As for the processing of spatial join SOLAP queries, we propose the SJB strategy, identify the characteristics of a GDW schema that enable the execution of these queries, and define the format of these queries. We validated the proposed strategies through performance tests that compared them with star join computation and the use of materialized views. The obtained results showed that our strategies are very efficient. Regarding the SOLAP drill-across queries, the Divide and Unique strategies showed a time reduction ranging from 82.7% to 98.6% with respect to star join computation and the use of materialized views. Regarding the SOLAP spatial join queries, the SJB strategy guaranteed the best results for most of the analyzed queries, with a performance gain ranging from 0.3% to 99.2%.
APA, Harvard, Vancouver, ISO, and other styles
5

Lin, Jing-Tang, and 林景堂. "Efficient Computation of ContinuousAggregation Queries on Data Warehouse." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/09733746068377733588.

Full text
Abstract:
Master's thesis (ROC academic year 95). National Central University, Institute of Computer Science and Information Engineering.
Data warehouses usually store a large amount of historical data, and users' aggregate queries typically consume a large amount of time and system resources in order to analyze it. The response time of these aggregate queries is typically several orders of magnitude higher than the response time of OLTP (Online Transaction Processing) queries, so reducing their response time is becoming increasingly important. The concept of materialized views is well suited to the data warehouse environment. We offer a method to construct a DAG (Directed Acyclic Graph) based on the derivation relationships between these aggregate queries, and then modify the depth-first search algorithm to traverse this DAG. Finally, we find a query execution order with well-improved performance under the space constraint imposed by the data warehouse system.
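A derivability DAG of aggregate queries and its depth-first traversal can be sketched as follows. The query names, data, and edge convention (an edge u → v meaning v is derivable from u's result, so u must execute first) are illustrative assumptions, not the thesis's exact construction.

```python
# Derivability DAG: an edge u -> v means query v can be computed from
# the result of query u (query names are illustrative).
dag = {
    "sum by (city, product)": ["sum by (city)", "sum by (product)"],
    "sum by (city)": ["sum by ()"],
    "sum by (product)": ["sum by ()"],
    "sum by ()": [],
}

def execution_order(dag):
    """Reversed depth-first post-order: every query is placed before the
    coarser queries that can be derived from its result."""
    seen, order = set(), []
    def dfs(node):
        if node in seen:
            return
        seen.add(node)
        for succ in dag[node]:
            dfs(succ)
        order.append(node)          # post-order: successors first
    for node in dag:
        dfs(node)
    return order[::-1]

order = execution_order(dag)
```

Running the fine-grained query first lets each coarser aggregate be computed from a small intermediate result instead of rescanning the warehouse.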
APA, Harvard, Vancouver, ISO, and other styles
6

Chang, W. I., and 張瑋穎. "Using Object-Oriented Method for Complex Queries in Data Warehouse." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/45906341072806286592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gonçalves, Ricardo Jorge Fonseca. "Estabelecimento de planos de consumo energético para queries sobre data warehouses." Master's thesis, 2014. http://hdl.handle.net/1822/37261.

Full text
Abstract:
Master's dissertation in Informatics Engineering.
Nowadays, the concept of "energy efficiency" is the subject of great concern for a significant part of the computing community. One of the areas in which this concern is evident is database management systems, the systems usually responsible for managing access to data and for its manipulation and organization. Indeed, in order to curb their high energy consumption, particularly at the level of data center installations, there has been a gradual increase in investment in research and in the production of low-energy-consumption hardware and software components. A particular case of the use of database management systems are data warehousing systems. These systems are used to support decision-making processes, dealing, as a rule, with large volumes of data and complex queries on a daily basis. Starting from the information provided by database management systems, particularly at the level of query execution plans, this dissertation set out to build a system able to generate energy consumption plans for the queries executed in a typical data warehousing environment, and to demonstrate its technical and practical viability through its application to a concrete case of exploitation of a data warehousing system.
APA, Harvard, Vancouver, ISO, and other styles
8

Hu, Jing. "Optimizing queries using a materialized view in a data warehoue [sic]." 2006. http://digital.library.okstate.edu/etd/umi-okstate-1889.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Fa-Jung, and 吳發榮. "A Recursive Relative Prefix Sum Approach to Range Queries in Data Warehouses." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/21795572008363627752.

Full text
Abstract:
Master's thesis (ROC academic year 90). National Sun Yat-sen University, Department of Computer Science and Engineering.
Data warehouses contain data consolidated from several operational databases and provide the historical, summarized data that is more appropriate for analysis than detailed individual records. On-Line Analytical Processing (OLAP) provides advanced analysis tools to extract information from data stored in a data warehouse; it is designed to provide aggregate information that can be used to analyze the contents of databases and data warehouses. A range query applies an aggregation operation over all selected cells of an OLAP data cube, where the selection is specified by providing ranges of values for numeric dimensions. Range sum queries are very useful in finding trends and in discovering relationships between attributes in the database. One method, the prefix sum method, promises that any range sum query on a data cube can be answered in constant time by precomputing some auxiliary information; however, it is hampered by its update cost. Today's interactive data analysis applications, which provide current or "near current" information, require fast response times and reasonable update times. Since the size of a data cube is exponential in the number of its dimensions, rebuilding the entire data cube can be very costly and is not realistic. To cope with this dynamic data cube problem, several strategies have been proposed. They all use specific data structures, which require extra storage cost, to answer range sum queries fast. For example, the double relative prefix sum method makes use of three components to store auxiliary information: a block prefix array, a relative overlay array and a relative prefix array. Although the double relative prefix sum method improves the update cost, it increases the query time. In this thesis, we present a method, called the recursive relative prefix sum method, which tries to provide a compromise between query and update cost.
In the recursive relative prefix sum method with k levels, we use a relative prefix array and k relative overlay arrays. Our performance study shows that the update cost of our method is always less than that of the prefix sum method and, in most cases, less than that of the relative prefix sum method. Moreover, in most cases, the query cost of our method is less than that of the double relative prefix sum method. Compared with the dynamic data cube method, our method has lower storage cost and shorter query time. Consequently, the recursive relative prefix sum method offers a reasonable response time for ad hoc range queries on the data cube while greatly reducing the update cost. In some applications, however, updates in some regions may happen more frequently than in others. We also provide a solution for this situation, called the weighted relative prefix sum method, which likewise provides a compromise between range sum query cost and update cost when the update probabilities of different regions are considered.
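The baseline prefix sum method this thesis builds on can be sketched in two dimensions: precompute a prefix array once, then answer any range sum with four lookups via inclusion-exclusion. The cube contents below are illustrative.

```python
# Prefix-sum array over a 2-D data cube: any range sum is answered in
# O(1) with four lookups, at the price of expensive updates (changing
# one cell forces updates to all prefix entries below and to its right).
cube = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
rows, cols = len(cube), len(cube[0])

# P[i][j] = sum of cube[0..i-1][0..j-1] (padded with a zero row/column).
P = [[0] * (cols + 1) for _ in range(rows + 1)]
for i in range(rows):
    for j in range(cols):
        P[i + 1][j + 1] = cube[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]

def range_sum(r1, c1, r2, c2):
    """Sum of cube[r1..r2][c1..c2], inclusive, via inclusion-exclusion."""
    return (P[r2 + 1][c2 + 1] - P[r1][c2 + 1]
            - P[r2 + 1][c1] + P[r1][c1])
```

The relative and recursive variants the abstract discusses trade some of this constant query time for much cheaper updates by localizing the prefix information into blocks.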
APA, Harvard, Vancouver, ISO, and other styles
10

Tsai, Main-Che, and 蔡孟哲. "A Design of an Efficient Access Approach for Classifying Operational Data and an Intelligent Materialized Views Pre-fetching Mechanism for Enhancing Summary Queries on Data Warehouses." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/61420195228607149751.

Full text
Abstract:
Master's thesis (ROC academic year 90). Chaoyang University of Technology, Department of Information Management (master's program).
Recently, organizations have mostly focused their investment on new information technologies for quickly capturing correct data and gaining competitive advantage through automated systems that offer more efficient and cost-effective services to the customer. Operational systems were never designed to support such business activities, so using them for decision-making may never be an easy solution. Fortunately, in recent years the potential of data warehousing has come to be seen as a valuable and viable solution. A data warehouse can be embedded into diverse working platforms, and data warehousing improves the productivity of corporate decision-makers by providing access to data that can reveal previously unavailable, unknown, and untapped information. However, from the user's viewpoint, there are two issues associated with a huge-scale data warehouse. One is whether the original data items stored in the data warehouse can satisfy the user's decision-making strategies in fast-changing environments. The other is how decision-makers can quickly access the huge volumes of multidimensional data. The cause-effect relationships among the queried data items in association rules, and a concept-hierarchy tree (CHT) among data items for classification, are proposed for solving these two issues. For the former, the Apriori-Model association algorithm and the Linear Structural Relation Model (LISREL) are proposed to explore the deduced relation combinations for constructing a series of cause-effect association rules. For the latter, array approaches and signature files are individually proposed for developing access methods that transform massive amounts of data into well-characterized classes; the classified data are then integrated into a CHT, an easy but popular data mining tool for classification.
Based on these two approaches, four mechanisms are established in this research for constructing an effective and efficient data warehouse. An Intelligent Materialized Views Pre-fetching mechanism and an examining mechanism that traces CHT paths are established for satisfying the user's requirements while querying the data warehouse, and an indexing mechanism based on arrays or signature files and an intelligent data retrieving mechanism are established for improving the efficiency of data retrieval. Some experiments are conducted to show the practicability and performance of the presented mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
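The query-mining idea in the thesis abstract above — finding attribute sets that co-occur frequently in warehouse queries so that matching materialized views can be pre-fetched — can be sketched with a toy Apriori pass over a query log. The query log, attribute names, and support threshold below are illustrative assumptions, not taken from the thesis:

```python
# Toy log of warehouse queries, each reduced to the set of
# dimension attributes it touches (illustrative data only).
query_log = [
    {"region", "month", "product"},
    {"region", "month"},
    {"region", "product"},
    {"region", "month", "customer"},
    {"month", "product"},
]

def apriori(transactions, min_support):
    """Return every attribute set whose support meets min_support."""
    n = len(transactions)
    frequent = {}
    k = 1
    current = [frozenset([i]) for i in {i for t in transactions for i in t}]
    while current:
        # Count each candidate's support in one pass over the log.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: cnt / n for c, cnt in counts.items()
                 if cnt / n >= min_support}
        frequent.update(level)
        # Level-wise candidate generation: unions of frequent k-sets
        # that are exactly one attribute larger.
        current = list({a | b for a in level for b in level
                        if len(a | b) == k + 1})
        k += 1
    return frequent

views = apriori(query_log, min_support=0.6)
for itemset, support in sorted(views.items(),
                               key=lambda kv: (-kv[1], sorted(kv[0]))):
    print(sorted(itemset), support)
```

With these toy numbers, the frequent pair {region, month} (support 0.6) would be the strongest multi-attribute candidate for a pre-computed view; production miners add subset-based candidate pruning, which the counting step here subsumes for small logs.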

Book chapters on the topic "Data warehouse queries"

1

Gorawski, Marcin, and Rafał Malczok. "Performing Range Aggregate Queries in Stream Data Warehouse." In Man-Machine Interactions, 615–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00563-3_64.

2

Böhnlein, Michael, Achim Ulbrich-vom Ende, and Markus Plaha. "Visual Specification of Multidimensional Queries Based on a Semantic Data Model." In Vom Data Warehouse zum Corporate Knowledge Center, 379–97. Heidelberg: Physica-Verlag HD, 2002. http://dx.doi.org/10.1007/978-3-642-57491-7_22.

3

Li, Ying, Ying Chen, and Fangyan Rao. "The Approach for Data Warehouse to Answering Spatial OLAP Queries." In Intelligent Data Engineering and Automated Learning, 270–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45080-1_36.

4

Bouadi, Tassadit, Marie-Odile Cordier, and René Quiniou. "Computing Hierarchical Skyline Queries “On-the-Fly” in a Data Warehouse." In Data Warehousing and Knowledge Discovery, 146–58. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10160-6_14.

5

Zhang, Ji, Tok Wang Ling, Robert M. Bruckner, and A. Min Tjoa. "Building XML Data Warehouse Based on Frequent Patterns in User Queries." In Data Warehousing and Knowledge Discovery, 99–108. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45228-7_11.

6

Hong, Seokjin, Byoungho Song, and Sukho Lee. "Efficient Execution of Range-Aggregate Queries in Data Warehouse Environments." In Conceptual Modeling — ER 2001, 299–310. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45581-7_23.

7

Kumar, T. V. Vijay, Archana Singh, and Gaurav Dubey. "Mining Queries for Constructing Materialized Views in a Data Warehouse." In Advances in Intelligent Systems and Computing, 149–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-30111-7_15.

8

Tan, Rebecca Boon-Noi, David Taniar, and Guojun Lu. "Efficient Execution of Parallel Aggregate Data Cube Queries in Data Warehouse Environments." In Intelligent Data Engineering and Automated Learning, 709–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45080-1_95.

9

Lopes, Claudivan Cruz, Valéria Cesário Times, Stan Matwin, Ricardo Rodrigues Ciferri, and Cristina Dutra de Aguiar Ciferri. "Processing OLAP Queries over an Encrypted Data Warehouse Stored in the Cloud." In Data Warehousing and Knowledge Discovery, 195–207. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10160-6_18.

10

Costa, João Pedro, and Pedro Furtado. "Data Warehouse Processing Scale-Up for Massive Concurrent Queries with SPIN." In Transactions on Large-Scale Data- and Knowledge-Centered Systems XVII, 1–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46335-2_1.


Conference papers on the topic "Data warehouse queries"

1

Yi, Xun, Russell Paulet, Elisa Bertino, and Guandong Xu. "Private data warehouse queries." In the 18th ACM symposium. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2462410.2462418.

2

Kurunji, Swathi, Tingjian Ge, Benyuan Liu, and Cindy X. Chen. "Communication cost optimization for cloud Data Warehouse queries." In 2012 IEEE 4th International Conference on Cloud Computing Technology and Science (CloudCom). IEEE, 2012. http://dx.doi.org/10.1109/cloudcom.2012.6427580.

3

Ferro, Marcio, Rogerio Fragoso, and Robson Fidalgo. "Document-Oriented Geospatial Data Warehouse: An Experimental Evaluation of SOLAP Queries." In 2019 IEEE 21st Conference on Business Informatics (CBI). IEEE, 2019. http://dx.doi.org/10.1109/cbi.2019.00013.

4

Abdelmadjid, Larbi, and Malki Mimoun. "Queries-based requirements imprecision study for data warehouse update structural approach." In the 8th International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3200842.3200851.

5

Singh, Archana, and Ajay Rana. "Generate frequent queries for Views in a Data Warehouse using Data Mining Techniques." In the 2014 International Conference. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2677855.2677903.

6

"DISTRIBUTED APPROACH OF CONTINUOUS QUERIES WITH KNN JOIN PROCESSING IN SPATIAL DATA WAREHOUSE." In 9th International Conference on Enterprise Information Systems. SciTePress - Science and Technology Publications, 2007. http://dx.doi.org/10.5220/0002368501310136.

7

Wijnhoven, Fons, Edwin van den Belt, Eddy Verbruggen, and Paul van der Vet. "Internal Data Market Services: An Ontology-Based Architecture and Its Evaluation." In 2003 Informing Science + IT Education Conference. Informing Science Institute, 2003. http://dx.doi.org/10.28945/2599.

Abstract:
On information markets, many suppliers and buyers of information goods exchange value. Some of these goods are data, whose value is created in buyer interactions with data sources. These interactions are enabled by data market services (DMS), which give access to one or several data sources. The major problems with creating information value in this context are (1) the quality of information retrievals and the related queries, and (2) the complexity of matching information needs and supplies when source systems and information buyers use different semantics. This study reports on a prototype DMS (called CIRBA) that employs an ontology-based information retrieval system to solve semantic problems for a DMS. Its quality is tested in an experiment assessing it, from a user perspective, against a traditional data warehouse (SQL-based) solution. The CIRBA solution gave substantially higher user satisfaction than the data warehouse alternative.
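The semantic-matching problem this abstract describes — buyers and data sources using different vocabularies for the same concepts — is commonly handled by resolving a user's term to a concept in a shared ontology, then to the column each source actually exposes, before any query is issued. A minimal sketch of that idea (the mini-ontology, term lists, and schema names are hypothetical, not CIRBA's actual design):

```python
# Hypothetical mini-ontology: each concept lists the synonyms buyers
# may use and the column name each source exposes for that concept.
ONTOLOGY = {
    "revenue": {"synonyms": {"revenue", "turnover", "sales"},
                "columns": {"erp": "total_sales", "crm": "rev_amount"}},
    "client":  {"synonyms": {"client", "customer", "account"},
                "columns": {"erp": "cust_id", "crm": "client_id"}},
}

def resolve(term, source):
    """Map a buyer's term to the column name used by a given source."""
    t = term.lower()
    for concept, entry in ONTOLOGY.items():
        if t in entry["synonyms"]:
            return entry["columns"][source]
    raise KeyError(f"no concept covers the term {term!r}")

# A buyer asking for "turnover" per "customer" from the ERP source is
# rewritten into that source's own schema before the query is issued.
cols = [resolve("turnover", "erp"), resolve("customer", "erp")]
print(cols)  # ['total_sales', 'cust_id']
```

The same two user terms resolve to different column names per source, which is exactly the translation step a SQL-only warehouse front end forces users to perform by hand.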
8

Hammouche, Djamila, Mourad Loukam, Karim Atif, and Khaled Walid Hidouci. "Fuzzy MDX queries for taking into account the ambiguity in querying the baccalaureate data warehouse." In 2017 4th International Conference on Control, Decision and Information Technologies (CoDIT). IEEE, 2017. http://dx.doi.org/10.1109/codit.2017.8102587.

9

De Rougemont, Michel, and Phuong Thao Cao. "Approximate answers to OLAP queries on streaming data warehouses." In the fifteenth international workshop. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2390045.2390065.

10

García-García, Javier, and Carlos Ordonez. "Consistency-aware evaluation of OLAP queries in replicated data warehouses." In Proceeding of the ACM twelfth international workshop. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1651291.1651305.
