Dissertations / Theses on the topic 'OLAP technology'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 34 dissertations / theses for your research on the topic 'OLAP technology.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Chui, Chun-kit, and 崔俊傑. "OLAP on sequence data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45823996.
Rehman, Nafees Ur [Verfasser]. "Extending the OLAP Technology for Social Media Analysis / Nafees Ur Rehman." Konstanz : Bibliothek der Universität Konstanz, 2015. http://d-nb.info/1079478485/34.
Zhao, Hongyan. "A visualization tool to support Online Analytical Processing." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000622.
Fu, Lixin. "CubiST++ a new approach to improving the performance of ad-hoc cube queries /." [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/ank7110/masterfinal0.pdf.
Title from first page of PDF file. Document formatted into pages; contains x, 100 p.; also contains graphics. Vita. Includes bibliographical references (p. 95-99).
Bell, Daniel M. "An evaluative case report of the group decision manager : a look at the communication and coordination issues facing online group facilitation /." free to MU campus, to others for purchase, 1998. http://wwwlib.umi.com/cr/mo/fullcit?p9901215.
Ferreira, André Luiz Nascente. "GESTÃO DO PROCESSO DE RELIGAÇÃO DE ÁGUA TRATADA NA CIDADE DE GOIÂNIA, UTILIZANDO OLAP E DATA WAREHOUSE." [Management of the treated-water reconnection process in the city of Goiânia using OLAP and a data warehouse]. Pontifícia Universidade Católica de Goiás, 2013. http://localhost:8080/tede/handle/tede/2436.
Sanitation minimizes economic, social, and public health problems, resulting in a considerable improvement in quality of life. Companies in this line of work can improve their results by establishing quality in the management of their internal processes. One specific process in these companies deals with reconnections of treated water, which usually become necessary after supply has been cut off. This work discusses Information Technology (IT) concepts with the intent of using them to improve the management of this process, covering specific theories of Software Engineering, Data Warehouse, and OLAP. Overall, the work presents the development and implementation of an OLAP tool to support control over the time spent performing treated-water reconnections in the city of Goiânia, state of Goiás, and analyzes the results obtained with this tool, comparing scenarios before and after its implementation. According to the analysis of the results, improvements in reconnection times were observed after the implementation of the developed tool.
Баглай, Роман Олегович. "Інформаційна архітектура банку на основі хмарних технологій." [Bank information architecture based on cloud technologies]. Thesis, Національний технічний університет "Харківський політехнічний інститут", 2019. http://repository.kpi.kharkov.ua/handle/KhPI-Press/43523.
Dissertation for the degree of Candidate of Technical Sciences (PhD), specialty 05.13.06 – information technologies (122 – computer science). – National Technical University «Kharkiv Polytechnic Institute», Kharkiv, 2019. The research object is the processes of automated management of data flows in the bank's information architecture based on cloud technologies. The research subject is models, methods, and information technologies for optimizing banking information processing on cloud infrastructure. The dissertation is devoted to the topical scientific and applied problem of increasing the efficiency of information processing in the bank's end-of-day procedure by modernizing the bank's information architecture through the introduction of cloud technologies. It analyzes the feasibility of implementing cloud technologies to support the activities of banking institutions and the functioning of their business processes. The problems and advantages of cloud technologies at different levels of the bank's architectural landscape are considered, taking into account the specific regulatory requirements imposed on a financial institution. The introduction substantiates the relevance of the dissertation topic, indicates the relationship of the work to scientific programs, states the purpose and objectives of the study, identifies the object, subject, and methods of research, presents the scientific novelty and practical significance of the results, and provides information on their practical use, validation, and coverage in publications. The first chapter analyzes the basic approaches to banking information management and the promising areas for applying cloud technologies to banking information systems. In particular, the object of study was decomposed into the components "information architecture", "cloud technologies", "cloud computing", and "banking information system (IS)" in order to apply the methods of analysis and synthesis. The problems and benefits of using cloud technologies in Ukrainian banking institutions remain under-researched. Banks, which are not professional IT companies, are forced to invest in and maintain a significant amount of IT infrastructure and staff to run their own business processes; in such a situation, cloud technologies help reduce costs and increase the efficiency of banking information systems. The second chapter explores information technology for minimizing the security threats of cloud technologies for automated banking systems by using single sign-on mechanisms to ensure strong user authentication. The mechanisms for implementing such authentication and their practical application to securing and increasing the efficiency of bank business processes are investigated. Proposals are made on the criteria for choosing a provider of identity and access management as a service, single sign-on mechanisms, and federated access scenarios to ensure strong authentication of banking IS users. The author improves the method of assessing bank information security threats during the implementation of cloud technologies, based on a qualitative analysis of risk probability and loss volume using the MITRE standard classification of cyber attacks, which makes it possible to optimize the mechanisms protecting the bank's information architecture against potential cyber attacks.
The third chapter presents cloud-based IT solutions designed for the banking system that can transfer large computational loads to the cloud environment while ensuring compliance with the General Data Protection Regulation (GDPR) and national regulators. Anonymization of customer data is described as a way to avoid the risks associated with the confidentiality of customer data and the need for customer consent to placing personal data in a cloud environment. An improved information technology for replicating banking IS data, based on mechanisms of customer data depersonification, makes it possible to strengthen the protection of data confidentiality and to fulfill the NBU requirement that personalized banking client data be localized on servers physically located in the territory of Ukraine. The developed solution architecture combines real-time data processing and batch data uploads: unlike the traditional way of using data, the data is not only migrated to a database deployed on cloud infrastructure but also replicated back to the on-premise infrastructure. Security requirements, governed by the standards of confidentiality, integrity, and availability of data, are fully met by the relevant cloud technologies. The author developed a mathematical model of the process of closing a bank's operating day and solved the problem of optimizing the time and cost of information processing for banking ISs deployed in a cloud environment, which made it possible to determine the optimal configuration of cloud services in the bank's information architecture based on AWS services.
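The depersonification step described in this abstract can be pictured with a minimal, generic Python sketch (not the dissertation's actual solution; the key, field names, and values are invented): customer identifiers are replaced by keyed hashes before records are replicated to a cloud database, so the cloud copy carries no directly identifying data while rows remain joinable on the pseudonym.

import hashlib
import hmac

# Secret key kept on-premise; the cloud side never sees it (illustrative value only).
PSEUDONYM_KEY = b"on-premise-secret"

def pseudonymize(customer_id: str) -> str:
    """Replace a customer identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

def prepare_for_cloud(record: dict) -> dict:
    """Strip direct identifiers and keep only pseudonymous, analysable fields."""
    return {
        "customer_ref": pseudonymize(record["customer_id"]),
        "balance": record["balance"],
        "currency": record["currency"],
    }

row = {"customer_id": "UA-000123", "name": "O. Petrenko", "balance": 1500.0, "currency": "UAH"}
print(prepare_for_cloud(row))  # name and raw customer_id never leave the on-premise side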
Kamath, Akash S. "An efficient algorithm for caching online analytical processing objects in a distributed environment." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174678903.
Баглай, Роман Олегович. "Інформаційна архітектура банку на основі хмарних технологій." [Bank information architecture based on cloud technologies]. Thesis, Національний технічний університет "Харківський політехнічний інститут", 2019. http://repository.kpi.kharkov.ua/handle/KhPI-Press/43520.
Research for a PhD degree in specialty 05.13.06 – information technologies. – Kyiv National University of Trade and Economics, Kyiv, 2019. The dissertation conducts a feasibility study for the implementation of cloud technologies to support the activities of banking institutions and the functioning of their business processes. The problems and advantages of cloud technologies at different levels of the bank's architectural landscape are investigated, taking into account the specifics of the regulatory environment of a financial institution. The purpose of the dissertation is to increase the efficiency of information processing within the end-of-day procedure of the core banking system by modernizing the bank's information architecture through the implementation of cloud technologies. Modern approaches to managing the IT security of banking institutions in order to minimize threats, including those created by cloud technologies, are considered, and a modern approach to building systems with IT security mechanisms is proposed. An analysis of information technology security threats arising from the implementation of cloud computing has been conducted to ensure the smooth and efficient operation of banking institutions, and measures have been proposed to minimize these threats. Proofs of concept for the results of the study were incorporated into relevant projects driven by the challenges and trends of the banking sector and by market and regulatory changes.
Norguet, Jean-Pierre. "Semantic analysis in web usage mining." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210890.
Indeed, according to organization theory, the higher levels of an organization need summarized and conceptual information to take fast, high-level, and effective decisions. For Web sites, these levels include the organization managers and the Web site chief editors. At these levels, the results produced by Web analytics tools are mostly useless, since most of them target Web designers and Web developers. Summary reports such as the number of visitors and the number of page views can be of some interest to the organization manager, but these results are poor. Finally, page-group and directory hits give the Web site chief editor conceptual results, but these are limited by several problems such as page synonymy (several pages contain the same topic), page polysemy (a page contains several topics), page temporality, and page volatility.
Web usage mining research projects on their part have mostly left aside Web analytics and its limitations and have focused on other research paths. Examples of these paths are usage pattern analysis, personalization, system improvement, site structure modification, marketing business intelligence, and usage characterization. A potential contribution to Web analytics can be found in research about reverse clustering analysis, a technique based on self-organizing feature maps. This technique integrates Web usage mining and Web content mining in order to rank the Web site pages according to an original popularity score. However, the algorithm is not scalable and does not answer the page-polysemy, page-synonymy, page-temporality, and page-volatility problems. As a consequence, these approaches fail at delivering summarized and conceptual results.
An interesting attempt to obtain such results has been the Information Scent algorithm, which produces a list of term vectors representing the visitors' needs. These vectors provide a semantic representation of the visitors' needs and can be easily interpreted. Unfortunately, the results suffer from term polysemy and term synonymy, are visit-centric rather than site-centric, and are not scalable to produce. Finally, according to a recent survey, no Web usage mining research project has proposed a satisfying solution to provide site-wide summarized and conceptual audience metrics.
In this dissertation, we present our solution to answer the need for summarized and conceptual audience metrics in Web analytics. We first describe several methods for mining the Web pages output by Web servers. These methods include content journaling, script parsing, server monitoring, network monitoring, and client-side mining. These techniques can be used alone or in combination to mine the Web pages output by any Web site. Then, the occurrences of taxonomy terms in these pages can be aggregated to provide concept-based audience metrics. To evaluate the results, we implement a prototype and run a number of test cases with real Web sites.
According to the first experiments with our prototype and SQL Server OLAP Analysis Services, concept-based metrics prove extremely summarized and much more intuitive than page-based metrics. As a consequence, concept-based metrics can be exploited at higher levels in the organization. For example, organization managers can redefine the organization strategy according to the visitors' interests. Concept-based metrics also give an intuitive view of the messages delivered through the Web site and allow the Web site communication to be adapted to the organization's objectives. The Web site chief editor, in turn, can interpret the metrics to redefine the publishing orders and the sub-editors' writing tasks. As decisions at higher levels in the organization should be more effective, concept-based metrics should contribute significantly to Web usage mining and Web analytics.
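As an illustration of the aggregation idea in this abstract (a toy sketch, not the thesis's prototype; the taxonomy, pages, and view counts are invented), occurrences of taxonomy terms in the pages output by a Web site can be counted and weighted by page views to obtain concept-based audience metrics:

import re
from collections import Counter

# Toy taxonomy: concept -> indicative terms (invented for illustration).
taxonomy = {
    "pricing": {"price", "discount", "offer"},
    "support": {"help", "faq", "contact"},
}

# Pages output by the Web server, with their text and number of views.
pages = [
    {"url": "/offers", "text": "Spring discount: every price reduced, limited offer.", "views": 120},
    {"url": "/faq", "text": "Need help? Read the FAQ or contact support.", "views": 45},
]

def concept_metrics(pages, taxonomy):
    # Sum term occurrences per concept, weighting each page by its view count.
    metrics = Counter()
    for page in pages:
        tokens = re.findall(r"[a-z]+", page["text"].lower())
        for concept, terms in taxonomy.items():
            hits = sum(1 for t in tokens if t in terms)
            metrics[concept] += hits * page["views"]
    return metrics

print(concept_metrics(pages, taxonomy))  # Counter({'pricing': 360, 'support': 135})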
Doctorat en sciences appliquées (Doctorate in Applied Sciences)
Malinowski, Gajda Elzbieta. "Designing conventional, spatial, and temporal data warehouses: concepts and methodological framework." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210837.
A data warehouse (DW) is a database that stores the high volumes of historical data required for analytical purposes. This data is extracted from operational databases, transformed into a coherent whole, and loaded into the DW during the extraction-transformation-loading (ETL) process.
DW data can be dynamically manipulated using on-line analytical processing (OLAP) systems. DW and OLAP systems rely on a multidimensional model that includes measures, dimensions, and hierarchies. Measures are usually numeric additive values used for the quantitative evaluation of different aspects of an organization. Dimensions provide different analysis perspectives, while hierarchies allow measures to be analyzed at different levels of detail.
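To make the vocabulary of measures, dimensions, and hierarchies concrete, here is a minimal Python sketch with toy data (not taken from the thesis) that rolls an additive sales measure up a store-city-country hierarchy, the kind of manipulation an OLAP system performs on a cube:

from collections import defaultdict

# Fact table: one sales measure, dimensioned by store and month (toy data).
facts = [
    {"store": "S1", "month": "2024-01", "sales": 100.0},
    {"store": "S2", "month": "2024-01", "sales": 250.0},
    {"store": "S3", "month": "2024-02", "sales": 80.0},
]

# Store dimension with a store -> city -> country hierarchy.
store_dim = {
    "S1": {"city": "Lyon", "country": "France"},
    "S2": {"city": "Paris", "country": "France"},
    "S3": {"city": "Geneva", "country": "Switzerland"},
}

def roll_up(facts, level):
    """Aggregate the additive sales measure at a chosen level of the store hierarchy."""
    totals = defaultdict(float)
    for row in facts:
        key = row["store"] if level == "store" else store_dim[row["store"]][level]
        totals[key] += row["sales"]
    return dict(totals)

print(roll_up(facts, "city"))     # {'Lyon': 100.0, 'Paris': 250.0, 'Geneva': 80.0}
print(roll_up(facts, "country"))  # {'France': 350.0, 'Switzerland': 80.0}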
Nevertheless, designers as well as users currently find it difficult to specify the multidimensional elements required for analysis. One reason is the lack of conceptual models for DW and OLAP system design that would allow data requirements to be expressed at an abstract level without considering implementation details. Another problem is that many kinds of complex hierarchies arising in real-world situations are not addressed by current DW and OLAP systems.
To help designers build conceptual models for decision-support systems and to help users better understand the data to be analyzed, in this thesis we propose the MultiDimER model, a conceptual model for representing multidimensional data in DW and OLAP applications. Our model is mainly based on existing ER constructs, for example entity types, attributes, and relationship types with their usual semantics, which make it possible to represent the common concepts of dimensions, hierarchies, and measures. It also includes a conceptual classification of the different kinds of hierarchies existing in real-world situations and proposes graphical notations for them.
On the other hand, users of DW and OLAP systems nowadays also demand the inclusion of spatial data; its advantage in the analysis process is widely recognized, since visualizing spatial data reveals patterns that are difficult to discover otherwise.
However, although DWs typically include a spatial or location dimension, this dimension is usually represented in an alphanumeric format. Furthermore, there is still no systematic study that analyzes the inclusion and management of hierarchies and measures represented using spatial data.
With the aim of satisfying the growing requirements of decision-making users, we extend the MultiDimER model by allowing spatial data to be included in the different elements composing the multidimensional model. The novelty of our contribution lies in the fact that a multidimensional model is seldom used for representing spatial data. To succeed with our proposal, we applied research achievements in the field of spatial databases to the specific features of a multidimensional model. The spatial extension of a multidimensional model raises several issues, addressed in this thesis, such as the influence of the different topological relationships between the spatial objects forming a hierarchy on the procedures required for measure aggregation, the aggregation of spatial measures, and the inclusion of spatial measures without the presence of spatial dimensions, among others.
Moreover, one of the important characteristics of multidimensional models is the presence of a time dimension for keeping track of changes in measures. However, this dimension cannot be used to model changes in other dimensions.
Therefore, usual multidimensional models are not symmetric in the way they represent changes for measures and dimensions. Further, there is still a lack of analysis indicating which concepts already developed for providing temporal support in conventional databases can be applied to, and are useful for, the different elements composing a multidimensional model.
In order to handle temporal changes to all elements of a multidimensional model in a uniform manner, we introduce a temporal extension of the MultiDimER model. This extension is based on research in the area of temporal databases, which have been successfully used for modeling time-varying information for several decades. We propose the inclusion of different temporal types, such as valid time and transaction time, which are obtained from source systems, in addition to the loading time generated in the DW. We use this temporal support for a conceptual representation of time-varying dimensions, hierarchies, and measures. We also refer to specific constraints that should be imposed on time-varying hierarchies and to the problem of handling multiple time granularities between source systems and DWs.
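As a small illustration of the valid-time idea mentioned above (standard temporal-database practice rather than the MultiDimER notation itself; the product and dates are invented), a time-varying dimension member can be stored as dated versions and resolved against the date of each fact:

from dataclasses import dataclass
from datetime import date

@dataclass
class CategoryVersion:
    # One valid-time version of a product's category assignment.
    product: str
    category: str
    valid_from: date
    valid_to: date  # exclusive upper bound

versions = [
    CategoryVersion("P1", "Snacks", date(2020, 1, 1), date(2022, 1, 1)),
    CategoryVersion("P1", "Health food", date(2022, 1, 1), date(9999, 1, 1)),
]

def category_at(product: str, when: date) -> str:
    # Pick the version whose validity interval contains the fact's date.
    for v in versions:
        if v.product == product and v.valid_from <= when < v.valid_to:
            return v.category
    raise LookupError(f"no valid version of {product} at {when}")

print(category_at("P1", date(2021, 6, 1)))  # Snacks
print(category_at("P1", date(2023, 6, 1)))  # Health food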
Furthermore, the design of DWs is not an easy task. It requires considering all phases, from requirements specification to final implementation, including the ETL process. It should also take into account that the inclusion of different data items in a DW depends on both users' needs and data availability in source systems. However, designers currently must rely on their experience, due to the lack of a methodological framework that considers the above-mentioned aspects.
In order to assist developers during the DW design process, we propose a methodology for the design of conventional, spatial, and temporal DWs. We refer to different phases, such as requirements specification, conceptual, logical, and physical modeling. We include three different methods for requirements specification depending on whether users, operational data sources, or both are the driving force in the process of requirement gathering. We show how each method leads to the creation of a conceptual multidimensional model. We also present logical and physical design phases that refer to DW structures and the ETL process.
To ensure the correctness of the proposed conceptual models, i.e., those with conventional, spatial, and time-varying data, we formally define them, providing their syntax and semantics. With the aim of assessing the usability of our conceptual model, including its representation of different kinds of hierarchies as well as spatial and temporal support, we present real-world examples. Pursuing the goal that the proposed conceptual solutions can be implemented, we include their logical representations using relational and object-relational databases.
Doctorat en sciences appliquées (Doctorate in Applied Sciences)
Ahmed, Usman. "Dynamic cubing for hierarchical multidimensional data space." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00876624.
Khandelwal, Nileshkumar. "An aggregate navigator for data warehouse." Ohio : Ohio University, 2000. http://www.ohiolink.edu/etd/view.cgi?ohiou1172255887.
Full textSilva, Manoela Camila Barbosa da. "Faça no seu ritmo mas não perca a hora: tomada de decisão sob demandado usuário utilizando dados da Web." Universidade Federal de São Carlos, 2017. https://repositorio.ufscar.br/handle/ufscar/9154.
No funding was received.
In the current knowledge age, with the continuous growth of Web data volumes and with business decisions that must be made quickly, traditional BI mechanisms become increasingly imprecise in supporting the decision-making process. In response to this scenario arises the concept of BI 2.0, a recent concept based mainly on the evolution of the Web, one of whose main characteristics is the use of Web sources in decision-making. However, data from the Web tend to be too volatile to be stored in the DW, which makes them a good option for situational data. Situational data are useful for decision-making queries at a particular time and in a particular situation, and can be discarded after the analysis. Much research has been carried out on BI 2.0, but there are still many points to be explored. This work proposes a generic architecture for Decision Support Systems that aims to integrate situational data from the Web into user queries at the right time, that is, when the user needs them for decision making. Its main contribution is the proposal of a new OLAP operator, called Drill-Conformed, which enables data integration in an automatic way using only the domain of values of the situational data. In addition, the operator contributes to the Semantic Web by making its semantics-related discoveries available. The case study is a streaming provision system. The results of the experiments are presented and discussed, showing that it is possible to perform the data integration satisfactorily and with good processing times for the applied scenario.
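A generic sketch of the integration pattern described in this abstract (not the Drill-Conformed operator itself, which is defined in the dissertation; the artists and figures are invented): transient data fetched from the Web at query time is matched to a cube result using only a shared domain of dimension values, and can be discarded after the analysis.

# Warehouse-side aggregate, keyed by a dimension value (toy data).
cube_result = {"Artist A": 12000, "Artist B": 8500}  # e.g. streams per artist

# Situational data pulled from the Web at query time (invented values);
# it is matched only on the shared domain of dimension values.
web_situational = {"Artist A": {"trending_rank": 3}, "Artist C": {"trending_rank": 1}}

def integrate(cube_result, situational):
    """Attach situational attributes to cube rows that share a dimension value."""
    enriched = {}
    for value, measure in cube_result.items():
        extra = situational.get(value, {})  # rows without a match keep only the measure
        enriched[value] = {"measure": measure, **extra}
    return enriched

print(integrate(cube_result, web_situational))
# {'Artist A': {'measure': 12000, 'trending_rank': 3}, 'Artist B': {'measure': 8500}}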
Bouadi, Tassadit. "Analyse multidimensionnelle interactive de résultats de simulation : aide à la décision dans le domaine de l'agroécologie." [Interactive multidimensional analysis of simulation results: decision support in the field of agroecology]. Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00933375.
Kroeze, Jan Hendrik. "Developing an XML-based, exploitable linguistic database of the Hebrew text of Gen. 1:1-2:3." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-07282008-121520/.
Janoška, Daniel. "Moderní technologie v OLAP." [Modern technologies in OLAP]. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235902.
Žižka, Petr. "Implementace nástroje pro analýzu přístupů k webové prezentaci založeného na technologii OLAP." [Implementation of an OLAP-based tool for analyzing accesses to a web presentation]. Master's thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-1323.
Buyankhishig, Agiimaa. "Využití moderní self-service BI technologie v praxi." [Using modern self-service BI technology in practice]. Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-197265.
Borchert, Christoph [Verfasser], Olaf [Akademischer Betreuer] Spinczyk, and Wolfgang [Gutachter] Schröder-Preikschat. "Aspect-oriented technology for dependable operating systems / Christoph Borchert ; Gutachter: Wolfgang Schröder-Preikschat ; Betreuer: Olaf Spinczyk." Dortmund : Universitätsbibliothek Dortmund, 2017. http://d-nb.info/1133361919/34.
Plante, Mathieu. "Vers des cubes matriciels supportant l'analyse spatiale à la volée dans un contexte décisionnel." [Towards matrix cubes supporting on-the-fly spatial analysis in a decision-support context]. Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30586/30586.pdf.
Hänßler, Olaf C. [Verfasser], Didier [Akademischer Betreuer] Théron, and Sergej [Akademischer Betreuer] Fatikow. "Multimodal sensing and imaging technology by integrated scanning electron, force, and nearfield microwave microscopy and its application to submicrometer studies / Olaf C. Hänßler ; Didier Théron, Sergej Fatikow." Oldenburg : BIS der Universität Oldenburg, 2018. http://d-nb.info/1157010199/34.
Mabed, Metwaly, and Thomas Köhler. "The Impact of Learning Management System Usage on Cognitive and Affective Performance." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-101320.
Chun, Chang Yi, and 張尹駿. "Demand Planning Hierarchy Software System Based on OLAP Technology." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/14780328150612707306.
Full text國立臺灣大學
工業工程學研究所 (Graduate Institute of Industrial Engineering)
90
Demand planning has to deal with a large amount of complicated information, and it is difficult for planners to make decisions efficiently from these data. To satisfy the requirement of quick response, relational databases and spreadsheets are no longer sufficient. Therefore, this thesis applies a newer information technology, On-Line Analytical Processing (OLAP), and develops a software system to help users determine a Demand Planning Hierarchy quickly. The concept of the Demand Planning Hierarchy (DPH) was first developed by Chen [3] to improve demand planning efficiency. OLAP and a multidimensional database are used to design and implement a DPH software system. The system provides not only a greedy method but also a dynamic programming approach for searching for the DPH. In addition, it supports planners in choosing a specific product combination, in which case the system uses a middle-out approach to obtain the DPH. Finally, a demand data set from a semiconductor manufacturing company is used to test the developed software system.
"Materializing views in data warehouse: an efficient approach to OLAP." 2003. http://library.cuhk.edu.hk/record=b5891626.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2003.
Includes bibliographical references (leaves 83-87).
Abstracts in English and Chinese.
Contents:
Acknowledgement
Chapter 1: Introduction
1.1 Data Warehouse and OLAP
1.2 Computational Model: Dependent Lattice
1.3 Materialized View Selection
1.3.1 Materialized View Selection under a Disk-Space Constraint
1.3.2 Materialized View Selection under a Maintenance-Time Constraint
1.4 Main Contributions
Chapter 2: A* Search: View Selection under a Disk-Space Constraint
2.1 The Weakness of Greedy Algorithms
2.2 A*-algorithm
2.2.1 An Estimation Function
2.2.2 Pruning Feasible Subtrees
2.2.3 Approaching the Optimal Solution from Two Directions
2.2.4 NIBS Order: Accelerating Convergence
2.2.5 Sliding Techniques: Eliminating Redundant H-Computation
2.2.6 Examples
2.3 Experiment Results
2.3.1 Analysis of Experiment Results
2.3.2 Computing for a Series of S Constraints
2.4 Conclusions
Chapter 3: Randomized Search: View Selection under a Maintenance-Time Constraint
3.1 Non-monotonic Property
3.2 A Stochastic-Ranking-Based Evolutionary Algorithm
3.2.1 A Basic Evolutionary Algorithm
3.2.2 The Weakness of the rg-Method
3.2.3 Stochastic Ranking: a Novel Constraint Handling Technique
3.2.4 View Selection Using the Stochastic-Ranking-Based Evolutionary Algorithm
3.3 Conclusions
Chapter 4: Conclusions
4.1 Thesis Review
4.2 Future Work
Chapter A: My Publications for This Thesis
Bibliography
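The outline above revolves around materialized view selection under resource constraints. For context, the following minimal Python sketch (toy sizes and benefits, not from the thesis) shows the plain greedy baseline whose weakness Chapter 2 discusses, picking views by benefit per unit of disk space until the budget is exhausted; it is not the A* or stochastic-ranking algorithms the thesis proposes.

# Candidate views: (name, size in MB, estimated query-cost saving) -- invented numbers.
candidates = [
    ("by_city_month", 40, 900),
    ("by_country_year", 5, 300),
    ("by_store_day", 120, 1500),
    ("by_product", 30, 450),
]

def greedy_select(candidates, disk_budget_mb):
    """Pick views by benefit per MB until the disk-space budget is used up.
    (In the real problem the benefit of a view changes as others are chosen,
    which is one reason plain greedy can be far from optimal.)"""
    chosen, used = [], 0
    for name, size, benefit in sorted(candidates, key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= disk_budget_mb:
            chosen.append(name)
            used += size
    return chosen, used

print(greedy_select(candidates, disk_budget_mb=100))
# (['by_country_year', 'by_city_month', 'by_product'], 75)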
Li, Yu. "Integrating XML data for OLAP using XML schema and UML /." 2005.
Typescript. Includes bibliographical references (leaves 113-117). Also available on the Internet. Mode of access: via web browser by entering the following URL: http://gateway.proquest.com/openurl?url%5Fver=Z39.88-2004&res%5Fdat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR11840
Mansmann, Svetlana [Verfasser]. "Extending the OLAP technology to handle non-conventional and complex data / vorgelegt von Svetlana Mansmann." 2009. http://d-nb.info/993406726/34.
Wu, Shih-Jong, and 武士戎. "The Application of OLAP and Association Rule Technology on An Agent-Based E-Commerce Infrastructure." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/44575802190546071732.
Full text淡江大學
資訊工程學系 (Department of Computer Science and Information Engineering)
89
With the rapid development of computer and Internet technology after the Information Revolution, e-commerce has become more and more popular, and in recent years the demand for e-commerce has grown steadily. Designing and developing a platform for e-commerce is therefore very important. In this virtual market there are large amounts of data, such as transaction data and customer data. It is a challenge to search and collect data in this market comprehensively and to apply suitable data mining techniques to extract summary information and rules from the very large volume of historical data on such an e-commerce platform, so that knowledge and further commercial information can be derived. We therefore use mobile agent techniques to construct an e-commerce infrastructure. In this infrastructure we apply data warehouse, data mining, and OLAP techniques to derive and extract knowledge and rule information, integrating all useful information so that it can be deployed effectively in this market, and mining hidden patterns and commercially valuable information from the data warehouse on this platform. In order to make marketing decisions rapidly and precisely, we apply data mining and OLAP techniques appropriately and construct a complete e-commerce infrastructure. The major contribution of this work is the use of mobile agents to construct an e-commerce infrastructure. On this platform we collect data comprehensively to build a data warehouse, and by applying OLAP and data mining techniques we can extract data and integrate and manage information. We then combine ERP (Enterprise Resource Planning) and CRM (Customer Relationship Management) with e-commerce marketing to complete our system.
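As a generic illustration of the association-rule idea named in the title (not the platform built in the thesis; the transactions are invented), frequent item pairs can be counted from transaction data and turned into simple rules with support and confidence:

from itertools import combinations
from collections import Counter

# Toy transactions from an online store (invented).
transactions = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse"},
    {"phone", "charger"},
]

item_counts = Counter()
pair_counts = Counter()
for basket in transactions:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

n = len(transactions)
for (a, b), cnt in pair_counts.items():
    support = cnt / n
    confidence = cnt / item_counts[a]  # confidence of the rule a -> b
    if support >= 0.5:                 # minimum-support threshold (arbitrary)
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
# case -> phone: support=0.50, confidence=1.00
# charger -> phone: support=0.50, confidence=1.00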
Tsai, Long-Tsay, and 蔡隆財. "Using OLAP technology and satisfaction survey to analyze the management strategies of the cable TV system - take Pingtung area as an example." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/99999440603370370568.
Full text國立屏東科技大學
高階經營管理碩士在職專班 (Executive Master of Business Administration Program)
94
Cable TV is regarded as a public utility: it affects people's livelihood in the same way as industries such as gas, water, and electricity, and it must therefore be supervised by the responsible authority. Moreover, cable TV also has a local, community-media character, so fee standards should be suited to local conditions. On-line analytical processing (OLAP) is a useful tool with user-friendly interface functions; it allows users to pick and fetch multidimensional information quickly and to obtain an overview for analyzing the data as information. In this thesis, based on the real state of cable TV management, we applied on-line analytical processing technology and used SQL Server 2000 Analysis Services to support OLAP functions for data transformation and processing. The satisfaction rating of cable TV is mainly determined by the relationship between expectation and performance with respect to the system operator's signal service, customer service quality, fees, and so on. We used five statistical methods (descriptives, crosstabs, the chi-square test, the t test, and logistic regression) to explain people's satisfaction ratings for two cable TV systems and to complete the analysis of the satisfaction survey. The thesis uses the management and administration data of the cable television system operators to analyze the real state of their management with OLAP technology. Subsequently, for the results of the telephone-questionnaire satisfaction survey, we combined the statistical methods to analyze the audiences' satisfaction ratings and household behavior in depth, in order to sketch the relationship between the system operators' operation and administration and the audiences' satisfaction. The quality of a cable TV system operator affects the audiences' rights. We hope that the integrated conclusions on the interaction between the operators' management strategies in the Pingtung area and audiences' rights, analyzed using OLAP technology and the satisfaction survey, can provide the operators with information for their management strategies and give the government suggestions for safeguarding audiences' rights, thereby creating an advantageous situation for the operators' management strategies, the government's administrative performance, and audiences' rights.
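For the crosstab and chi-square part of the analysis described above, a minimal sketch with invented counts (not the survey data) shows how independence between operator and satisfaction would be tested:

# 2x2 crosstab: rows = operator A / operator B, columns = satisfied / not satisfied (invented counts).
observed = [[180, 70], [150, 100]]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / grand  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp

print(f"chi2 = {chi2:.2f}")  # compare with 3.84, the 5% critical value for 1 degree of freedom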
Sanghi, Anupam. "HYDRA: A Dynamic Approach to Database Regeneration." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5959.
Rajkumar, S. "Enhancing Coverage and Robustness of Database Generators." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5528.
Banda, Misheck. "A data management and analytic model for business intelligence applications." Diss., 2017. http://hdl.handle.net/10500/23129.
Full textComputing
M. Sc. (Computing)
"Short, Medium and Long Term Effects of an Online Learning Activity Based (OLAB) Curriculum on Middle School Students’ Achievement in Mathematics: A Quasi-Experimental Quantitative Study." Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.40291.
Dissertation/Thesis
Doctoral Dissertation Curriculum and Instruction 2016