To see the other types of publications on this topic, follow the link: Operatic database.

Dissertations / Theses on the topic 'Operatic database'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Operatic database.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Gong, Guohui. "On concurrency control in log-based databases." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/8175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gonzaga, André dos Santos. "The Similarity-aware Relational Division Database Operator." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17112017-135006/.

Full text
Abstract:
In Relational Algebra, the Division operator (÷) is an intuitive tool for writing queries with the concept of "for all", and thus it is constantly required in real applications. However, as we demonstrate in this MSc work, the division does not support many of the needs common to modern applications, particularly those that involve complex data analysis, such as processing images, audio, genetic data, large graphs, fingerprints, and many other non-traditional data types. The main issue is the existence of intrinsic comparisons of attribute values in the operator, which, by definition, are always performed by identity (=), despite the fact that complex data must be compared by similarity. Recent works focus on supporting similarity comparison in relational operators, but none treats the division. This MSc work proposes the new Similarity-aware Division (÷) operator. Our novel operator is naturally well suited to answering queries with an idea of candidate elements and exigencies over complex data from high-impact real applications. For example, it is potentially useful to support agriculture, genetic analyses, digital-library search, and even to help control the quality of manufactured products and identify new clients in industry. We validate our proposal by studying the first two of these applications.
The Division operator (÷) of Relational Algebra makes it possible to express, in a simple way, queries with the concept of "for all", and it is therefore required in several real applications. However, this MSc work shows that the division does not meet the needs of many current applications, especially when they analyze complex data, such as images, audio, long texts, fingerprints, and others. Analyzing the problem, the main limitation is the existence of attribute-value comparisons intrinsic to the Relational Division, which, by definition, are always performed by identity (=), whereas complex objects should generally be compared by similarity. Today, the literature includes proposals for relational operators with support for similarity of complex objects, but none of them addresses the Relational Division. This MSc work proposes to investigate and extend the Division operator of Relational Algebra to better suit the demands of current applications, by supporting attribute-value comparisons by similarity. It is shown here that the Similarity-aware Division is naturally suited to answering a variety of queries with a concept of candidate elements and requirements, described in the monograph, involving complex data from high-impact real applications, with the potential, for example, to support agriculture, genetic data analyses, searches in digital libraries, and even to control the quality of manufactured products and to identify new clients in industry. To validate the proposal, we study the first two of the cited applications.
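Classic relational division, and the similarity-aware variant the dissertation argues for, can be sketched in a few lines. The data, names, and similarity predicate below are illustrative only, not taken from the thesis:

```python
# Classic relational division: which students are enrolled in ALL required
# courses? (Data and names are illustrative.)

def divide(dividend, divisor):
    """dividend: set of (a, b) pairs; divisor: set of b values.
    Returns the a values that are paired with every b in divisor."""
    candidates = {a for (a, _) in dividend}
    return {a for a in candidates
            if all((a, b) in dividend for b in divisor)}

enrolled = {("ana", "db"), ("ana", "os"), ("bob", "db")}
required = {"db", "os"}
assert divide(enrolled, required) == {"ana"}

def divide_sim(dividend, divisor, sim):
    """Similarity-aware variant: attribute values match by a similarity
    predicate instead of identity (=)."""
    candidates = {a for (a, _) in dividend}
    return {a for a in candidates
            if all(any(x == a and sim(y, b) for (x, y) in dividend)
                   for b in divisor)}

# e.g. tolerate a small numeric difference between measured values
close = lambda y, b: abs(y - b) <= 1
measured = {("plot1", 10), ("plot1", 20), ("plot2", 10)}
targets = {9, 21}
assert divide_sim(measured, targets, close) == {"plot1"}
```

Replacing the identity test with `sim` is exactly the change the abstract motivates: complex attribute values rarely match exactly, but they can match approximately.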
APA, Harvard, Vancouver, ISO, and other styles
3

Chang, Sidney H. (Sidney Hsiao-Ning) 1978. "Adapting an object-oriented database for disconnected operation." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86646.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tomé, Diego Gomes. "A near-data select scan operator for database systems." reponame:Repositório Institucional da UFPR, 2017. http://hdl.handle.net/1884/53293.

Full text
Abstract:
Advisor: Eduardo Cunha de Almeida
Co-advisor: Marco Antonio Zanata Alves
Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defense: Curitiba, 21/12/2017
Includes references: p. 61-64
Abstract: One of the major bottlenecks in read-mostly database systems is moving data around the memory hierarchy to be processed in the CPU. Data movement is penalized by the performance gap between processor and memory, the well-known problem called the memory wall. The emergence of smart memories, such as the new Hybrid Memory Cube (HMC), makes it possible to mitigate the memory-wall problem by executing instructions in logic chips integrated with a stack of DRAMs. Besides storing databases, these memories have the potential to compute database operations directly in memory. The goal of this dissertation is precisely the execution of the algebraic select operator directly in memory, to reduce data movement through the memory and cache hierarchy. The focus on the select operation reflects the fact that scanning the columns to be filtered moves large amounts of data before other operations such as joins (i.e., push-down optimization). Initially, the execution of the select operation was evaluated using the HMC as an ordinary DRAM. Then, extensions to the HMC architecture and instruction set, called HMC-Scan, are presented to execute the select operation near the data in the HMC's logic chip. In particular, the HMC-Scan extension aims to resolve instruction dependencies internally. However, we observed that HMC-Scan requires a lot of interaction between the CPU and the memory to evaluate query filters. Therefore, as a second contribution, we present the HIPE-Scan architectural extension to reduce this interaction through predication. Predication supports evaluating predicates directly in memory without CPU decisions and transforms control dependencies into data dependencies (i.e., predicated execution).
We implemented the near-data select operation in the row-, column-, and vector-wise query execution strategies for the x86 architecture and for the two extensions HMC-Scan and HIPE-Scan. Our simulations show a performance improvement of up to 3.7× for HMC-Scan and 5.6× for HIPE-Scan when running Query 6 of the 1 GB TPC-H benchmark in the column-wise execution strategy. Keywords: In-Memory DBMS, Hybrid Memory Cube, Processing-in-Memory.
Abstract: A large share of the burden of processing read-mostly databases consists of moving data around the memory hierarchy rather than processing it in the processor. The data movement is penalized by the performance gap between the processor and the memory, the well-known problem called the memory wall. The emergence of smart memories, such as the new Hybrid Memory Cube (HMC), allows mitigating the memory-wall problem by executing instructions in logic chips integrated with a stack of DRAMs. These memories can enable not only in-memory databases but also in-memory computation of database operations. In this dissertation, we focus on near-data query processing to reduce data movement through the memory and cache hierarchy. We focus on the select scan database operator, because scanning columns moves large amounts of data prior to other operations like joins (i.e., push-down optimization). Initially, we evaluate the execution of the select scan using the HMC as an ordinary DRAM. Then, we introduce extensions to the HMC Instruction Set Architecture (ISA), called HMC-Scan, to execute our near-data select scan operator inside the HMC. In particular, we extend the HMC ISA with HMC-Scan to internally solve instruction dependencies. To support branch-less evaluation of the select scan and transform control-flow dependencies into data-flow dependencies (i.e., predicated execution), we propose another HMC ISA extension called HIPE-Scan. HIPE-Scan leads to less interaction between the processor and the HMC during the execution of query filters that depend on in-memory data. We implemented the near-data select scan in the row-, column-, and vector-wise query engines for x86 and the two HMC extensions, HMC-Scan and HIPE-Scan, achieving performance improvements of up to 3.7× for HMC-Scan and 5.6× for HIPE-Scan when executing Query 6 from the 1 GB TPC-H database in the column-wise engine. Keywords: In-Memory DBMS, Hybrid Memory Cube, Processing-in-Memory.
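The predication idea behind HIPE-Scan, turning a control-flow dependency into a data-flow dependency, can be illustrated with a toy scan in Python. This is a sketch of the general technique, not the HMC implementation:

```python
# A select scan "v < threshold" written twice: with a branch (control
# dependency) and predicated (the comparison becomes a 0/1 value, a data
# dependency); the final compaction step is shared.

def select_scan_branchy(column, threshold):
    out = []
    for v in column:
        if v < threshold:          # branch taken per element
            out.append(v)
    return out

def select_scan_predicated(column, threshold):
    flags = [int(v < threshold) for v in column]   # branch-free predicate bits
    return [v for v, keep in zip(column, flags) if keep]

col = [24, 3, 17, 42, 8]
assert select_scan_branchy(col, 18) == select_scan_predicated(col, 18) == [3, 17, 8]
```

On real hardware the payoff of the predicated form is that no branch predictor is involved while the predicate bits are computed; here the two versions only demonstrate that the transformation preserves the result.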
APA, Harvard, Vancouver, ISO, and other styles
5

McLean, Angus L. M. Thom III. "Real-time distributed simulation analysis : an application of temporal database and simulation systems research." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/9124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Oosthoek, Peter B. "A DATABASE SYSTEM CONCEPT TO SUPPORT FLIGHT TEST - MEASUREMENT SYSTEM DESIGN AND OPERATION." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608879.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Information management is of essential importance during the design and operation of flight test measurement systems used for aircraft airworthiness certification. The reliability of the data generated by the real-time and post-processing processes depends heavily on the reliability of all provided information about the flight test measurement system in use. Databases are well suited to the task of information management. However, they need additional application software to store, manage, and retrieve the measurement system configuration data in a specified way, supporting all persons and all aircraft- and ground-based systems involved in the design and operation of flight test measurement systems. At the Dutch National Aerospace Laboratory (NLR), a "Measurementsystem Configuration DataBase" (MCDB) is being developed under contract with the Netherlands Agency for Aerospace Programs (NIVR) and in cooperation with Fokker to provide the required information management. This paper addresses the functional and operational requirements of the MCDB, its data contents and computer configuration, and describes its intended mode of operation.
APA, Harvard, Vancouver, ISO, and other styles
7

Procházka, Jiří. "Databázový systém pro výrobu desek plošných spojů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218033.

Full text
Abstract:
Analysis of problems in designing a web database and its structure to meet the requirements of a PCB manufacturer. Study of applications intended for the creation, administration, and protection of a database system. Systems for operating a web server. Principles of designing tables in databases. Design of the database structure for a PCB production system.
APA, Harvard, Vancouver, ISO, and other styles
8

Motara, Yusuf Moosa. "File integrity checking." Thesis, Rhodes University, 2006. http://hdl.handle.net/10962/d1007701.

Full text
Abstract:
This thesis looks at file execution as an attack vector that leads to the execution of unauthorized code. File integrity checking is examined as a means of removing this attack vector, and the design, implementation, and evaluation of a best-of-breed file integrity checker for the Linux operating system is undertaken. We conclude that the resultant file integrity checker does succeed in removing file execution as an attack vector, does so at a computational cost that is negligible, and displays innovative and useful features that are not currently found in any other Linux file integrity checker.
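The core loop of a hash-based file integrity checker is straightforward; this minimal Python sketch (not the thesis's Linux implementation) records a SHA-256 baseline and reports any file whose current digest deviates from it:

```python
# Record a SHA-256 baseline for files, then report any file whose current
# digest no longer matches it.
import hashlib
import os
import tempfile

def digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(baseline):
    """Return the paths whose current hash differs from the stored one."""
    return {p for p, d in baseline.items() if digest(p) != d}

# demo on a throwaway file standing in for an executable
fd, path = tempfile.mkstemp()
os.write(fd, b"#!/bin/sh\necho ok\n")
os.close(fd)
baseline = {path: digest(path)}
clean = verify(baseline)                  # nothing modified yet
with open(path, "ab") as f:
    f.write(b"echo tampered\n")           # simulate an unauthorized change
tampered = verify(baseline)
os.unlink(path)
assert clean == set() and tampered == {path}
```

A real checker of the kind the thesis evaluates would hook file execution rather than scan on demand, and would protect the baseline itself against tampering.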
APA, Harvard, Vancouver, ISO, and other styles
9

Alqahatni, Zakyah. "Hierarchical Alignment of Tuples in Databases for Fast Join Processing." OpenSIUC, 2019. https://opensiuc.lib.siu.edu/theses/2604.

Full text
Abstract:
In relational databases, data is distributed across interconnected relations. Tuples in distinct relations are related by matching the values of a join attribute, a process called the equi-join operation. Unlike standard attempts to design efficient join algorithms, in this thesis an approach is proposed to align tuples in relations so that joins can be performed readily and effectively. We position tuples in their respective relations, a process called relation alignment, so that matching join-attribute values occupy corresponding positions. We also address how to align relations and how to perform joins on aligned relations. Experiments were conducted in this research to measure and analyze the efficiency of the proposed approach compared to standard MySQL joins.
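The alignment idea can be sketched by bucketing each relation on its join attribute so that matching tuples end up co-located and the equi-join becomes a walk over shared buckets. This Python sketch is illustrative and does not reproduce the thesis's alignment scheme:

```python
# Bucket both relations on the join attribute, then join bucket-by-bucket.
from collections import defaultdict

def align(relation, key_idx):
    buckets = defaultdict(list)
    for t in relation:
        buckets[t[key_idx]].append(t)
    return buckets

def aligned_join(r, r_idx, s, s_idx):
    br, bs = align(r, r_idx), align(s, s_idx)
    return [t + u                      # concatenate matching tuples
            for k in br.keys() & bs.keys()
            for t in br[k] for u in bs[k]]

R = [("a1", 1), ("a2", 2)]
S = [(1, "x"), (3, "z")]
assert aligned_join(R, 1, S, 0) == [("a1", 1, 1, "x")]
```

Once the buckets are materialized, the per-key matching work is already done, which is the appeal of doing the alignment ahead of query time.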
APA, Harvard, Vancouver, ISO, and other styles
10

Poulon, Fanny. "Tissue database of autofluorescence response to improve intra-operative diagnosis of primitive brain tumors." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS236/document.

Full text
Abstract:
The first standard treatment for brain tumors is surgical resection. An important challenge remains in this procedure: identifying the tumor margins to ensure a complete resection and avoid the risk of recurrence for the patient. To date, no intra-operative imaging technique is able to resolve tumor infiltration of healthy tissue. The standard for diagnosing tumor margins is the histological analysis of biopsies, an ex vivo method that requires one to several days to deliver the final pathology report, a delay that can prove fatal for the patient. Optical microscopy has recently been developed towards intra-operative clinical use to address this challenge. In this work, the technique of two-photon microscopy was preferred to tackle this problem. This method gives access to two imaging contrasts, second-harmonic generation and fluorescence emission, which can be combined with quantitative measurements such as spectroscopy and fluorescence lifetime. Combining these four detection modalities provides complete information on the structure and metabolism of the observed region. To support the technical development towards an endomicroscopic probe intended for intra-operative use, the resulting data must be reliable and of interest to the surgeon. Consequently, a database of the autofluorescence signal of tissues was built and is presented in this manuscript, along with algorithms able to reliably discriminate tumor regions from healthy regions, algorithms that have shown the potential to be automated in a clinical setting in order to provide a real-time answer to the surgeon.
The first standard approach to brain tumor treatment is surgical resection. In this protocol an important challenge remains: the identification of tumor margins to ensure a complete resection and avoid the risk of tumor recurrence. Nowadays, no intra-operative means of contrast is able to resolve infiltrated regions from healthy tissue. The standard for tumor-margin diagnosis is the histological analysis of biopsies, an ex vivo method that requires one to several days to issue a final pathological report, a time lapse that could be fatal to the patient. Optical microscopy has recently been developed towards intra-operative clinical use to answer this challenge. In this work, the technique of two-photon microscopy based on the autofluorescence of tissue has been favored. This technique gives access to two imaging contrasts, second-harmonic generation and emission of fluorescence, and can be combined with quantitative measurements, such as spectroscopy and fluorescence lifetime. The combination of these four modalities of detection gives complete structural and metabolic information on the observed region. To support the technical development towards an endomicroscopic probe, the resulting data have to be reliable and proven to be of interest to the surgeon. Consequently, an extensive database of the autofluorescence response of brain tumor tissue has been constructed and is presented in this manuscript, with algorithms able to reliably discriminate tumoral from healthy regions, algorithms that have shown potential to be automated in a clinical setting in order to give a real-time answer to the surgeons.
APA, Harvard, Vancouver, ISO, and other styles
11

Bolha, Rosemarie. "Design and development of the missile system Operation and Support Cost AnalyzeR model and database." Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-01202010-020201/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Karlsson, Jan, and Patrik Eriksson. "How the choice of Operating System can affect databases on a Virtual Machine." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4848.

Full text
Abstract:
As databases grow in size, optimizing them becomes a necessity. Choosing the right operating system to support your database is paramount to ensure that the database is fully utilized. Furthermore, with the virtualization of operating systems becoming more commonplace, we face more choices than ever before. This paper demonstrates why the choice of operating system plays an integral part in selecting the right database for your system in a virtual environment. It contains an experiment that measured the benchmark performance of a database management system on various virtual operating systems, showing the effect a virtual operating system has on the database management system that runs upon it. These findings will help promote future research into this area and provide a foundation on which such research can be based.
APA, Harvard, Vancouver, ISO, and other styles
13

Mukherjee, Bodhisattwa. "Reconfigurable multiprocessor operating system kernel for high performance computing." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/9120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sjö, Kristoffer. "Semantics and Implementation of Knowledge Operators in Approximate Databases." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2438.

Full text
Abstract:

In order for epistemic formulas to be coupled with approximate databases, it is necessary to have a well-defined semantics for the knowledge operator and a method of reducing epistemic formulas to approximate formulas. In this thesis, two possible definitions of a semantics for the knowledge operator are proposed for use together with an approximate relational database:

* One based upon logical entailment (the dominant notion of knowledge in the literature); sound and complete rules for reduction to approximate formulas are explored and found not to be applicable to all formulas.

* One based upon algorithmic computability (in order to be practically feasible); its correspondence to the above operator on the one hand, and to the deductive capability of the agent on the other, is explored.

Also, an inductively defined semantics for a "know whether" operator is proposed and tested. Finally, an algorithm implementing the above is proposed, implemented in Java, and tested.
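To make the setting concrete, here is a toy Python model of an approximate relation as a lower/upper pair of tuple sets, with a knowledge check and a "know whether" check. The names and semantics are my own rough-set-flavored illustration, not the thesis's definitions:

```python
# An approximate relation as a pair (lower, upper) of tuple sets:
# lower = certainly in, upper = possibly in; the gap is the unknown boundary.

class ApproxRel:
    def __init__(self, lower, upper):
        assert lower <= upper, "lower approximation must be inside upper"
        self.lower, self.upper = lower, upper

    def knows_in(self, t):
        """K-style check: t is certainly in the relation."""
        return t in self.lower

    def knows_whether(self, t):
        """Known in, or known out; False only on the boundary."""
        return t in self.lower or t not in self.upper

bird = ApproxRel(lower={("tweety",)}, upper={("tweety",), ("pingu",)})
assert bird.knows_in(("tweety",))
assert not bird.knows_whether(("pingu",))   # boundary tuple: agent can't tell
assert bird.knows_whether(("rex",))         # outside upper: known not in
```

The interesting cases in the thesis arise when such checks appear inside formulas and must be reduced to ordinary approximate queries; the toy above only fixes the intuition for the operators.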

APA, Harvard, Vancouver, ISO, and other styles
15

Fri, Martin, and Jon Börjesson. "Usage of databases in ARINC 653-compatible real-time systems." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57473.

Full text
Abstract:

The Integrated Modular Avionics architecture, IMA, provides means for running multiple safety-critical applications on the same hardware. ARINC 653 is a specification for this kind of architecture: a specification for space and time partitioning in safety-critical real-time operating systems that ensures each application's integrity. This Master's thesis describes how databases can be implemented and used in an ARINC 653 system. The addressed issues are inter-partition communication, deadlocks, and database storage.
Two alternative embedded databases are integrated in an IMA system to be accessed by multiple clients from different partitions. Performance benchmarking was used to study the differences in terms of throughput, number of simultaneous clients, and scheduling. The databases implemented and benchmarked are SQLite and Raima. The studies indicated a clear speed advantage in favor of SQLite when Raima was integrated using the ODBC interface. Both databases perform quite well and seem to be good enough for use in embedded systems. However, since neither SQLite nor Raima has any real-time support, their use in safety-critical systems is limited. The testing was performed in a simulated environment, which makes the results somewhat unreliable. To validate the benchmark results, further studies must be performed, preferably in a real target environment.
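A throughput measurement of the kind described can be sketched with Python's built-in sqlite3 module; this in-memory micro-benchmark is only the shape of such a test and does not reproduce the thesis's ARINC 653 or Raima setup:

```python
# Time bulk inserts into an in-memory SQLite table and report rows/second.
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE msg (part INTEGER, payload TEXT)")

n = 10_000
t0 = time.perf_counter()
with con:  # one transaction: avoids paying a commit per insert
    con.executemany("INSERT INTO msg VALUES (?, ?)",
                    ((i % 4, "x" * 32) for i in range(n)))
elapsed = time.perf_counter() - t0
rows = con.execute("SELECT COUNT(*) FROM msg").fetchone()[0]
print(f"{rows} inserts in {elapsed:.3f}s ({rows / elapsed:,.0f} rows/s)")
```

Wrapping the inserts in a single transaction is the usual first knob in SQLite benchmarks; per-statement commits dominate otherwise.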

APA, Harvard, Vancouver, ISO, and other styles
16

Miller, Nathan D. "Adapting the Skyline Operator in the NetFPGA Platform." Youngstown State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1369586333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Hammond, Gregory Alan. "The instrumentation of a parallel, distributed database operation, retrieve-common, for merging two large sets of records." Thesis, Monterey, Calif. : Naval Postgraduate School, 1992. http://handle.dtic.mil/100.2/ADA247486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Nanongkai, Danupon. "Graph and geometric algorithms on distributed networks and databases." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41056.

Full text
Abstract:
In this thesis, we study the power and limits of algorithms on various models, aiming at applications in distributed networks and databases. In distributed networks, graph algorithms are fundamental to many applications. We focus on computing random walks, an important primitive employed in a wide range of applications but one that has always been computed naively. We show that a faster solution exists and subsequently develop faster algorithms by exploiting random-walk properties, leading to two immediate applications. We also show that this algorithm is optimal. Our technique for proving a lower bound shows the first non-trivial connection between communication complexity and lower bounds of distributed graph algorithms. We show that this technique has a wide range of applications by proving new lower bounds for many problems. Some of these lower bounds show that the existing algorithms are tight. In database searching, we think of the database as a large set of multi-dimensional points stored on a disk and want to help users quickly find the most desired point. In this thesis, we develop an algorithm that is significantly faster than previous algorithms both theoretically and experimentally. The insight is to solve the problem on the streaming model, which helps emphasize the benefits of sequential access over random disk access. We also introduced the randomization technique to the area. The results were complemented with a lower bound. We also initiate a new direction as an attempt to get a better query. We are the first to quantify output quality using "user satisfaction", which is made possible by borrowing the idea of modeling users by utility functions from game theory, and we justify our approach through a geometric analysis.
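The naive random-walk primitive that the thesis speeds up is simple to state; this plain single-machine sketch is illustrative only (the thesis's contribution is computing such walks faster in the distributed setting):

```python
# One naive random walk of a given length: repeatedly hop to a uniformly
# chosen neighbour of the current node.
import random

def random_walk(adj, start, length, rng):
    node, path = start, [start]
    for _ in range(length):
        node = rng.choice(adj[node])
        path.append(node)
    return path

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # a triangle graph
walk = random_walk(adj, 0, 5, random.Random(7))
assert len(walk) == 6 and walk[0] == 0
assert all(walk[i + 1] in adj[walk[i]] for i in range(5))
```

In a message-passing network this loop costs one round per step, which is exactly the linear-in-length behaviour the dissertation improves upon.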
APA, Harvard, Vancouver, ISO, and other styles
19

Strednansky, Susan E. "Balancing the Trinity: The Fine Art of Conflict Termination." Maxwell AFB, Ala. : Air University Research Coordinator Office, 1998. http://www.au.af.mil/au/database/research/ay1995/saas/strednse.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sukarevičienė, Gintarė. "Developing business model for geo-location database for the operation of cognitive radio in the TV white space bands." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120620_111925-58983.

Full text
Abstract:
The aim of this thesis is to analyze how technological, economic, political, and social factors can be integrated into a Business Model for a Geo-location Database acting as the controlling entity for the operation of Cognitive Radio devices in the TV White Space spectrum range. The tasks of the thesis are: to analyze the scientific literature on TVWS and identify TVWS management technologies, to find the factors influencing the Geo-location Database Business Model, to put forward Geo-location Database scenarios, to construct a classification of Business Models for the Geo-location Database, and to provide an experimental study of the feasibility of deploying the distinct Business Model classes for the distinct Geo-location Database scenarios. The qualitative methods chosen for the research are exploratory literature analysis, consultations with experts and specialists, and conceptual modelling based on scenarios. The exploratory part of the thesis describes the existing spectrum-shortage problem and presents potential technologies that can solve it. The theoretical part introduces the research methodology and the concept and principles of a Business Model for technology innovation. The analytical part seeks to identify potential Business Model configurations for the operation of the Geo-location Database in the TV White Space spectrum range, ending with an experimental study of the feasibility of the Geo-location Business Model. The final part of the thesis concludes... [to full text]
The aim of this work is to analyze how technological, economic, political, and social factors can be integrated into a business model for a TV white space geo-location database used by cognitive radio systems. The tasks set to achieve this aim: to analyze the scientific literature on TV white spaces and identify white-space management technologies, to determine the factors influencing the geo-location database business model, to draw up geo-location database business scenarios, to construct a classification of business models for the geo-location database, to determine the suitability of the constructed classification for each scenario, and to determine the optimal business model configuration. Qualitative methods were applied to accomplish these tasks: analysis of the scientific literature, consultations with experts and specialists, and conceptual modelling based on scenarios. The first part of the work describes the existing spectrum-shortage problems and reviews potential technologies that could solve them. The second part presents the research methods and examines the business model and its principles that can influence technology innovation. The third part seeks to identify and evaluate potential business models for the TV white space geo-location database. Conclusions are presented with regard to the usefulness, practicality, and existing limitations of the work. The main results: the optimal business model... [see full text]
APA, Harvard, Vancouver, ISO, and other styles
21

Tinnefeld, Christian [Verfasser], and Hasso [Akademischer Betreuer] Plattner. "Building a columnar database on shared main memory-based storage : database operator placement in a shared main memory-based storage system that supports data access and code execution / Christian Tinnefeld ; Betreuer: Hasso Plattner." Potsdam : Universität Potsdam, 2014. http://d-nb.info/1218398442/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Jbantova, Mariana G. "State spill policies for state intensive continuous query plan evaluation." Link to ETD, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-050207-222839/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Kaibo. "Algorithmic and Software System Support to Accelerate Data Processing in CPU-GPU Hybrid Computing Environments." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1447685368.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Pokorný, Josef. "Určení pozice útočníka při pokusu o neoprávněný přístup do operačního systému." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220320.

Full text
Abstract:
My master's thesis estimates the physical location of a potential operating-system attacker. It deals with basic methods of attack against an operating system: spam and viruses, searching the Internet, port scanning, and operating-system detection. The thesis discusses the port scanner Nmap, the port-scan detector Scanlogd, and the system log monitor Swatch. It covers methods for geolocating a potential operating-system attacker, divided into active and passive types: the active methods measure delay in the Internet, while the passive methods query databases. I cover the freely accessible Whois database and the MaxMind databases. A program was developed and tested in practice. The program simulates an attacker beginning an attack by scanning the ports of a target machine; it works with a dataset of real IP addresses and also detects the attack against the operating system. The real and estimated locations of the attacker are obtained and shown on a map. At the end, there is a review of the results and a comparison of the data with colleagues.
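A port-scan detector of the kind Scanlogd implements can be sketched as a sliding-window heuristic: flag any source that probes many distinct ports in a short interval. The thresholds and data below are invented for illustration:

```python
# Flag a source IP that touches >= min_ports distinct ports within `window`
# seconds; everything below is invented demo data.
from collections import defaultdict

def detect_scans(events, window=1.0, min_ports=5):
    """events: (timestamp, src_ip, dst_port) tuples, sorted by timestamp."""
    seen = defaultdict(list)
    flagged = set()
    for ts, ip, port in events:
        seen[ip].append((ts, port))
        recent_ports = {p for t, p in seen[ip] if ts - t <= window}
        if len(recent_ports) >= min_ports:
            flagged.add(ip)
    return flagged

events = [(0.1 * i, "10.0.0.9", 20 + i) for i in range(8)]  # rapid sweep
events.append((5.0, "10.0.0.7", 80))                        # ordinary client
assert detect_scans(sorted(events)) == {"10.0.0.9"}
```

Once a source is flagged, a passive geolocation step would look its address up in a database such as Whois or MaxMind, as the thesis describes.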
APA, Harvard, Vancouver, ISO, and other styles
25

Gande, Santhrushna. "Developing Java Programs on Android Mobile Phones Using Speech Recognition." CSUSB ScholarWorks, 2015. https://scholarworks.lib.csusb.edu/etd/232.

Full text
Abstract:
Nowadays, mobile phones and tablets based on the Android operating system are widely used and have millions of users around the world. The popularity of this operating system is due to its multitasking, ease of access, and diverse device options. "Java Programming Speech Recognition Application" is an Android application intended for individuals who are unable to type on a keyboard or have difficulty doing so. It allows the user to write a computer program (in the Java language) by dictating the words, without using a keyboard. The user speaks the commands and symbols required for his or her program. The application is designed to pick the Java keyword (such as 'boolean', 'break', 'if', or 'else') most similar to the word received by its speech recognizer. The application contains external plug-ins, such as a programming editor and a speech recognizer, to record and write the program. These plug-ins come in the form of libraries and pre-coded folders that the developer must attach to the main program.
APA, Harvard, Vancouver, ISO, and other styles
26

Zheng, Mai. "Towards Manifesting Reliability Issues In Modern Computer Systems." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436283400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Parent, Christine. "L'approche erc : un modele de donnees et une algebre de type entite relation." Paris 6, 1987. http://www.theses.fr/1987PA066568.

Full text
Abstract:
This thesis presents the formal definitions of a data model, of an associated algebra, and of the mathematical properties of its operators. The proposed model is as general as possible with respect to the possibilities offered by the three basic concepts of the entity-relationship approach (entity type, relationship type, and attribute), notably supporting recursively structured attributes and double values.
APA, Harvard, Vancouver, ISO, and other styles
28

Tamascelli, Nicola. "A Machine Learning Approach to Predict Chattering Alarms." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
The alarm system plays a vital role in granting safety and reliability in the process industry. Ideally, an alarm should inform the operator about critical conditions only; during alarm floods, the operator may be overwhelmed by several alarms in a short time span, and crucial alarms are more likely to be missed. Poor alarm management is one of the main causes of unintended plant shutdowns, incidents, and near misses in the chemical industry. Most of the alarms triggered during a flood episode are nuisance alarms, i.e., alarms that do not communicate new information to the operator or that do not require an operator action. Chattering alarms (alarms that repeat three or more times in a minute) and redundant alarms (duplicated alarms) are common forms of nuisance. Identifying nuisance alarms is a key step in improving the performance of the alarm system. Advanced techniques for alarm rationalization have been developed, proposing methods to quantify chattering, redundancy, and correlation between alarms. Although very effective, these techniques produce static results. Machine learning appears to be an interesting opportunity to retrieve further knowledge and support these techniques. This knowledge can be used to produce more flexible and dynamic models, as well as to predict alarm behaviour during floods. The aim of this study is to develop a machine-learning-based algorithm for real-time alarm classification and rationalization whose results can be used to support the operator's decision-making procedure. Specifically, efforts have been directed towards chattering prediction during alarm floods. Advanced techniques for chattering, redundancy, and correlation assessment have been performed on a real industrial alarm database. A modified approach has been developed to dynamically assess chattering, and the results have been used to train three different machine learning models, whose performance has been evaluated and discussed.
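The chattering definition used above (an alarm that repeats three or more times within a minute) lends itself to a simple sliding-window check over alarm timestamps. The sketch below is an illustrative reconstruction of that rule only, not the thesis's actual assessment algorithm; the tag names and data layout are invented for the example.

```python
from collections import defaultdict

def chattering_alarms(events, window=60.0, threshold=3):
    """Flag alarm tags that repeat `threshold` or more times within
    `window` seconds. `events` is a list of (timestamp, tag) pairs."""
    by_tag = defaultdict(list)
    for ts, tag in events:
        by_tag[tag].append(ts)
    chattering = set()
    for tag, times in by_tag.items():
        times.sort()
        left = 0
        for right in range(len(times)):
            # shrink the window so it spans at most `window` seconds
            while times[right] - times[left] > window:
                left += 1
            if right - left + 1 >= threshold:
                chattering.add(tag)
                break
    return chattering

events = [(0, "P-101"), (10, "P-101"), (20, "P-101"),  # 3 hits in 20 s
          (0, "T-205"), (120, "T-205")]                # too far apart
print(sorted(chattering_alarms(events)))  # → ['P-101']
```

A real rationalization tool would run this incrementally over a live alarm log rather than over a static list.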
APA, Harvard, Vancouver, ISO, and other styles
29

Vrzal, Miroslav. "Systém logování zpráv." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236737.

Full text
Abstract:
In its first part, this master's thesis describes the AS/400 and its message system, concentrating especially on the following areas: predefinition of messages and their storage, types of messages and levels of their importance, work with variables included in message text, and ways of sending messages. On the basis of the AS/400 message system, a message log system for application logging is designed and implemented for Aegis s.r.o. An analysis of existing message log systems is also part of the work: the syslog and syslog-ng used in UNIX systems are described with respect to message types, message importance, and the filtering and storing of messages, and the possibilities of Java-based application logging are described in the specific case of the Log4j utility. In the second part, the thesis describes the design and implementation of the author's own log message system.
APA, Harvard, Vancouver, ISO, and other styles
30

Papaioannou, Eva [Verfasser]. "Assessing the response of the German Baltic small-scale fishery to changes in the abundance and management of fish resources during 2000-09 - Developing and applying a spatial database to quantify the impacts of changes in resource abundance and management during 2000-09 on the structure and operation of the German Baltic Small-Scale fishery / Eva Papaioannou." Kiel : Universitätsbibliothek Kiel, 2017. http://d-nb.info/1138979708/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Elmi, Saïda. "An Advanced Skyline Approach for Imperfect Data Exploitation and Analysis." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0011/document.

Full text
Abstract:
This thesis studies an advanced database tool, the skyline operator, in the context of imperfect data modeled by the evidence theory; such data can be managed in so-called evidential databases. We first address the fundamental question of how to extend the dominance relationship to evidential data and provide optimization techniques, including the notion of marginal points, for improving the efficiency of the evidential skyline. We then introduce an efficient approach for querying and processing the evidential skyline over multiple and distributed servers. In addition, we propose efficient methods to maintain the skyline results in the evidential database context when a set of objects is inserted or deleted; the idea is to compute the new skyline incrementally, without restarting the initial operation from scratch. In a second step, we introduce the top-k skyline query over imperfect data, with a score function measuring the dominance degree of each skyline object, and develop efficient algorithms for its computation. Furthermore, since the evidential skyline is often too large to be analyzed, we define the set SKY² to refine the evidential skyline and retrieve the best evidential skyline objects (the stars), and we develop suitable algorithms based on scalable techniques to compute it efficiently. Extensive experiments were conducted to show the efficiency and the effectiveness of our approaches.
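The classical skyline over certain data, which the thesis extends to evidential data, keeps exactly the points dominated by no other point. A minimal sketch of that baseline dominance test and a naive O(n²) skyline (smaller-is-better on every dimension, a common convention not stated in the abstract):

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every dimension and
    strictly better in at least one (here, smaller is better)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def skyline(points):
    """Naive O(n^2) skyline: keep points dominated by no other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hotels as (price, distance-to-beach): keep the non-dominated trade-offs
hotels = [(50, 8), (70, 3), (60, 5), (90, 2), (80, 6)]
print(skyline(hotels))  # → [(50, 8), (70, 3), (60, 5), (90, 2)]
```

The evidential skyline replaces this crisp dominance test with one defined over belief functions, which is the subject of the thesis and not reproduced here.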
APA, Harvard, Vancouver, ISO, and other styles
32

Hameurlain, Abdelkader. "L'inference dans les bases de donnees relationnelles." Toulouse 3, 1987. http://www.theses.fr/1987TOU30281.

Full text
Abstract:
This thesis presents the various methods for processing recursive queries. Optimization strategies are studied, and several factors influencing the performance of recursive query processing methods are brought out. A method for processing queries that manipulate virtual relations is proposed.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Zebin. "Intégration des méthodes de sensibilité d'ordre élevé dans un processus de conception optimale des turbomachines : développement de méta-modèles." Thesis, Ecully, Ecole centrale de Lyon, 2014. http://www.theses.fr/2014ECDL0047/document.

Full text
Abstract:
Turbomachinery optimal design usually relies on iterative methods with either experimental or numerical evaluations, which can lead to high costs due to numerous manipulations and intensive CPU usage. In order to limit the cost and shorten the development time, the present work proposes to integrate a parameterization method and a meta-modeling method into the optimal design cycle of an axial low-speed turbomachine. The parameterization, realized through a high-order sensitivity study of the Navier-Stokes equations, allows the construction of a parameterized database that contains not only the evaluation results but also the simple and cross derivatives of the objectives with respect to the parameters. The richer information brought by the derivatives is used to advantage during meta-model construction, particularly with a Co-Kriging method employed to couple several databases. Compared with a classical derivative-free method, the economic benefit of the proposed approach lies in the use of a reduced number of evaluation points. When this number is truly small, it can happen that a single reference value is available for one or several dimensions, which requires a hypothesis on the error distribution; for those dimensions, Co-Kriging works like a Taylor extrapolation from the reference point and its derivatives. This approach was tested by building a meta-model for a fan with a conic hub, coupling databases from two geometries and two operating points. The accuracy of the response surface made it possible to run an optimization with the NSGA-2 genetic algorithm; of the two selected optima, one maximizes efficiency and the other widens the operating range. The optimization results are finally validated by additional numerical simulations.
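The Taylor-extrapolation behaviour described above (predicting an objective from one reference point plus its stored simple and cross derivatives) can be illustrated with a plain second-order Taylor expansion. This is a generic sketch, not the thesis's Co-Kriging formulation; the function and values are invented for the example.

```python
def taylor2(f0, grad, hess, x0, x):
    """Second-order Taylor extrapolation of a scalar objective around x0,
    using the value, gradient and second/cross derivatives that a
    parameterized database could store."""
    dx = [xi - x0i for xi, x0i in zip(x, x0)]
    lin = sum(g * d for g, d in zip(grad, dx))
    quad = 0.5 * sum(hess[i][j] * dx[i] * dx[j]
                     for i in range(len(dx)) for j in range(len(dx)))
    return f0 + lin + quad

# f(x, y) = 2 + 3x + y + x*y, expanded around (0, 0): the expansion is
# exact because f is quadratic, so extrapolation recovers f(1, 2) = 9.
f0, grad = 2.0, [3.0, 1.0]
hess = [[0.0, 1.0], [1.0, 0.0]]  # cross derivative d²f/dxdy = 1
print(taylor2(f0, grad, hess, (0, 0), (1.0, 2.0)))  # → 9.0
```

Co-Kriging additionally blends such derivative information with correlations across several databases, which a pure Taylor step does not capture.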
APA, Harvard, Vancouver, ISO, and other styles
34

(14570256), Dimitrios Kopanakis. "Operapaedia." Thesis, 2004. https://figshare.com/articles/thesis/Operapaedia/22014290.

Full text
Abstract:

This project is concerned with identifying and investigating a problem faced by students undertaking studies in opera, namely, the lack of readily available performance-related materials for students of opera to consult when preparing for a performance. The purpose of this project is to develop an internet-based resource containing a range of information about operas, musical scores and their interpretation, opera recordings, opera literature, opera composers, opera performers, and opera companies as a solution to this problem. The Project uses aspects of opera interpretation and opera libretti as a theoretical framework.

In order to accomplish these outcomes, this project demonstrates that no single source of information that is presently available in print or electronic form is comprehensive, reliable and convenient enough to provide opera students with information necessary for use in a production. The Project highlights the need for an electronic resource that is comprehensive for opera students.

The analysis of the usability of website research literature indicates that successful electronic resources embody the principles of: "24/7", 'anywhere', 'anytime' access; ease-of-access, user-friendly website design; and resources relevant to the specific enquiry of the user in their design.

The Project leads to the conclusion that no single source of information is comprehensive enough for an opera student to acquire, process and assimilate all of the information necessary for use in a production.

The overarching product of the project is a design and guidelines for the development of an internet-based resource aimed at eliminating the lack of performance related materials in the field of opera study. I call this resource Operapaedia.

APA, Harvard, Vancouver, ISO, and other styles
35

HUANG, HOU-XHENG, and 黃厚生. "Design of database for distribution system operation." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/18817477638167210308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Houng, Yung-Cheng, and 洪永城. "A Pipelined Database Machine with Efficient Join Operation." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/92687223573359740342.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Institute of Computer Management and Decision Science
1985 (ROC year 74)
Database management has become increasingly important in recent years, so designing an efficient database machine has become an essential topic. This thesis presents a database machine supporting the primitive operations of relational algebra. First, the database machines proposed so far are reviewed. Second, we propose a new database machine that adopts a hash-sort-merge strategy to implement these primitive operations. Finally, we compare the performance of our database machine with that of several other database machines; the results show that when the operand relations are large, our database machine is superior.
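One of the primitive operations such a machine must support is the equi-join. As a point of reference for the hash-based side of a hash-sort-merge strategy, here is a minimal in-memory hash join; this is a textbook sketch, not the thesis's pipelined hardware design, and the relation contents are invented.

```python
from collections import defaultdict

def hash_join(r, s, key_r, key_s):
    """Equi-join of relations r and s (lists of dicts): build a hash
    table on r's join key, then probe it with each tuple of s."""
    table = defaultdict(list)
    for tr in r:                      # build phase: one pass over r
        table[tr[key_r]].append(tr)
    out = []
    for ts in s:                      # probe phase: one pass over s
        for tr in table.get(ts[key_s], []):
            out.append({**tr, **ts})  # merge the matching tuples
    return out

emp  = [{"eid": 1, "dept": "A"}, {"eid": 2, "dept": "B"}]
dept = [{"dept": "A", "name": "Sales"}]
print(hash_join(emp, dept, "dept", "dept"))
# → [{'eid': 1, 'dept': 'A', 'name': 'Sales'}]
```

A hash-sort-merge pipeline would additionally sort hash buckets and merge them in streaming fashion, which suits dedicated hardware better than this single-pass dictionary.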
APA, Harvard, Vancouver, ISO, and other styles
37

Huang, Lee-Wen, and 黃立文. "Similarity Retrieval on Video Database Based upon Module Operation." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/95237395890701420282.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Information Management
1994 (ROC year 83)
In this thesis, we propose a method for retrieving videos from a video database based upon the temporal relationships among the videos. We transform each video (or query) into a set of ordered triples (Oi, Oj, Rij), where Oi and Oj are two symbol objects and Rij is the temporal relationship between Oi and Oj. We then construct a hashing table for all the triples corresponding to the videos in the video database, and every ordered triple is assigned a prime number, so that each video Vi can be transformed into a positive integer value Pi. A query is likewise transformed into a positive integer value Pq via the preconstructed hashing table. The answer to a query is the collection of videos whose corresponding Pi can be divided by Pq, i.e., the remainder of Pi/Pq is equal to 0. A video database query system is built based upon the proposed method. The computational results show that the query time increases linearly as the size of the video database grows.
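The prime-number encoding described above can be sketched in a few lines: each distinct triple gets its own prime, a video becomes the product of its triples' primes, and a query matches a video exactly when the video's value is divisible by the query's. The object names and relationships below are invented examples, and the sketch ignores practicalities such as huge products for long videos.

```python
def primes():
    """Yield primes 2, 3, 5, ... by trial division (enough for a sketch)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def make_encoder():
    """Assign each distinct triple (Oi, Oj, Rij) its own prime, as in a
    hashing table, and encode a video as the product of those primes."""
    gen, table = primes(), {}
    def encode(triples):
        v = 1
        for t in set(triples):        # each distinct triple counted once
            if t not in table:
                table[t] = next(gen)
            v *= table[t]
        return v
    return encode

encode = make_encoder()
db = {"clip1": encode([("car", "tree", "before"), ("tree", "dog", "overlap")]),
      "clip2": encode([("car", "tree", "before")])}
q = encode([("car", "tree", "before")])
# a video answers the query iff its value is divisible by the query's value
print([v for v, p in db.items() if p % q == 0])  # → ['clip1', 'clip2']
```

Divisibility works because the primes of the query's triples must all appear in the video's factorization, which is exactly set containment of triples.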
APA, Harvard, Vancouver, ISO, and other styles
38

"Auditing database integrity with special reference to UNIX and INFORMIX." Thesis, 2015. http://hdl.handle.net/10210/13474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Huang, Shin-Hau, and 黃信豪. "The Influence of DataBase Marketing on the Enterprises' Operating Performance." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/19001280294606535262.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Institute of Information Management
2012 (ROC year 101)
In the 2000s, information technology developed rapidly. Enterprises gained the ability to store large amounts of customer information (e.g., in data warehouses) and to use data-analysis tools (such as data mining) to discover the real needs of potential customers. That actionable knowledge can be used to improve the effectiveness of decision making in the marketing department. Database marketing therefore not only offers the proper products to each customer segment but also expands an enterprise's market share. To stay competitive, companies must obtain correct information in time and adopt the best strategy, so building a database-marketing system to raise strategic advantage becomes ever more important. This research takes the planning and analysis activities of database marketing as its theme and applies a literature review to form an integrated database-marketing planning procedure. The thesis takes Ford Lio Ho Motor Company, a supplier, and a dealer as the case, conducting in-depth interviews and examining historical data in order to understand the applicability and operational know-how of the underlying model. The case analysis indicates that the more the database-marketing system is used, the more profit the services generate. Although the theoretical architecture has practical value, limitations such as representativeness and objectivity exist. Finally, the research suggests that effective analysis of the database can serve as a marketing tool; how to recruit more customers and increase purchase frequency will be an important issue for dealers practicing database marketing.
APA, Harvard, Vancouver, ISO, and other styles
40

Viennot, Nicolas. "Deterministic, Mutable, and Distributed Record-Replay for Operating Systems and Database Systems." Thesis, 2016. https://doi.org/10.7916/D8VM4CTT.

Full text
Abstract:
Application record and replay is the ability to record application execution and replay it at a later time. Record-replay has many use cases, including diagnosing and debugging applications by capturing and reproducing hard-to-find bugs, providing transparent application fault tolerance by maintaining a live replica of a running program, and offline instrumentation that would be too costly to run in a production environment. Different record-replay systems may offer different levels of replay faithfulness, the strongest level being deterministic replay, which guarantees an identical reenactment of the original execution. Such a guarantee requires capturing all sources of nondeterminism during the recording phase. In the general case, such record-replay systems can dramatically hinder application performance, rendering them impractical in certain application domains. Furthermore, various use cases are incompatible with strictly replaying the original execution. For example, in a primary-secondary database scenario, the secondary database would be unable to serve additional traffic while being replicated. No record-replay system fits all use cases. This dissertation shows how to make deterministic record-replay fast and efficient, how broadening replay semantics can enable powerful new use cases, and how choosing the right level of abstraction for record-replay can support distributed and heterogeneous database replication with little effort. We explore four record-replay systems with different semantics enabling different use cases. We first present Scribe, an OS-level deterministic record-replay mechanism that supports multi-process applications on multi-core systems. One of the main challenges is to record the interaction of threads running on different CPU cores in an efficient manner.
Scribe introduces two new lightweight OS mechanisms, rendezvous points and sync points, to efficiently record nondeterministic interactions such as related system calls, signals, and shared memory accesses. Scribe allows the capture and replication of hard-to-find bugs to facilitate debugging and serves as a solid foundation for our two following systems. We then present RacePro, a process race detection system to improve software correctness. Process races occur when multiple processes access shared operating system resources, such as files, without proper synchronization. Detecting process races is difficult due to the elusive nature of these bugs and the heterogeneity of frameworks involved in such bugs. RacePro is the first tool to detect such process races. RacePro records application executions in deployed systems, allowing offline race detection by analyzing the previously recorded log. RacePro then replays the application execution and forces the manifestation of detected races to check their effect on the application. Upon failure, RacePro reports potentially harmful races to developers. Third, we present Dora, a mutable record-replay system which allows a recorded execution of an application to be replayed with a modified version of the application. Mutable record-replay provides a number of benefits for reproducing, diagnosing, and fixing software bugs. Given a recording and a modified application, finding a mutable replay is challenging, and undecidable in the general case. Despite the difficulty of the problem, we show a very simple but effective algorithm to search for suitable replays. Lastly, we present Synapse, a heterogeneous database replication system designed for Web applications. Web applications are increasingly built using a service-oriented architecture that integrates services powered by a variety of databases. Often, the same data, needed by multiple services, must be replicated across different databases and kept in sync.
Unfortunately, these databases use vendor-specific data replication engines that are not compatible with each other. To solve this challenge, Synapse operates at the application level to access a unified data representation through object-relational mappers. Additionally, Synapse leverages application semantics to replicate data with good consistency semantics, using mechanisms similar to Scribe's.
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Mao-Sheng, and 陳茂盛. "Construction and Benefits Analyze Knowledge Database for the NC Machine Operation Codes." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/48657058305877467602.

Full text
Abstract:
Master's thesis
Feng Chia University
Institute of Industrial Engineering
2002 (ROC year 91)
The knowledge of machining-part operation processes has long been regarded as tacit knowledge that is difficult to write down. Now that information and software technology are widespread, this research used the SOLIDCAM software to produce NC codes for the designed parts of plastic-machine equipment. The codes were saved and organized by computer into an operation-process knowledge database. From this database, proper NC codes can be chosen and sent to collaborating factories over the network; the factories then machine the parts according to the NC codes, achieving the goal of purchaser-managed quality (PMQ). To evaluate the benefit of the established NC-code knowledge database, three types of machines were used: a traditional machine, a CNC machine, and a machine controlled by the provided NC codes. Parts were machined on each, and the differences in production cost, proportion nonconforming, and delivery lead time were identified. The comparative results show that, relative to the traditional machine, the NC-code-controlled machine decreases average production cost by 3%, reduces the proportion nonconforming by 5%, and shortens the delivery lead time to 5 days. Compared with the traditional machine and NC-code operation, the CNC machine shows the least benefit when fewer than seven pieces are machined.
APA, Harvard, Vancouver, ISO, and other styles
42

Hong-Chan, Ma, and 馬宏燦. "A study about data network provider's operation requirement in value added database industry." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/72647000381421675862.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Aroonsrisopon, Tanet. "Analysis of stratified charge operation and negative valve overlap operation using direct fuel injection in homogeneous charge compression ignition engines." 2006. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Huang, Jen-Wei, and 黃仁偉. "An Operation to Efficiently Migrate Spatial Objects between Two Spatial Databases." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/19475245970571387044.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Department of Information Management (Master's Program)
2007 (ROC year 96)
In recent years, spatial databases have been increasingly and widely used in many applications. The R-tree provides a dynamic index structure for efficiently retrieving objects. Previous papers have discussed accessing individual objects in an R-tree; if objects are migrated one by one, some nodes in the two R-trees may overflow or underflow repeatedly, and database performance may decrease because the two R-trees are reconstructed again and again. In this thesis, we propose a new operation for the R-tree, called spatial-migrate. Its function is to combine one group of objects with another group of objects according to a special relationship between the two groups: more than one spatial object in one R-tree is migrated to another R-tree at the same time. When many single-object insertions and deletions are replaced by one multiple-object insertion or deletion, many redundant node splits and MBR adjustments can be omitted. While processing spatial-migrate, some nodes of the combining R-tree overflow because many objects are inserted; our method splits each of these nodes into several nodes at once, generating enough nodes to contain all the objects inserted into the old node, so each node undergoes at most one node split and MBR adjustment. Moreover, some nodes of the combined R-tree underflow because many objects are deleted. Traditionally, underflowing nodes are deleted from the R-tree and all their objects or child nodes must be reinserted into the combined R-tree, so the R-tree is reconstructed again and again and system performance is obviously affected. Hence, we use a mergence method to handle the objects in underflowing nodes: all remaining objects are redistributed to leaf nodes, and the combined R-tree is then reconstructed from bottom to top. Our method prevents the R-tree from shrinking after objects are deleted only to grow back after objects are reinserted. Therefore, the proposed spatial-migrate operation can obviously reduce node splits and MBR adjustments and thereby efficiently improve the index structure and database performance.
APA, Harvard, Vancouver, ISO, and other styles
45

Jones, Stephen Todd. "Implicit operating system awareness in a virtual machine monitor." 2007. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hunt, Andrew L. "Implementation of the primary operation, retrieve-common, of the multi-backend database system (MBDS)." Thesis, 1986. http://hdl.handle.net/10945/21913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Chung, Min-Huey, and 鍾明惠. "Implementation and Operation of Pressure Injury Database and Compound Query in Nursing Quality Control." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/x9eh8t.

Full text
Abstract:
Master's thesis
National Taiwan University
Executive MBA Program
2018 (ROC year 107)
By managing adverse events in medical care, administrators can detect errors, analyze the nature and causes of adverse events, and establish mechanisms to prevent them. However, most adverse-event information management systems do not provide an event prediction function, and most related research uses small samples. Therefore, the present research develops an adverse-event cloud database and management information system to support clinical personnel in the management and prevention of adverse events. The specific methods and objectives include: (1) designing and establishing an adverse-event database with MySQL, covering drug events, falls, tubing events, pressure injuries, and needle-stick injuries; (2) building a web application with PHP for users to manage data, search events, and generate statistical charts of adverse events; and (3) letting users explore the important factors behind the harm of adverse events with a decision-tree model function on the web page. In this study, a random forest model was used to predict the severity of hospital pressure injuries, with an accuracy of 54.5-64.4%. The most important factor affecting the degree of pressure injury in the adverse events was the unit where the event occurred; the others include the site of occurrence, repeated friction at the apophysis, the use of assistive devices, long-term fixed posture, and the Braden Scale score. Besides facilitating medical personnel's reporting of adverse events in the hospital, the system also provides decision-tree model results. These can serve as a clinical reference, make an important contribution to the prevention and management of adverse events, and ultimately enhance patient safety.
APA, Harvard, Vancouver, ISO, and other styles
48

CHIANG, HUNG-YOU, and 蔣宏有. "CMMI Organization Training Procedure for Constructing “Standard Operation Process” based Parameter Database in Military War-Gaming Systems – Using Air Force Parameter Database in JTLS." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/56912330168146261579.

Full text
Abstract:
Master's thesis
國防管理學院 (National Defense Management College)
國防資訊研究所 (Graduate School of National Defense Information)
Academic year 94 (2005)
With the rapid evolution of modern weapon systems, the international political situation, and combat environments, and with increasingly complex operational missions, computer-aided simulation analysis plays an essential role in tactical training and force deployment. In recent years, more and more exercises have been run to simulate military and political situations. Given this heavy use of war-gaming and simulation, the weapon parameters and the standard operation process for building parameter databases matter more to simulation outcomes than ever. MND exercises follow the exercise training cycle: design, planning, preparation, execution, verification, and analysis. Because exercise parameter databases were in the past built from simple combat plans and officers' working experience, without any strict control and checking mechanism, the simulation process could easily be disrupted by human factors, forcing the exercise to halt its combat mission. The goal of this research is to establish a standard operation process for building the weapon parameter database. CMMI is an international software engineering standard that can help identify present problems in building exercise parameter databases. Regarding practical CMMI work, this study provides a standard operation process and structure, uses surveys and interviews to gather practitioners' opinions and to evaluate participants' performance after adopting CMMI, and aims to reduce the cost of managing and collecting weapon parameter data while improving the effectiveness and efficiency of exercise simulation, in order to achieve the military mission goal.
APA, Harvard, Vancouver, ISO, and other styles
49

Teng, Kuang-Hung (鄧光宏). "A Study of String Matching System Based on Chinese Word Segmentation and Database Set Operation." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/83un3v.

Full text
Abstract:
Master's thesis
中華大學 (Chung Hua University)
資訊管理學系碩士班 (Master's Program, Department of Information Management)
Academic year 101 (2012)
Through the Internet and information technology, which have become ubiquitous in recent years, people can easily obtain data, information, and even knowledge. Some copy and modify downloaded material wholesale, turning the digitized information into their own written work. This easy access to downloaded information and data has led to growing plagiarism problems. Many past studies use statistics, vectors, matrices, or positional changes for comparison, but inserting superfluous words or sentences between strings greatly reduces the accuracy of such matching systems; moreover, if a matching system cannot find the alignments correctly, students may keep plagiarizing. Students who plagiarize rarely copy full sentences verbatim; instead they modify slightly, for instance by adding many superfluous words to a paragraph or changing the order of words in the original article, which lowers the accuracy of matching systems. This study uses Chinese word segmentation and database set operations as the basis for a string matching system that handles excessive superfluous words and reordering. The Chinese word segmentation method breaks a string into words, and database set operations then correctly identify the strings' common words for comparison; set operations in the database can be more efficient than a program holding large numbers of words in memory. The study builds a prototype and compares it with other word-matching methods and with the T brand's matching system to verify its efficiency and accuracy. In the end, the prototype's efficiency does not perform well, but its accuracy is 100%: it identifies and highlights all identical wordings in the strings.
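The two mechanisms this abstract combines can be sketched in a few lines: a toy greedy maximum-matching segmenter stands in for the thesis's Chinese word segmentation, and SQLite's `INTERSECT` stands in for its database set operation. The dictionary, DBMS, and schema of the real system are unknown; everything below is an illustrative assumption:

```python
import sqlite3

# Toy lexicon for greedy forward maximum matching; a real system would
# use a full Chinese dictionary or a trained segmenter.
DICT = {"資訊", "管理", "系統", "字串", "比對", "研究"}
MAX_LEN = max(len(w) for w in DICT)

def segment(text):
    """Greedy forward maximum-matching segmentation; unknown
    characters fall back to single-character words."""
    words, i = [], 0
    while i < len(text):
        for n in range(min(MAX_LEN, len(text) - i), 0, -1):
            cand = text[i:i + n]
            if n == 1 or cand in DICT:
                words.append(cand)
                i += n
                break
    return words

def common_words(doc_a, doc_b):
    """Load each document's segmented words into a table and let the
    database's set operation (INTERSECT) find the words in common."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE a (w TEXT)")
    con.execute("CREATE TABLE b (w TEXT)")
    con.executemany("INSERT INTO a VALUES (?)", [(w,) for w in segment(doc_a)])
    con.executemany("INSERT INTO b VALUES (?)", [(w,) for w in segment(doc_b)])
    rows = con.execute("SELECT w FROM a INTERSECT SELECT w FROM b").fetchall()
    con.close()
    return {r[0] for r in rows}
```

Because the comparison is word-based rather than character-position-based, inserting superfluous words or reordering a paragraph does not hide the shared vocabulary, which is the robustness property the thesis targets.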
APA, Harvard, Vancouver, ISO, and other styles
50

Oh, Chongsun. "Models for design, evaluation, and operation of RFID in conveyor-based systems /." 2009. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles