
Dissertations / Theses on the topic 'Database centres'


Consult the top 50 dissertations / theses for your research on the topic 'Database centres.'


1

Wongchachom, Chumnong. "An investigation into a community information database system in the northeast of Thailand: community empowerment through community learning centres." Connect to thesis, 2006. http://portal.ecu.edu.au/adt-public/adt-ECU2006.0018.html.

2

Moratal Ferrando, Núria. "The role of large research infrastructures in scientific creativity : a user-level analysis in the cases of a biological database platform and a synchrotron." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAB003/document.

Abstract:
At the origin of this thesis is the observation that science is changing. This change is characterized by two major global trends: a growing reliance on large, expensive, shared equipment, and the production of mass data that are also very expensive to store and manage. In both cases these resources are financed by public programs and offered to the scientific community, on a principle of openness to external users, in the form of Research Infrastructures (RIs). Several factors suggest that RIs are favourable places for creativity; however, the means by which RIs promote creativity have not been studied. The purpose of this thesis is to answer this question, which divides into two research sub-questions. First, how can RIs contribute to the scientific creativity of their users? Second, how can this impact be measured?
3

Ibrahim, Karim. "Management of Big Annotations in Relational Database Management Systems." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/272.

Abstract:
Annotations play a key role in understanding and describing data, and annotation management has become an integral component of most emerging applications, such as scientific databases. Scientists need to exchange not only data but also their thoughts, comments, and annotations on the data. Annotations represent comments, the lineage of data, descriptions, and much more. Several annotation management techniques have therefore been proposed to handle annotations efficiently and abstractly. However, with the increasing scale of collaboration and the extensive use of annotations among users and scientists, the number and size of annotations may far exceed the size of the original data itself, and current annotation management techniques do not address large-scale annotation management. In this work, we tackle Big annotations from three different perspectives: (1) user-centric annotation propagation, (2) proactive annotation management, and (3) InsightNotes summary-based querying. We capture users' preferences in profiles and personalize annotation propagation at query time by reporting the most relevant annotations (per tuple) for each user based on a time plan; we provide three time-based plans and support both static and dynamic profiles for each user. We support proactive annotation management, which suggests data tuples to be annotated when a new annotation references a data value but the user has not annotated the data precisely. Moreover, we extend InsightNotes (summary-based annotation management in relational databases) with a query language that enables the user to query annotation summaries and add predicates on the summaries themselves. Our system is implemented inside PostgreSQL.
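The profile-driven, per-tuple propagation described in the abstract can be sketched in miniature; the function name, tag-overlap scoring, and data shapes below are invented for illustration and are not the thesis's actual design:

```python
# Hypothetical sketch: report, per tuple, only the annotations most relevant
# to a user's profile (here, relevance = overlap with the user's preferred tags).
def propagate(annotations, profile, top_k=2):
    """annotations: list of (tuple_id, text, tags); profile: set of preferred tags."""
    by_tuple = {}
    for tuple_id, text, tags in annotations:
        score = len(profile & tags)  # how well this annotation matches the profile
        by_tuple.setdefault(tuple_id, []).append((score, text))
    # Keep the top_k matching annotations per tuple, best first.
    return {tid: [text for score, text in sorted(anns, reverse=True)[:top_k] if score > 0]
            for tid, anns in by_tuple.items()}
```

A user whose profile prefers, say, provenance and quality-control tags would then see only those annotations on each returned tuple instead of the full, potentially much larger, annotation set.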
4

Chen, Xin. "Human-centered semantic retrieval in multimedia databases." Birmingham, Ala. : University of Alabama at Birmingham, 2008. https://www.mhsl.uab.edu/dt/2008p/chen.pdf.

Abstract:
Thesis (Ph. D.)--University of Alabama at Birmingham, 2008.
Additional advisors: Barrett R. Bryant, Yuhua Song, Alan Sprague, Robert W. Thacker. Description based on contents viewed Oct. 8, 2008; title from PDF t.p. Includes bibliographical references (p. 172-183).
5

Mencner, Jacek. "Využití nástrojů Business Intelligence k zefektivnění zákaznického centra." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2017. http://www.nusl.cz/ntk/nusl-318318.

Abstract:
This master's thesis deals with the proposal of a Business Intelligence solution. The main task is an analysis of the customer-care processes, on the basis of which the Business Intelligence solution has been designed. Thanks to the knowledge obtained, changes have been made to make the contact process more effective. The first part of the thesis describes the theoretical foundations of Business Intelligence; the second part contains an analysis of the current situation and the solution proposal.
6

Timm, Sarah Louise. "The Generation and Management of Museum-Centered Geologic Materials and Information." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/31572.

Abstract:
This thesis integrates three disciplines: geosciences, computer science, and museum collections management. Although these are not commonly integrated, by developing their intersection this thesis uniquely contributes a much-needed system for effectively managing geological collections. The lack of effective organization and management of collections can result in a serious problem: not only is history lost, but so is the potential for collecting further data from documented samples using newer analytical techniques. Using the Department of Geosciences at Virginia Tech as a beta testing ground, the electronic geological management system, EGEMS, was developed (Chapter 2). A database such as EGEMS should provide ready access to useful information including a material's provenance or current location, as well as any published analytical data. Past experience volunteering in museums allowed the author to design a system that is easily queried for such information. The organizational scheme and data model integral to the functionality of EGEMS were driven by direct experience with geological research, in particular the electron microprobe analyses of Mn-rich minerals from the Hutter Mine, Virginia (Chapter 1). The final component of this thesis (Chapter 3) describes the facet of museum science that is most important: communication. This project records the development of a museum exhibit. Titled "The Search for the Mysterious Mineral," this approach relies on pedagogical tools to engage the audience and to illustrate that the scientific method used by a geologist is the same technique used in any problem solving. The exploration involved in these projects has led to an enhanced understanding of, and appreciation for, the connections among generating, managing, and communicating geological information.
Master of Science
7

Lännhult, Peter. "Principles of a Central Database for System Interfaces during Train Development." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-12014.

Abstract:
This thesis developed a database solution for storing interface data for the different systems in a train; the interface data are used in the design of data communication between the systems in the vehicles. The database solution focuses on the following problems: revision control of project-related data, consistency of interface data between documentation and the database, the possibility of rolling the database back to an earlier revision, and the possibility of extracting delta documents between two revisions in the database. To demonstrate the database solution, a user interface program was created that communicates with the database. Revision control of the database is solved by dividing the project-related data into three sections: an approved, a modified, and a revised section. The approved section always contains the latest approved data, so data remain readable even while they are subject to a revision. The modified section contains data that are currently being changed. Obsolete data are stored in the revised section. To avoid inconsistency of interface data stored both in Word documents and in the database, the data are extracted from the database and inserted into tables in the Word documents; the Word documents contain bookmarks where the tables are to be inserted. Algorithms for rolling the database back to an earlier revision and for extracting delta documents were created, but these algorithms are not implemented in the user interface program. As a result of this thesis, the interface data are revision controlled and no data are removed from the database during the change process; the data are moved between sections with different flags and revision numbers. Data are removed only if the database is rolled back to an earlier revision. The functionality to transfer data from the database into tables in Word documents is verified.
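The three-section scheme described above never deletes data during a change; rows only move between sections with new revision numbers. A toy sketch of that idea (hypothetical structure, not the thesis implementation):

```python
class InterfaceStore:
    """Toy model of the approved / modified / revised sections."""
    def __init__(self):
        self.approved, self.modified, self.revised = {}, {}, []

    def start_change(self, key):
        # Copy the approved row into the modified section with a bumped
        # revision number; the approved row stays readable meanwhile.
        row = self.approved[key]
        self.modified[key] = dict(row, revision=row["revision"] + 1)

    def approve(self, key):
        # The old approved row becomes obsolete but is archived in 'revised',
        # so nothing is deleted by the change process itself.
        self.revised.append(self.approved[key])
        self.approved[key] = self.modified.pop(key)

store = InterfaceStore()
store.approved["CAN1"] = {"signal": "DoorOpen", "revision": 1}
store.start_change("CAN1")   # revision 1 is still readable here
store.approve("CAN1")        # revision 2 approved, revision 1 archived
```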
8

Lehner, Wolfgang. "Energy-Efficient In-Memory Database Computing." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-115547.

Abstract:
The efficient and flexible management of large datasets is one of the core requirements of modern business applications. Having access to consistent and up-to-date information is the foundation for operational, tactical, and strategic decision making. Within the last few years, the database community sparked a large number of extremely innovative research projects to push the envelope in the context of modern database system architectures. In this paper, we outline requirements and influencing factors to identify some of the hot research topics in database management systems. We argue that—even after 30 years of active database research—the time is right to rethink some of the core architectural principles and come up with novel approaches to meet the requirements of the next decades in data management. The sheer number of diverse and novel (e.g., scientific) application areas, the existence of modern hardware capabilities, and the need of large data centers to become more energy-efficient will be the drivers for database research in the years to come.
9

Wormet, Jody R. "Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17480.

Abstract:
Approved for public release; distribution is unlimited
This research utilized a semi-structured survey instrument delivered to subject matter experts within the national network of fusion centers and employed a constant-comparison method to analyze the survey results. This smart-practice exploration, informed through an appreciative inquiry lens, found considerable variation in how fusion centers plan for, gather requirements for, select, and acquire federated search tools to bridge disparate databases. These findings confirmed the initial hypothesis that fusion centers have received very little guidance on how to bridge disconnected databases to enhance the analytical process. This research should contribute to the literature by offering a greater understanding of the challenges faced by fusion centers when considering integrating federated search tools; by evaluating the importance of the planning, requirements-gathering, selection, and acquisition processes for integrating federated search tools; by acknowledging the challenges faced by some fusion centers during these integration processes; and by identifying possible solutions to mitigate those challenges. As a result, the research will be useful to individual fusion centers and, more broadly, to the National Fusion Center Association, which provides leadership to the national network of fusion centers by sharing lessons learned, smart practices, and other policy guidance.
10

Ahmed, S. M. Zabed. "A user-centered design of a web-based interface to bibliographic databases." Thesis, Loughborough University, 2002. https://dspace.lboro.ac.uk/2134/6893.

Abstract:
This thesis reports the results of a research study into the usefulness of a user-centred approach for designing information retrieval (IR) interfaces. The main objective of the research was to examine the usability of an existing Web-based IR system in order to design a user-centred prototype Web interface. A series of usability experiments was carried out with the Web of Science. The first experiment used both novice and experienced users to assess their performance with, and satisfaction with, the interface. A set of search tasks was obtained from a user survey and was used in the study. The results showed no significant differences between the two search groups in the time taken to complete the tasks or in the number of different search terms used. Novice users were significantly more satisfied with the interface than the experienced group; however, the experienced group was significantly more successful and made fewer errors than the novice users. The second experiment examined novices' learning and retention with the Web of Science using the same equipment, tasks, and environment. The results of the initial learning phase showed that novices could readily pick up interface functionality when brief training was provided. However, their retention of search skills weakened over time, and their subjective satisfaction with the interface also diminished from learning to retention. These findings suggested that the fundamental difficulties of searching IR systems remain in the Web-based version. A heuristic evaluation was carried out to identify the usability problems in the Web of Science interface; three human-factors experts evaluated the interface. The heuristic evaluation was very helpful in identifying interface design issues for Web IR systems, the most fundamental of which was increasing the match between the system and the real world.
The results of both the usability testing and the heuristic evaluations served as a baseline for designing a prototype Web interface. The prototype was designed based on a conceptual model of users' information seeking. Various usability evaluation methods were used to test the usability of the prototype system. After each round of testing, the interface was modified in accordance with the test findings. A summative evaluation of the prototype interface showed that both novice and experienced users improved their search performance. Comparative analysis with the earlier usability studies also showed significant improvements in performance and satisfaction with the prototype. These results show that user-centred methods can yield better interface design for IR systems.
11

Lopez, Julien. "Au-delà des frontières entre langages de programmation et bases de données." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS235/document.

Abstract:
Several classes of solutions allow programming languages to express queries: specific APIs such as JDBC, object-relational mappings (ORMs) such as Hibernate, and language-integrated query frameworks such as Microsoft's LINQ. However, most of these solutions do not allow efficient queries across several databases at once, and none allows the use of complex application logic from the programming language in queries. In this thesis, we create a language-integrated query framework called BOLDR that, in particular, allows the evaluation in databases of queries written in general-purpose programming languages that contain application logic and that target different databases, possibly based on different data models. In this framework, application queries are translated into an intermediate query representation, then rewritten in order to avoid the "query avalanche" phenomenon and to make the most of database optimizations, and finally sent for evaluation to the corresponding databases; the results are converted back into the programming language of the application. Our experiments show that the techniques we implemented are applicable to real-world database applications, successfully handling a wide range of language-integrated queries with good performance.
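The "query avalanche" that such rewriting avoids is the familiar pattern where application code issues one database round trip per loop iteration; the sketch below contrasts it with the single set-oriented query a rewrite produces (an illustrative SQLite example with invented tables, not BOLDR code):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers(id INTEGER, name TEXT);
    CREATE TABLE orders(id INTEGER, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1);
""")

# Avalanche: the loop issues one query per order (N + 1 round trips).
names_naive = [db.execute("SELECT name FROM customers WHERE id = ?", (cid,)).fetchone()[0]
               for (cid,) in db.execute("SELECT customer_id FROM orders")]

# Rewritten: the whole loop collapses into one set-oriented query that
# the database engine can optimize as a join.
names_joined = [name for (name,) in db.execute(
    "SELECT c.name FROM orders o JOIN customers c ON c.id = o.customer_id ORDER BY o.id")]
```

Both produce the same customer names, but the second form sends a single statement to the database regardless of how many orders exist.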
12

De, Jongh Martha Susanna. "A national electronic database of special music collections in South Africa." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2370.

Abstract:
Thesis (MMus (Music))--University of Stellenbosch, 2009.
In the absence of a state-sponsored South African archive that focuses on collecting, ordering, cataloguing and preserving special music collections for research, the Documentation Centre for Music (DOMUS) was established in 2005 as a research project at the University of Stellenbosch. Music research in South Africa is often impeded by inaccessibility of materials, staff shortages at archives and libraries, financial constraints and time-consuming ordering and cataloguing processes. Additionally there is, locally, restricted knowledge of the existence, location and status of relevant primary sources. Accessibility clearly depends on knowing of the existence of materials, as well as the extent to which collections have been ordered and catalogued. An overview of repositories such as the Nasionale Afrikaanse Letterkundige Museum and Navorsingsentrum (NALN), the now defunct National Documentation Centre for Music and the International Library of African Music (ILAM) paints a troubling picture of archival neglect and disintegration. Apart from ILAM, which has a very specific collecting and research focus, this trend was one that ostensibly started in the 1980s and is still continuing. It could be ascribed to a lack of planning and forward thinking under the previous political dispensation, aggravated by policies of transformation and restructuring in the current one. Existing sources supporting research on primary materials are dated and not discipline-specific. Thus this study aims to address issues of inaccessibility of primary music materials by creating a comprehensive and ongoing national electronic database of special music collections in South Africa. It is hoped that this will help to alert researchers to the existence and status of special music collections housed at various levels of South African academic and civil society.
13

Holland, Henry. "A collated digital, geological map database for the central Namaqua Province using geographical information system technology." Thesis, Rhodes University, 1997. http://hdl.handle.net/10962/d1005548.

Abstract:
The geology of the Namaqua Province is notoriously difficult to map and interpret due to polymetamorphic and multiple deformation events and limited outcrop. Current maps of the Province reflect diverse interpretations of stratigraphy as a consequence of these difficulties. A Geographic Information System (GIS) is essentially a digital database together with a set of functions and procedures to capture, analyse, and manipulate spatially related data; a GIS is therefore ideally suited to the study and analysis of maps. A digital map database was established, using modern GIS technology, to facilitate the collation of existing maps of an area in the Central Namaqua Province (CNP). This database is based on a lithological classification system similar to that used by Harris (1992), rather than on an interpretive stratigraphic model. To establish the database, existing geological maps were scanned into a GIS, and lines of outcrop and lithological contacts were digitised using a manual line-following process, one of the functions native to a GIS. Attribute data were then attached to the resultant polygons. The attribute database consists of lithological, textural, and mineralogical data, as well as stratigraphical classification data according to the South African Committee for Stratigraphy (SACS) and correlative names assigned to units by the Precambrian Research Unit, the Geological Survey of South Africa, the Bushmanland Research Group, and the University of the Orange Free State. Other attribute data included in the database are tectonic and absolute-age information and the terrane classification for the area. This database reflects the main objective of the project and also serves as a basis for further expansion of a geological GIS for the CNP. Cartographic and database capabilities of the GIS were employed to produce a collated lithological map of the CNP.
A TNTmips™ Spatial Manipulation Language routine was written to produce a database containing two fields linked to each polygon: one for lithology and one for a correlation probability factor. Correlation factors are calculated in this routine from three variables, namely the prominence a worker attached to a specific lithology within a unit or outcrop, the agreement amongst the various workers on the actual lithology present within an outcrop, and the correspondence between the source of the spatial element (mapped outcrop) and the source of the attribute data attached to it. Outcrops are displayed on the map according to the lithology with the highest correlation factor, providing a unique view of the spatial relationships and distribution patterns of lithological units in the CNP. A second map was produced indicating the correlation factors for lithologies within the CNP. Thematic maps are produced in a GIS by selecting spatial elements according to a set of criteria, usually based on the attribute database, and then displaying the elements as maps. Maps created by this process are known as customised maps, since users of the GIS can customise the selection and display of elements according to their needs. For instance, all outcrops of rock units containing particular lithologies of a given age occurring in a specific terrane can be displayed, either on screen or printed out as a map. The database also makes it possible to plot maps according to different stratigraphic classification systems. Areas where various workers disagree on the stratigraphic classification of units can be isolated and displayed as separate maps to aid in the collation process. The database can assist SACS in identifying areas in the CNP where stratigraphic classification is still lacking or agreement on stratigraphic nomenclature has not yet been attained.
More than one database can be attached to the spatial elements in a GIS, and the Namaqua-GIS can therefore be expanded to include geochemical, geophysical, economic, structural and geographical data. Other data on the area, such as more detailed maps, photographs and satellite images can be attached to the lithological map database in the correct spatial relationship. Another advantage of a GIS is the facility to continually update the database(s) as more information becomes available and/or as interpretation of the area is refined.
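The correlation-factor mechanism described above can be illustrated with a toy calculation. The abstract does not give the actual formula, so the weights and the 0-to-1 scaling below are purely hypothetical:

```python
# Hypothetical weighting of the three variables named in the abstract:
# prominence, agreement among workers, and source correspondence (each 0..1).
def correlation_factor(prominence, agreement, correspondence, weights=(0.4, 0.4, 0.2)):
    return sum(w * v for w, v in zip(weights, (prominence, agreement, correspondence)))

def display_lithology(candidates):
    """candidates: (lithology, prominence, agreement, correspondence) tuples
    for one outcrop polygon; return the lithology with the highest factor."""
    return max(candidates, key=lambda c: correlation_factor(*c[1:]))[0]
```

Under this sketch, an outcrop recorded as both gneiss and schist would be drawn as whichever lithology scores the higher combined factor, mirroring how the collated map resolves disagreements between source maps.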
14

Pleehajinda, Parawee. "Database centric software test management framework for test metrics." Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-187120.

Abstract:
Large amounts of test data generated by the software testing tools currently in use (QA-C/QA-C++ and Cantata) contain a variety of different values. This variance causes enormous challenges in data aggregation and interpretation that directly affect the generation of test metrics. In response, this master's thesis introduces a database-centric test management framework for test metrics that aims at centrally handling the big data as well as facilitating the generation of test metrics. Each test result is individually parsed into a particular format before being stored in a centralized database. A friendly front-end user interface is connected and synchronized with the database, allowing authorized users to interact with the stored data. With a granularity tracking mechanism, any stored data can be systematically located and programmatically interpreted by a test metrics generator to create various kinds of high-quality test metrics. The automation of the framework is driven by Jenkins CI, which performs the sequential operations automatically and periodically. The technology greatly and effectively reduces development effort and enhances the performance of the software testing process. In this research, the framework manages testing processes only at the software-unit level; however, because the database is independent of the level of software testing, it could be expanded to support software development at any level.
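The core pattern, parsing heterogeneous tool reports into one normalized schema and deriving metrics centrally, can be sketched with SQLite; the table and field names below are invented for illustration and are not the thesis's schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results(tool TEXT, unit TEXT, passed INTEGER, total INTEGER)")

def store_result(tool, unit, passed, total):
    # Every tool's report is parsed into the same normalized row format.
    db.execute("INSERT INTO results VALUES (?, ?, ?, ?)", (tool, unit, passed, total))

def pass_rate():
    # A metric derived centrally, independent of which tool produced the rows.
    passed, total = db.execute("SELECT SUM(passed), SUM(total) FROM results").fetchone()
    return passed / total

store_result("Cantata", "uart_driver", 18, 20)
store_result("QA-C", "crc_check", 45, 50)
```

Because the metric query reads only the normalized table, adding a new testing tool requires only a new parser, not a new metrics generator.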
15

Kreis, Nellie S. "A Feasibility Study of the Implementation of CD ROM Databases in Secondary School Library Media Centers." NSUWorks, 1987. http://nsuworks.nova.edu/gscis_etd/650.

Abstract:
School library media centers have traditionally provided students and teachers with information resources in two formats: print materials, including books, periodicals, and microforms; and audiovisual materials, including film, television, and audio devices. Recent technological advances and the changing nature of the information and publishing industries now suggest it might be desirable for school library media centers to provide information resources in an electronic format, including computer, compact disc, and/or video disc databases. This paper examines the feasibility of introducing a site-specific periodical literature CD ROM database into the secondary school library media center in three ways: an attitude survey, a queuing analysis, and a cost analysis. Four secondary schools in the Dade County Public School system were selected to receive a CD ROM database on a trial basis. A posttest-only, equivalent-groups AB design was used to measure students' confidence in their ability to locate periodical literature, students' perceptions of the difficulty of the task and of the availability of such literature in the school library media center, and student satisfaction with the school library media center and its collection. The study also applied standard quantitative techniques, i.e., queuing models and simulation, to the use of the traditional print periodical index and a CD ROM periodical index. To develop a queuing model, arrival rates at the index table and the service (search) times were needed. Service times were based on 1088 observations of students using The Readers' Guide to Periodical Literature and INFOTRAC II, a CD ROM database. Arrival rates were based on two typical school library media center patron access patterns: a "worst case" and a "best case" scenario. Using this information, two models were constructed; each model was run twice, once for The Readers' Guide to Periodical Literature and once for INFOTRAC II.
The cost analysis included such components as the school discretionary budget and district funding for media specialist salaries. School discretionary expenditures included initial start-up costs for hardware and recurring costs for subscriptions and supplies. Media specialist salary costs to the district were calculated based on the time used for instruction in periodical research and for intervention and help in student searches. The attitude survey showed that in every area tested, INFOTRAC II users had a more positive attitude toward periodical research and the school library media center than did Readers Guide users. The search data gathered for the queuing analysis were themselves significant. Students using The Readers Guide to Periodical Literature took an average of 13.68 minutes to complete a search; INFOTRAC II users needed only 5.4 minutes. The queuing analyses showed that a single CD ROM terminal, when compared to a seven-volume Readers Guide, makes a poorly planned media center activity worse, and a well-planned media center activity better. The financial impact of the CD ROM database is felt most by the school-site discretionary budget. Annual subscription costs for CD ROM products are significantly higher than for print indexes. Data collected indicated that the CD ROM index, INFOTRAC II, required little or no instruction by media specialists, while formal instruction in the use of The Readers Guide to Periodical Literature ranged from 15 to 180 minutes per class. At the same time, daily logs showed that Readers Guide users required an average of 46 minutes a day of professional help, while INFOTRAC II users needed only 15 minutes a day. When the costs of instruction and intervention by media specialists are included, the more expensive CD ROM product becomes highly competitive with the print index.
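As a rough illustration of the kind of queuing comparison described above, the measured mean search times (13.68 minutes for the print index, 5.4 minutes for INFOTRAC II) can be plugged into a single-server M/M/1 model. The arrival rate and the M/M/1 assumption below are illustrative only; the thesis built its own "worst case" and "best case" models.

```python
# Hypothetical M/M/1 sketch of the print-index vs. CD ROM comparison.
# Only the mean service times come from the abstract; the arrival rate
# and the exponential-service assumption are invented for illustration.

def mm1_mean_wait(mean_service_min: float, arrivals_per_min: float) -> float:
    """Mean time a student spends waiting in queue (Wq) at an M/M/1 station."""
    mu = 1.0 / mean_service_min          # service rate (students/min)
    lam = arrivals_per_min               # arrival rate (students/min)
    if lam >= mu:
        raise ValueError("unstable queue: arrivals exceed service capacity")
    rho = lam / mu                       # utilization
    return rho / (mu - lam)              # Wq = rho / (mu - lambda)

LAM = 0.06   # assumed: roughly one student every 17 minutes
wq_print = mm1_mean_wait(13.68, LAM)
wq_cdrom = mm1_mean_wait(5.4, LAM)
print(f"Readers Guide Wq = {wq_print:.1f} min; INFOTRAC II Wq = {wq_cdrom:.1f} min")
```

Even this toy model shows why the service-time difference matters so much: with the slow print index the station runs near saturation, so queue waits blow up nonlinearly, while the same arrival stream barely queues at the faster CD ROM terminal.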
In conclusion, secondary schools that place a high priority on expanding students' perceptions of how to access information and creating enthusiasm for independent investigations should consider CD ROM databases. In addition, the CD ROM database offers a vehicle for providing computer usage experiences for the entire student body. Careful evaluation should be given to the issues of equitable access, availability of cited sources, library media center management chores and individual school collection development goals when making a selection.
APA, Harvard, Vancouver, ISO, and other styles
16

Willmes, Christian [Verfasser], Georg [Gutachter] Bareth, and Ulrich [Gutachter] Lang. "CRC806-Database: A semantic e-Science infrastructure for an interdisciplinary research centre / Christian Willmes ; Gutachter: Georg Bareth, Ulrich Lang." Köln : Universitäts- und Stadtbibliothek Köln, 2016. http://d-nb.info/1126094218/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Brown, Mark Keller. "Optical laser technology and its application to Defense Manpower Data Center's (DMDC) Querry Facsimile (QFAX) database system." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27187.

Full text
Abstract:
Optical storage technology holds the capability to increase the amount of data stored on a single 5 1/4-inch disc from 1.2 megabytes (MB) to 640 MB. The three basic types of optical disks, Compact Disk-Read Only Memory (CD-ROM), Write Once, Read Many (WORM), and erasable optical, each have their own application niche. For this reason, it is critical for managers to analyze present systems carefully prior to seeking optical storage solutions. An in-depth evaluation of the performance and interfacing requirements of currently marketed optical systems was performed. That evaluation was used in the process of determining whether Defense Manpower Data Center's (DMDC) Querry Facsimile (QFAX) System, a system of nine databases currently stored on a direct access storage device (DASD), was a candidate for an optical storage application. Additional consideration was given to industry standards for optical devices. A detailed analysis of the current system configuration and end-users' requirements was made to determine the acceptability of optical system interfaces and associated capabilities. Keywords: Information storage; Interactive online databases; Theses.
APA, Harvard, Vancouver, ISO, and other styles
18

Szoltys, Kryštof. "Parametrické CAD systémy a databáze součástí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217957.

Full text
Abstract:
Autodesk Inventor is a full 3D CAD system. It includes tools for the working environment and for component design, including information data management and technical support. The target of the first part is to describe the most important new features brought by the Autodesk Inventor 2009 version. The next chapters describe the creation of a common iPart, which is basically a set of different variants (dimensions, material, etc.) of one entity, and the preparation needed before its correct publication to the content center, which is a virtual database of all iParts. It is also shown how to create the content center, how to work with it and how to adjust its data. The aim of the work is then the creation of a database of stator and rotor packets for the firm ATAS electromotor Náchod Inc. The thesis describes the creation of the laminations as iParts and their publication to the new content center.
APA, Harvard, Vancouver, ISO, and other styles
19

Wilson, John Robert. "U/Pb Zircon Ages of Plutons from the Central Appalachians and GIS-Based Assessment of Plutons with Comments on Their Regional Tectonic Significance." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/35248.

Full text
Abstract:
The rocks of the Appalachian orogen are world-class examples of collisional and extensional tectonics, where multiple episodes of mountain building and rifting from the Precambrian to the present are preserved in the geologic record. These orogenic events produced plutonic rocks, which can be used as probes of the thermal state of the source region. SIMS (secondary ion mass spectrometry) U/Pb ages of zircons were obtained for ten plutons (Leatherwood, Rich Acres, Melrose, Buckingham, Diana Mills, Columbia, Poore Creek, Green Springs, Lahore and Ellisville) within Virginia. These plutons are chemically and isotopically distinct and show an age distribution in which felsic rocks are approximately 440 Ma and mafic rocks are approximately 430 Ma. Initial strontium isotopic ratios and bulk geochemical analyses were also performed. These analyses show the bimodal nature of magmatism within this region. In order to facilitate the management of geologic data, including radiometric ages, strontium isotope initial ratios and major element geochemistry, a GIS-based approach has been developed. Geospatially referenced sample locations and associated attribute data allow for analysis of the data and an assessment of the accuracy of the field locations of plutons at both regional and local scales. The GIS-based assessment of plutons also allows for the incorporation of other multidisciplinary databases to enhance analysis of regional and local geologic processes. Extending such coverage to the central Appalachians (distribution of lithotectonic belts, plutons, and their ages and compositions) will enable a rapid assessment of tectonic models.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
20

Falk, Joakim. "Usability guided development of a participant database system." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-146269.

Full text
Abstract:
This project consisted of the development and evaluation of a web-based participant database system to replace a spreadsheet-based one for the project “Vi Ses! Språkvänner i Linköping”. The design and implementation were done iteratively using a collection of usability guidelines as well as the results of a set of user tests. User tests were also used for the evaluation of the web database system. The task during the user tests was participant matching, and the main measurement taken was task completion time. The project resulted in a web database system that could fully replace the spreadsheet database system. The evaluation tests showed that this system was both faster for matching participants and less error-prone.
Detta projekt bestod av utvecklingen och evalueringen av ett webbaserat databassystem för deltagare. Den skulle ersätta ett existerande Excel-baserat databassystem för projektet “Vi Ses! Språkvänner i Linköping”. Designen och implementationen gjordes iterativt med hjälp av användbarhetsriktlinjer och resultat från användartester. Användartester användes även för evalueringen av webbdatabassystemet. Uppgiften i användartesterna var deltagarmatchning och det huvudsakliga mätvärdet var hur lång tid uppgiften tog. Projektet resulterade i ett webbaserat databassystem som helt kunde ersätta det Excel-baserade systemet. Evalueringstesterna visade att systemet var både snabbare och mindre felbenäget att matcha deltagare i.
APA, Harvard, Vancouver, ISO, and other styles
21

Heese, Ralf. "Resource Centered Store." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17408.

Full text
Abstract:
Mit dem Resource Description Framework (RDF) können Eigenschaften von und die Beziehungen zwischen Ressourcen maschinenverarbeitbar beschrieben werden. Dadurch werden diese Daten für Maschinen zugänglicher und können unter anderem automatisch Daten zu einer Ressource lokalisieren und verarbeiten, unterschiedliche Bedeutungen einer Zeichenkette erkennen und implizite Informationen ableiten. Das Datenmodell von RDF und der zugehörigen Anfragesprache SPARQL basiert auf gerichteten und beschrifteten Multigraphen. Forschungsergebnisse haben gezeigt, dass relationale DBMS zum Verwalten von RDF-Daten ungeeignet sind. Native basierende RDF-DBMS können Anfragen in kürzerer Zeit verarbeiten. Der Leistungsgewinn wird durch redundantes Speichern von Tripeln in mehreren B+-Bäumen erzielt. Jedoch sind Join-ähnliche Operationen zum Berechnen des Ergebnisses erforderlich, was bei größeren Anfragen zu Leistungseinbußen führt. In dieser Arbeit wird der Resource Centered Store (RCS) entwickelt, dessen Speichermodell RDF-inhärente Eigenschaften ausnutzt, um Anfragen ohne die Notwendigkeit redundanter Speicherung effizient beantworten zu können. Die grundlegende Idee des RCS-Speichermodells besteht im Gruppieren der Daten als sternförmigen Teilgraphen auf Datenbankseiten. Die verwendeten Prinzipien ähnelt denen in RDBMS und daher können deren Algorithmen zur Beantwortung von Anfragen wiederverwendet werden. Darüber hinaus werden Transformationsregeln und Heuristiken zum Optimieren von SPARQL-Anfragen zum Finden eines möglichst optimalen Ausführungsplans definiert. In diesem Kontext wurden auch graphmusterbasierte Indexe spezifiziert und deren Nutzen für die Verarbeitung von Anfragen untersucht. Das RCS-Speichermodell wurde prototypisch implementiert und im Vergleich zum nativen RDF-DBMS Jena TDB evaluiert. Die durchgeführten Experimenten zeigen, dass das System insbesondere für das Beantworten von Anfragen mit großen sternförmigen Teilmustern geeignet ist.
The Resource Description Framework (RDF) is the conceptual foundation for representing properties of real-world or virtual resources and describing the relationships between them. Standards based on RDF allow machines to access and process information automatically and locate additional data about resources. It also supports the discovery of relationships between concepts. The smallest information units in RDF are triples, which form a directed labeled multi-graph. The query language SPARQL is also based on a graph model, which makes it difficult for relational DBMS to store and query RDF data efficiently. The most performant DBMS for managing and querying RDF data implement an RDF-specific storage model based on a set of B+ tree indexes. The key disadvantages of these systems are the increased usage of secondary storage because of redundantly stored triples as well as the necessity of expensive join operations to compute the solutions of a SPARQL query. In this work we develop and describe the Resource Centered Store, which exploits RDF-inherent characteristics to avoid the requirement for storing triples redundantly while improving the query performance of larger queries. In the RCS storage model, triples are grouped by their first component (subject) and these star-shaped subgraphs are stored on database pages -- similar to relational DBMS. As a result the RCS can benefit from principles and algorithms that have been developed in the context of relational databases. Additionally, we defined transformation rules and heuristics to optimize SPARQL queries and generate an efficient query execution plan. In this context we also defined graph-pattern-based indexes and investigated their benefits for computing the solutions of queries. We implemented the RCS storage model prototypically and compared it to the native RDF DBMS Jena TDB. Our experiments showed that our storage model is especially suited to speed up the query performance of large star-shaped graph patterns.
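The subject-grouping idea behind the RCS storage model can be illustrated with a minimal sketch (invented data, single-valued properties only, and no claim to match the actual implementation): triples sharing a subject form one star, and a star-shaped pattern is answered by scanning stars instead of joining several triple indexes.

```python
# Minimal sketch of subject-grouped ("star-shaped") triple storage.
# Data and prefixes are invented; multi-valued properties are omitted
# for brevity (each subject keeps one object per property here).
from collections import defaultdict

triples = [
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:mbox", "<mailto:alice@example.org>"),
    ("ex:bob", "foaf:name", '"Bob"'),
]

# Group each subject's triples into one star, analogous to one database page.
stars = defaultdict(dict)
for s, p, o in triples:
    stars[s][p] = o

def match_star(required_properties):
    """Subjects whose star contains all required (property, object) pairs."""
    return [s for s, po in stars.items()
            if all(po.get(p) == o for p, o in required_properties)]

# A star-shaped pattern is answered with one scan, no triple-index joins:
print(match_star([("foaf:name", '"Alice"'), ("foaf:knows", "ex:bob")]))  # → ['ex:alice']
```

The point of the sketch is the access pattern: all constraints of a star-shaped query touch the same group, which is exactly the locality the RCS page layout is designed to exploit.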
APA, Harvard, Vancouver, ISO, and other styles
22

Abidi, Amna. "Imperfect RDF Databases : From Modelling to Querying." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0008/document.

Full text
Abstract:
L’intérêt sans cesse croissant des données RDF disponibles sur le Web a conduit à l’émergence de multiple et importants efforts de recherche pour enrichir le formalisme traditionnel des données RDF à des fins d’exploitation et d’analyse. Le travail de cette thèse s’inscrit dans la continuation de ces efforts en abordant la problématique de la gestion des données RDF en présence d’imperfections (manque de confiance/validité, incertitude, etc.). Les contributions de la thèse sont comme suit: (1) Nous avons proposé d’appliquer l’opérateur skyline sur les données RDF pondérées par des mesures de confiance (Trust-RDF) dans le but d’extraire les ressources les plus confiantes selon des critères définis par l’utilisateur. (2) Nous avons discuté via des méthodes statistiques l’impact des mesures de confiance sur le Trust-skyline.(3) Nous avons intégré à la structure des données RDF un quatrième élément, exprimant une mesure de possibilité. Pour gérer cette mesure de possibilité, un cadre langagier appropriée est étudié, à savoir Pi-SPARQL, qui étend le langage SPARQL aux requêtes permettant de traiter des distributions de possibilités. (4) Nous avons étudié une variante d’opérateur skyline pour extraire les ressources RDF possibilistes qui ne sont éventuellement dominées par aucune autre ressource dans le sens de l’optimalité de Pareto
The ever-increasing interest in RDF data on the Web has led to several important research efforts to enrich the traditional RDF data formalism for exploitation and analysis purposes. The work of this thesis continues those efforts by addressing the issue of RDF data management in the presence of imperfection (untruthfulness, uncertainty, etc.). The main contributions of this dissertation are as follows. (1) We tackled the trusted RDF data model; we proposed to extend skyline queries over trust RDF data, which consists in extracting the most interesting trusted resources according to user-defined criteria. (2) We studied via statistical methods the impact of the trust measure on the Trust-skyline set. (3) We integrated into the structure of RDF data (i.e., the subject-property-object triple) a fourth element expressing a possibility measure to reflect the user's opinion about the truth of a statement. To deal with possibility requirements, an appropriate language framework is introduced, namely Pi-SPARQL, which extends SPARQL into a possibility-aware query language. Finally, we studied a new skyline operator variant to extract possibilistic RDF resources that are possibly dominated by no other resource in the sense of Pareto optimality.
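The skyline operator central to this abstract can be sketched with a naive Pareto filter; the resources, the two criteria (trust and relevance), and the "higher is better" orientation below are invented for illustration, not taken from the thesis.

```python
# Naive skyline (Pareto) filter over trust-annotated resources.
# Resource names and criterion values are made up for this sketch.

def dominates(a, b):
    """a dominates b if a is at least as good on every criterion and
    strictly better on at least one (higher values are better here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(resources):
    """Keep exactly the resources dominated by no other resource."""
    return {name: crit for name, crit in resources.items()
            if not any(dominates(other, crit)
                       for o_name, other in resources.items() if o_name != name)}

# (trust, relevance) per RDF resource:
resources = {"r1": (0.9, 0.4), "r2": (0.7, 0.8), "r3": (0.6, 0.3)}
print(sorted(skyline(resources)))  # → ['r1', 'r2']  (r3 is dominated by r2)
```

The quadratic pairwise check is the textbook definition rather than an efficient algorithm, but it makes the Pareto-optimality criterion used throughout the thesis concrete.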
APA, Harvard, Vancouver, ISO, and other styles
23

Jordaan, Leandra. "Designing and developing a prototype indigenous knowledge database and devising a knowledge management framework." Thesis, Bloemfontein : Central University of Technology, Free State, 2009. http://hdl.handle.net/11462/121.

Full text
Abstract:
Thesis (M. Tech.) - Central University of Technology, Free State, 2009
The purpose of the study was to design and develop a prototype Indigenous Knowledge (IK) database that will be productive within a Knowledge Management (KM) framework specifically focused on IK. The need to develop a prototype IK database that can help standardise the work being done in the field of IK within South Africa has been established in the Indigenous Knowledge Systems (IKS) policy, which stated that “common standards would enable the integration of widely scattered and distributed references on IKS in a retrievable form. This would act as a bridge between indigenous and other knowledge systems” (IKS policy, 2004:33). In particular within the indigenous people’s organizations, holders of IK, whether individually or collectively, have a claim that their knowledge should not be exploited for elitist purposes without direct benefit to their empowerment and the improvement of their livelihoods. Establishing guidelines and a modus operandi (KM framework) are important, especially when working with communities. Researchers go into communities to gather their knowledge and never return to the communities with their results. The communities feel enraged and wronged. Creating an IK network can curb such behaviour or at least inform researchers/organisations that this behaviour is damaging. The importance of IK is that IK provides the basis for problem-solving strategies for local communities, especially the poor, which can help reduce poverty. IK is a key element of the “social capital” of the poor; their main asset to invest in the struggle for survival, to produce food, to provide shelter, or to achieve control of their own lives. It is closely intertwined with their livelihoods. Many aspects of KM and IK were discussed and a feasibility study for a KM framework was conducted to determine if any existing KM frameworks can work in an organisation that works with IK. 
Other factors that can influence IK are: guidelines for implementing a KM framework, information management, quality management, human factors/capital movement, leading role players in the field of IK, Intellectual Property Rights (IPR), ethics, guidelines for doing fieldwork, and a best plan for implementation. At this point, the focus changes from KM and IK to the prototype IK database and the technical design thereof. The focus is shifted to a more hands-on development by looking at the different data models and their underlying models. A well-designed database facilitates data management and becomes a valuable generator of information. A poorly designed database is likely to become a breeding ground for redundant data. The conceptual design stage used data modelling to create an abstract database structure that represents real-world objects in the most authentic way possible. The tools used to design the database are platform-independent software; therefore the design can be implemented on many different platforms. An elementary prototype graphical user interface was designed in order to illustrate the database’s three main functions: adding new members, adding new IK records, and searching the IK database. The IK database design took cognisance of what is currently prevailing in South Africa and the rest of the world with respect to IK and database development. The development of the database was done in such a way as to establish a standard database design for IK systems in South Africa. The goal was to design and develop a database that can be disseminated to researchers/organisations working in the field of IK so that the use of a template database can assist work in the field. Consequently, the work in the field will be collected in the same way and based on the same model. At a later stage, the databases could be interlinked and South Africa could have one large knowledge repository for IK.
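The three functions the prototype interface exposes (adding members, adding IK records, searching) could be sketched as follows; the schema, table names, and fields below are invented for illustration and are not the thesis's actual design.

```python
# Hypothetical minimal sketch of a template IK database with the three
# functions named in the abstract. Schema and field names are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE member (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE ik_record (
        id INTEGER PRIMARY KEY,
        member_id INTEGER REFERENCES member(id),
        title TEXT NOT NULL,
        description TEXT
    );
""")

def add_member(name):
    """Register a new member (knowledge holder or researcher)."""
    return db.execute("INSERT INTO member (name) VALUES (?)", (name,)).lastrowid

def add_record(member_id, title, description):
    """Store a new IK record contributed by a member."""
    return db.execute(
        "INSERT INTO ik_record (member_id, title, description) VALUES (?, ?, ?)",
        (member_id, title, description)).lastrowid

def search(term):
    """Search IK records by keyword in title or description."""
    like = f"%{term}%"
    return db.execute(
        "SELECT title FROM ik_record WHERE title LIKE ? OR description LIKE ?",
        (like, like)).fetchall()

m = add_member("Community researcher")
add_record(m, "Medicinal plant use", "Traditional remedies for fever")
print(search("fever"))  # → [('Medicinal plant use',)]
```

Using an embedded, file-based engine like SQLite fits the "template database" goal described above: the same schema file can be handed to different researchers/organisations so field data is collected against one common model.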
APA, Harvard, Vancouver, ISO, and other styles
24

Miles, David B. L. "A User-Centric Tabular Multi-Column Sorting Interface For Intact Transposition Of Columnar Data." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1160.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Light, Geraldine. "User-Centered Design Strategies for Clinical Brain-Computer Interface Assistive Technology Devices." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6349.

Full text
Abstract:
Although significant advances based on brain-computer interface (BCI) research have occurred in the past 50 years, there is a scarcity of BCI assistive technology devices at the consumer level. This multiple case study explored user-centered clinical BCI device design strategies used by computer scientists designing BCI assistive technologies to meet patient-centered outcomes. The population for the study encompassed computer scientists experienced with clinical BCI assistive technology design located in the midwestern, northeastern, and southern regions of the United States, as well as western Europe. The multi-motive information systems continuance model was the conceptual framework for the study. Interview data were collected from 7 computer scientists and 28 archival documents. Guided by the concepts of user-centered design and patient-centered outcomes, thematic analysis was used to identify codes and themes related to computer science and the design of BCI assistive technology devices. Notable themes included customization of clinical BCI devices, consideration of patient/caregiver interaction, collective data management, and evolving technology. Implications for social change based on the findings from this research include focus on meeting individualized patient-centered outcomes; enhancing collaboration between researchers, caregivers, and patients in BCI device development; and reducing the possibility of abandonment or disuse of clinical BCI assistive technology devices.
APA, Harvard, Vancouver, ISO, and other styles
26

Sucomine, Nivia Maria. "Caracterização e análise do patrimônio arbóreo da malha viária urbana central do município de São Carlos-SP." Universidade Federal de São Carlos, 2009. https://repositorio.ufscar.br/handle/ufscar/4275.

Full text
Abstract:
Financiadora de Estudos e Projetos
The street afforestation of the central urban area of São Carlos-SP was studied in order to characterize and diagnose it and thus provide support for its maintenance and expansion. To this end, a qualitative and quantitative assessment of the tree cover was performed, with all information and analyses entered into and developed in a computerized database. We found 103 different plant species in a total of 2626 individuals: 147 dead, 502 seedlings and 1977 adults. Schinus molle and Murraya paniculata were the most abundant species. For the diversity index and the average index of trees per kilometer of street, the values 3.18 and 26.73 were obtained, respectively. The average diameter at breast height (DBH) was 21.4 cm; 55% of the plants had a height of less than 5.40 m; 45% of the population is composed of native species and only 26 species are fructiferous. The number of plants affected by injury or phytosanitary problems was small, but the damage caused by irregular crown pruning was more significant. 899 individuals were in conflict with one or more pieces of surrounding infrastructure, overhead wiring being the main one, and 76% had little or no pavement-free space. From the results obtained it was concluded that this area has a high floristic diversity. In quantitative terms, the population is still very deficient, but the overall condition of the trees was satisfactory. An intensive tree-planting program (using less abundant species) is recommended for the streets with the fewest individuals, together with a program for the maintenance and conservation of the existing stock, including pruning and the expansion of pavement-free spaces on the sidewalks.
A arborização viária da área central urbana do município de São Carlos-SP foi estudada com o objetivo de conhecê-la e diagnosticá-la a fim de oferecer subsídios para a sua manutenção e incremento. Para tal, foi realizado um levantamento quali-quantitativo da cobertura arbórea, sendo todas suas informações e análises inseridas e desenvolvidas em um banco de dados informatizado. Foram encontradas 103 espécies botânicas distintas num total de 2626 indivíduos, sendo 147 mortos, 502 mudas e 1977 adultos. Schinus molle e Murraya paniculata foram às espécies mais abundantes. Para os índices de diversidade e de indivíduos por quilometragem de rua, obteve-se os valores 3,18 e 26,73 respectivamente. O diâmetro a altura do peito (DAP) médio foi de 21,4cm; 55% dos vegetais apresentaram altura menor que 5,40m; 45% da população são nativas e apenas 26 espécies são frutíferas. Pequeno foi o número de plantas acometidas de injúria ou fitossanidade, porém o mais significativo foram os danos nas copas decorrentes de podas irregulares. Dentre todos os indivíduos, 899 estão em conflito com um ou mais equipamento em entorno, sendo a fiação aérea a principal, e 76% possuíam pouco ou nenhum espaço livre de pavimentação. Por meio dos resultados obtidos, concluiu-se que a composição florística dessa área é bem diversa. Em termos quantitativos, essa população é ainda muito deficiente, mas a situação geral das árvores é bem satisfatória. Recomenda-se um programa intensivo de incentivo de plantio de mudas (com espécies menos abundantes) nas ruas com menor índice de indivíduos e outro programa com vista à manutenção e conservação da arborização já implantada, podas e ampliação dos espaços livre de pavimentação nas calçadas.
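The abstract reports a diversity index of 3.18 without naming it. Assuming the Shannon-Wiener index H', the usual choice in urban-forestry inventories (an assumption, not stated in the thesis), it would be computed as below; the species counts are invented for illustration.

```python
# Shannon-Wiener diversity index H' = -sum(p_i * ln p_i), where p_i is the
# relative abundance of species i. The counts below are made up; the survey's
# actual 103-species abundance table would be used in practice.
import math

def shannon_index(counts):
    """Compute H' from a list of per-species individual counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

counts = [120, 95, 60, 40, 25, 10]  # individuals per species (illustrative)
print(f"H' = {shannon_index(counts):.2f}")
```

Higher H' means the individuals are spread across more species more evenly; a value like 3.18 over 103 species indicates that abundance is not concentrated in just a few dominant species.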
APA, Harvard, Vancouver, ISO, and other styles
27

Meiring, Linda. "A distribution model for the assessment of database systems knowledge and skills among second-year university students." Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2009. http://hdl.handle.net/11462/44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Akroun, Lakhdar. "Decidability and complexity of simulation preorder for data-centric Web services." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22523/document.

Full text
Abstract:
Dans cette thèse nous nous intéressons au problème d’analyse des spécifications des protocoles d’interactions des services Web orientés données. La spécification de ce type de protocoles inclut les données en plus de la signature des opérations et des contraintes d’ordonnancement des messages. L’analyse des services orientés données est complexe car l’exécution d’un service engendre une infinité d’états. Notre travail se concentre autour du problème d’existence d’une relation de simulation quand les spécifications des protocoles des services Web sont représentés en utilisant un système à transition orienté données. D’abord nous avons étudié le modèle Colombo [BCG+05]. Dans ce modèle, un service (i) échange des messages en utilisant des variables ; (ii) modifie une base de donnée partagée ; (iii) son comportement est modélisé avec un système à transition. Nous montrons que tester l’existence de la relation de simulation entre deux services Colombo non bornée est indécidable. Puis, nous considérons le cas où les services sont bornés. Nous montrons pour ce cas que le test de simulation est (i) exptime-complet pour les services Colombo qui n’accèdent pas à la base de donnée (noté ColomboDB=∅), et (ii) 2exptime-complet quand le service peut accéder à une base de donnée bornée (Colombobound). Dans la seconde partie de cette thèse, nous avons défini un modèle générique pour étudier l’impact de différents paramètres sur le test de simulation dans le contexte des services Web orientés données. Le modèle générique est un système à transition gardé qui peut lire et écrire à partir d’une base de donnée et échanger des messages avec son environnement (d’autres services ou un client). Dans le modèle générique toutes les actions sont des requêtes sur des bases de données (modification de la base de données, messages échangés et aussi les gardes). 
Dans ce contexte, nous avons obtenu les résultats suivant : (i) pour les services gardés sans mise à jour, le test de simulation est caractérisé par rapport à la décidabilité du test de satisfiabilité du langage utilisé pour exprimer les gardes augmenté avec une forme restrictive de négation, (ii) pour les services sans mise à jour mais qui peuvent envoyer comme message le résultat d’une requête, nous avons trouvé des conditions suffisantes d’indécidabilité et de décidabilité par rapport au langage utilisé pour exprimer l’échange de messages, et (iii) nous avons étudié le cas des services qui ne peuvent que insérer des tuples dans la base de donnée. Pour ce cas, nous avons étudié la simulation ainsi que la weak simulation et nous avons montré que : (a) la weak simulation est indécidable quand les requêtes d’insertion sont des requêtes conjonctives, (b) le test de simulation est indécidable si la satisfiabilité du langage de requête utilisé pour exprimer les insertions augmenté avec une certaine forme de négation est indécidable. Enfin, nous avons étudié l’interaction entre le langage utilisé pour exprimer les gardes et celui utilisé pour les insertions, nous exhibons une classe de service où la satisfiabilité des deux langages est décidable alors que le test de simulation entre les services qui leur sont associés ne l’est pas
In this thesis we address the problem of analyzing specifications of data-centric Web service interaction protocols (also called data-centric business protocols). Specifications of such protocols include data in addition to operation signatures and message-ordering constraints. Analysis of data-centric services is a complex task because of the inherently infinite states of the underlying service execution instances. Our work focuses on characterizing the problem of checking a refinement relation between service interaction protocol specifications. More specifically, we consider the problem of checking the simulation preorder when service business protocols are represented using data-centric state machines. First, we study the Colombo model [BCG+05]. In this framework, a service (i) exchanges messages using variables; (ii) acts on a shared database; (iii) has a transition-based behavior. We show that the simulation test for unbounded Colombo is undecidable. Then, we consider the case of bounded Colombo, where we show that simulation is (i) exptime-complete for Colombo services without any access to the database (noted ColomboDB=∅), and (ii) 2exptime-complete when only bounded databases are considered (the obtained model is noted Colombobound). In the second part of this thesis, we define a generic model to study the impact of various parameters on the simulation test in the context of data-centric services. The generic model is a guarded transition system acting (i.e., reading and writing) on databases (i.e., local and shared) and exchanging messages with its environment (i.e., other services or users). The model was designed from a database theory perspective, where all actions are viewed as queries (i.e., modification of databases, message exchanges and guards).
In this context, we obtain the following results: (i) for update-free guarded services (i.e., generic services with guards and only able to send empty messages), the decidability of simulation is fully characterized w.r.t. the decidability of satisfiability of the query language used to express the guards augmented with a restrictive form of negation; (ii) for update-free send services (i.e., generic services without guards and able to send as messages the result of queries over the local and shared database), we exhibit sufficient conditions for both decidability and undecidability of the simulation test w.r.t. the language used to compute message payloads; and (iii) we study the case of insert services (i.e., generic services without guards and with the ability to insert the result of queries into the local and the shared database). In this case, we study the simulation as well as the weak simulation relation, where we show that: (a) the weak simulation is undecidable when the insertions are expressed as conjunctive queries, and (b) the simulation is undecidable if satisfiability of the query language used to express the insertions, augmented with a restricted form of negation, is undecidable. Finally, we study the interaction between the queries used as guards and the ones used as inserts, and we exhibit a class of services where satisfiability of both languages is decidable while simulation is undecidable.
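The simulation preorder analyzed in this thesis concerns infinite-state, data-centric systems, but the underlying notion can be sketched on finite labeled transition systems with the standard greatest-fixpoint refinement; both example LTSs below are invented.

```python
# Standard simulation-preorder check on two finite labeled transition
# systems, computed as a greatest fixpoint: start from all pairs and
# remove pairs (q, p) where q cannot match a labeled move of p.

def simulates(lts_a, lts_b, states_a, states_b):
    """Largest relation of pairs (q, p) with q in B, p in A, such that
    q simulates p: every labeled move of p is matched by some move of q
    whose target pair stays in the relation."""
    rel = {(q, p) for q in states_b for p in states_a}
    changed = True
    while changed:
        changed = False
        for (q, p) in list(rel):
            for (label, p2) in lts_a.get(p, []):
                # q must have a `label`-move into a pair still in the relation
                if not any(label == l2 and (q2, p2) in rel
                           for (l2, q2) in lts_b.get(q, [])):
                    rel.discard((q, p))
                    changed = True
                    break
    return rel

# A: request then reply; B: request then (reply or error)
A = {"a0": [("req", "a1")], "a1": [("rep", "a2")]}
B = {"b0": [("req", "b1")], "b1": [("rep", "b2"), ("err", "b3")]}
rel = simulates(A, B, {"a0", "a1", "a2"}, {"b0", "b1", "b2", "b3"})
print(("b0", "a0") in rel)  # → True: B simulates A from the initial states
```

The preorder is asymmetric here: B simulates A, but A does not simulate B, because A has no move matching B's "err" branch. The thesis's difficulty is precisely that this fixpoint computation does not terminate when the state space, driven by database contents, is infinite.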
APA, Harvard, Vancouver, ISO, and other styles
29

Minami, Masayo, Tsuyoshi Tanaka, Koshi Yamamoto, Koichi Mimura, Yoshihiro Asahara, Makoto Takeuchi, Hidekazu Yoshida, and Setsuo Yogo. "Database for geochemical mapping of the northeastern areas of Aichi Prefecture, central Japan - XRF major element data of stream sediments collected in 1994 to 2004 -." Dept. of Earth and Planetary Sciences, Nagoya University, 2005. http://hdl.handle.net/2237/7650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Fiala, Petr. "Propojení ekonomického informačního systému a nemocničních informačních systémů." Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-10834.

Full text
Abstract:
A proposal for an integrated information system architecture for a health care facility. In particular, it concerns the integration of an economic information system for the financial and personnel management of hospitals with the hospital information systems: a laboratory information system, a documentation system, an ordering system, a system for patient care, etc. The aim of the project is to design the structure of a central database and the links between the systems, so that they are able to collaborate in real time.
APA, Harvard, Vancouver, ISO, and other styles
31

Andersen, Emelie. "Aeshna viridis distribution and habitat choices in South and Central Sweden and the possibility to use a database as a tool in monitoring a threatened species." Thesis, Högskolan i Halmstad, Ekologi och miljövetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-25561.

Full text
Abstract:
Aeshna viridis, a dragonfly generally considered to be a specialist as it in most cases chooses Stratiotes aloides as its habitat, has suffered badly from habitat loss and fragmentation throughout Europe over the last century as the human demand for land use has grown. It is thereby considered near threatened on the EU red list and is included in the Habitat Directive. This means that it is protected by EU law, as all EU Member States are committed to protect, monitor and report back to the EU on the status of the species. Several European countries have designed protection plans for S. aloides to improve the preservation of A. viridis. My study in South and Central Sweden shows that the strong connection between A. viridis and S. aloides may not be consistent over the whole distribution range of A. viridis, as my survey showed that larvae occur among other water plants when S. aloides is not present. Another aim of this study was to evaluate the possibility of using occurrence data on A. viridis and S. aloides from the Species Observations System to monitor A. viridis distribution and dispersal. My study implies uncertainties about how well the datasets reflect reality, and more research is necessary before clarifying whether such datasets could be a possible tool in the conservation management of A. viridis.
APA, Harvard, Vancouver, ISO, and other styles
32

Day, Peter Francis. "The development and assessment of a computer database for the recording of dento-alveolar trauma in children and a multi-centre randomised controlled trial of tooth avulsion and replantation." Thesis, University of Leeds, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Chaomei. "Traditional Chinese medical clinic system." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2517.

Full text
Abstract:
The Chinese Medical Clinic System is designed to help acupuncturists and their assistants record and store information. The system can maintain and schedule appointments and display patient diagnoses effectively. It will be implemented on a desktop PC connected to the Internet to facilitate the acupuncturists' recording of information.
APA, Harvard, Vancouver, ISO, and other styles
34

Pastor, Jonathan. "Contributions à la mise en place d'une infrastructure de Cloud Computing à large échelle." Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0240/document.

Full text
Abstract:
La croissance continue des besoins en puissance de calcul a conduit au triomphe du modèle de Cloud Computing. Des clients demandeurs en puissance de calcul vont s’approvisionner auprès de fournisseurs d’infrastructures de Cloud Computing, mises à disposition via Internet. Pour réaliser des économies d’échelles, ces infrastructures sont toujours plus grandes et concentrées en quelques endroits, conduisant à des problèmes tels que l’approvisionnement en énergie, la tolérance aux pannes et l’éloignement des utilisateurs. Cette thèse s’est intéressée à la mise en place d’un système d’IaaS massivement distribué et décentralisé exploitant un réseau de micros centres de données déployés sur la dorsale Internet, utilisant une version d’OpenStack revisitée pendant cette thèse autour du support non intrusif de bases de données non relationnelles. Des expériences sur Grid’5000 ont montré des résultats intéressants sur le plan des performances, toutefois limités par le fait qu’OpenStack ne tirait pas avantage nativement d’un fonctionnement géographiquement réparti. Nous avons étudié la prise en compte de la localité réseau pour améliorer les performances des services distribués en favorisant les collaborations proches. Un prototype de l’algorithme de placement de machines virtuelles DVMS, fonctionnant sur une topologie non structurée basée sur l’algorithme Vivaldi, a été validé sur Grid’5000. Ce prototype a fait l’objet d’un prix scientifique lors de l’école de printemps Grid’5000 2014. Enfin, ces travaux nous ont amenés à participer au développement du simulateur VMPlaceS.
The continuous increase in computing power needs has favored the triumph of the Cloud Computing model. Customers asking for computing power are supplied via Internet resources hosted by providers of Cloud Computing infrastructures. To achieve economies of scale, these infrastructures are increasingly large and concentrated in a few attractive places, leading to problems such as energy supply, fault tolerance, and the fact that they are far from most of their end users. During this thesis, we studied the implementation of a fully distributed and decentralized IaaS system operating a network of micro data centers deployed in the Internet backbone, using a version of OpenStack modified to leverage non-relational databases. A prototype was experimentally validated on Grid'5000, showing interesting results, however limited by the fact that OpenStack does not natively take advantage of geographically distributed operation. We therefore focused on adding support for network locality to improve the performance of Cloud Computing services by favoring collaborations between close nodes. A prototype of the DVMS virtual machine placement algorithm, working on an unstructured topology based on the Vivaldi algorithm, was validated on Grid'5000. This prototype won first prize at the large-scale challenge of the Grid'5000 spring school in 2014. Finally, this work led us to participate in the development of the VMPlaceS simulator.
APA, Harvard, Vancouver, ISO, and other styles
35

Simmons, Anita Joyce. "Health Portal Functionality and the Use of Patient-Centered Technology." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3071.

Full text
Abstract:
Health portals are dedicated web pages through which medical practices provide patients access to their electronic health records. The problem identified in this quality improvement project was that the health portal in the urgent care setting had been available to neither staff nor patients. To provide leadership with information related to opening the portal, the first purpose of the project was to assess staff and patients' perceived usefulness, perceived ease of use, attitude toward using, and intention to use the portal. The second purpose was to evaluate the portal education materials for the top 5 urgent care diagnoses (diabetes, hypertension, asthma, otitis media, and bronchitis) for understandability and actionability using the Patient Education Materials Assessment Tool, the Simple Measure of Gobbledygook, and the UpToDate application. The first purpose was framed within the technology acceptance model, which used a 26-item Likert scale ranging from -3 (total disagreement) to +3 (total agreement). The staff (n = 8) and patients (n = 75) perceived the portal as useful (62%; 60%) and easy to use (72%; 70%), expressed a positive attitude toward using it (71%; 73%), and would use the technology (54%; 70%). All materials were deemed understandable (74%-95%), with 70% being the acceptable threshold. The diabetes, otitis media, and bronchitis materials were deemed actionable (71%-100%), but hypertension (57%) and asthma (40%) had lower actionability percentages. Hypertension, asthma, and otitis media had appropriate reading levels (6th-8th grade); however, diabetes (10th grade) and bronchitis (12th grade) were higher, the target being below the 8th-grade level. All handouts were found to be evidence-based. The recommendation was to revise the diabetes and bronchitis educational handouts to improve readability. This project can promote social change by facilitating positive patient outcomes at urgent care clinics.
APA, Harvard, Vancouver, ISO, and other styles
36

Khashoqji, Moayad. "Structural characterisation of novel poly-aryl compounds." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/structural-characterisation-of-novel-polyaryl-compounds(3fb1fac6-548a-4afc-8ac2-5a14885b0ba4).html.

Full text
Abstract:
Poly-aryl compounds, also known as polyphenylene compounds, are a class of dendrimer containing a large number of aromatic rings. They are of interest because they display restricted rotation of their sterically congested aromatic rings. These extended structures have the potential to act as precursors for even larger aromatic systems and have many applications, including electronic devices, drug delivery and catalysis. A total of 23 novel poly-aryl compounds have been examined using single-crystal X-ray diffraction, and a number of structural patterns have emerged. Six of the compounds contain alkynes, and it has been observed that their conformation is governed by a combination of conjugation between the alkyne and aryl groups and inter-molecular interactions. In the more extended poly-aryl compounds, steric congestion rules out any possibility of conjugation between the rings, and their conformation is governed by intra-molecular non-bonded interactions in the core of the molecules and by inter-molecular interactions at their periphery. Where possible, solution NMR measurements were carried out on the poly-aryl compounds and confirmed that the solution structures are in agreement with those obtained from individual crystals.
APA, Harvard, Vancouver, ISO, and other styles
37

Bentayeb, Fadila. "Entrepôts et analyse en ligne de données complexes centrés utilisateur : un nouveau défi." Habilitation à diriger des recherches, Université Lumière - Lyon II, 2011. http://tel.archives-ouvertes.fr/tel-00752126.

Full text
Abstract:
Data warehouses respond to a real need for access to summarized information. However, by following the classical data warehousing and online analytical processing (OLAP) workflow, decision-support systems (DSS) make very little use of the informational content of the data. Even though DSS are supposed to be user-centered, classical OLAP offers no tools to guide the user toward the most interesting facts of the cube. Taking the user into account in DSS is a new research issue, known as personalization, which raises several challenges that have been little or not at all studied. The work presented in this habilitation thesis proposes innovative solutions in the field of personalization in complex data warehouses. The originality of our research consists in showing that it is relevant to integrate semantics into the entire warehousing process, either by inviting users to express their own domain knowledge or by using data mining methods to extract hidden knowledge. Building on the intuition that knowledge about the business domain, the warehoused data and their usage (queries) can help users explore and navigate the data, we first proposed a personalization approach based on users' explicit knowledge. Borrowing the concept of schema evolution, we relaxed the fixed-schema constraint of the warehouse to allow a hierarchy level to be added to or removed from a dimension. This work was then extended to recommend new dimension hierarchies to the user, based on the discovery of new natural structures using the principles of a clustering method (K-means).
We also developed online mining relying solely on the tools offered by database management systems (DBMS). Online mining extends the analytical capabilities of the DBMS underlying the data warehouse from OLAP toward structuring, explanatory and predictive analysis, and supports personalization. In order to take into account both the evolution of the data and that of user needs, while guaranteeing the structural and semantic integration of the data, we proposed an on-demand online analysis approach based on an ontology-based mediation system. Furthermore, we proposed a multidimensional model of complex objects based on the object paradigm, which represents real-world objects more naturally and captures the semantics they convey. A cubic projection operator is then proposed to allow users to create personalized complex-object cubes. All our solutions were developed and tested in the context of relational and/or XML data warehouses.
APA, Harvard, Vancouver, ISO, and other styles
38

Simões, Marco Paulo Ventura. "Do Cáspio para o mundo : as trocas comerciais de bens do Cáucaso e Ásia Central." Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/13428.

Full text
Abstract:
Master's degree in International Economics and European Studies
As transações comerciais são uma parte das relações sociais entre os homens. Ao longo dos tempos, os países organizaram-se social, e economicamente e procuram diferentes parceiros comerciais com vista ao aumento da sua riqueza e do seu bem estar, nomeadamente através das trocas de produtos. Se no século XX, o espaço europeu era a maior zona económica, neste século, outras zonas geográficas ganharam protagonismo nas trocas comerciais de bens, nomeadamente a China e o Sudeste Asiático e mais recentemente o Cáucaso e a Ásia Central. O objetivo desta dissertação é analisar as trocas comerciais de bens das antigas repúblicas soviéticas (definidas pelo FMI como Cáucaso e Ásia Central) com o resto do mundo, a Rússia, a UE-28 e Portugal, tendo por base os dados obtidos da Comtrade Database da ONU no período 2004-2013, assim como demais relatórios realizados por instituições públicas e privadas.
Business transactions are a part of the social relations between men. Over time, countries have organized themselves socially and economically and have sought different trading partners in order to increase their wealth, mainly through the exchange of products. If in the twentieth century the European area was the largest economic zone, in this century other geographical areas have gained prominence in the trade in goods, notably China and Southeast Asia and, more recently, the Caucasus and Central Asia. The aim of this work is to analyze the trade in goods of the former Soviet republics (defined by the IMF as the Caucasus and Central Asia) with the world, Russia, the EU-28 and Portugal, based on data obtained from the UN Comtrade Database for the period 2004-2013 and on other reports from public and private institutions.
APA, Harvard, Vancouver, ISO, and other styles
39

Everett, Inez Celeste. "Web accessibility: Ensuring access to online course instruction for students with disabilities." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2367.

Full text
Abstract:
The number of instructors introducing web-based elements into the course curriculum is growing, and students need to be able to access content on the web to participate. As such, a campus website with accessibility design standards for course developers at California State University showed potential to greatly assist in equalizing the educational playing field for students with disabilities.
APA, Harvard, Vancouver, ISO, and other styles
40

Revol, Bruno. "Pharmacoépidémiologie des apnées du sommeil Impact of concomitant medications on obstructive sleep apnoea Drugs and obstructive sleep apnoeas Diagnosis and management of central sleep apnea syndrome Baclofen and sleep apnoea syndrome: analysis of VigiBase® the WHO pharmacovigilance database Gabapentinoids and sleep apnea syndrome: a safety signal from the WHO pharmacovigilance database Valproic acid and sleep apnea: a disproportionality signal from the WHO pharmacovigilance database Ticagrelor and Central Sleep Apnea What is the best treatment strategy for obstructive sleep apnoea-related hypertension? Who may benefit from diuretics in OSA? A propensity score-matched observational study." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALV026.

Full text
Abstract:
Avant leur mise sur le marché, l'évaluation clinique des médicaments repose sur des essais contrôlés randomisés. Bien qu'ils représentent la méthode de référence, leurs résultats sont nécessairement limités aux patients inclus dans ces essais. De plus, ils sont d’abord conçus pour mesurer l'efficacité des traitements, avant d’évaluer leurs effets indésirables. Concernant le syndrome d'apnées du sommeil (SAS), alors que de nombreux essais médicamenteux ont été menés, la plupart des résultats sont de faible niveau de preuve, voire contradictoires. Outre la durée et les effectifs limités de ces essais, une explication est que le SAS est une pathologie hétérogène en termes de symptômes et de physiopathologie, incluant divers "phénotypes" de patients. Des données de vie réelle sont donc nécessaires pour définir quels médicaments pourraient améliorer le SAS ou les comorbidités associées et quels patients pourraient en bénéficier. Au contraire, les cliniciens doivent être avertis que certains médicaments peuvent induire ou aggraver le SAS.La pharmacoépidémiologie fait désormais partie de toute enquête de pharmacovigilance, car elle permet une approche à la fois descriptive et comparative des notifications spontanées. Des associations entre l'exposition à un ou plusieurs médicaments et l'apparition d'effets indésirables peuvent ainsi être recherchées. Comme pour toutes les études observationnelles, la principale difficulté consiste à contrôler les facteurs de confusion. L'un des modèles couramment utilisés est l'analyse cas/non-cas, qui étudie la disproportionnalité entre le nombre d’effets indésirables rapportés avec le médicament d’intérêt, par rapport aux effets notifiés pour les autres médicaments. 
Nous avons ainsi montré des associations significatives entre l'utilisation de baclofène, des gabapentinoïdes ou de l'acide valproïque et la survenue de SAS dans la base de pharmacovigilance de l'OMS, suggérant le rôle du système GABAergique dans la pathogenèse des apnées centrales d’origine médicamenteuse. Un signal de disproportionnalité a également été observé pour le ticagrélor, reposant sur un mécanisme d'action différent.Les analyses pharmacoépidémiologiques permettent également d'étudier le bénéfice des médicaments en vie réelle. Le score de propension est utilisé pour minimiser les biais de sélection et recréer des conditions de comparabilité proches de celles des essais randomisés. À l'aide de ces méthodes statistiques, nous avons évalué l'intérêt potentiel de cibler le système rénine-angiotensine pour la prise en charge de l'hypertension artérielle chez les patients atteints d’apnées obstructives, en particulier avec l’utilisation des sartans. Chez ces mêmes patients apnéiques et hypertendus, nos travaux suggèrent que les diurétiques pourraient diminuer la sévérité des apnées, notamment en cas de surpoids ou d’obésité modérée. Des études prospectives sont désormais nécessaires afin de confirmer ces résultats, car les données de vie réelle ne peuvent se substituer aux essais cliniques contrôlés
The clinical evaluation of drugs before approval is based on randomized controlled trials. Although these are considered the gold standard for testing drugs, their results are necessarily limited to the patients included in the trials. Moreover, almost all clinical trials are primarily designed to assess the efficacy of a treatment, so safety is only a secondary concern. Regarding sleep apnea syndrome (SAS), while many drug trials have been conducted, most of the results are weak or even contradictory. In addition to limited trial duration and population size, one explanation is that the sleep apnea population is highly heterogeneous with respect to symptoms and to physiological traits linked to disease pathogenesis, giving various patient "phenotypes". Real-life data are therefore needed to define which drugs could improve SAS or its associated comorbidities and who might benefit from them. Conversely, clinicians need to be aware that some drugs may induce or worsen sleep apnea. Pharmacoepidemiology is now part of any pharmacovigilance survey, as it provides both descriptive and comparative approaches to spontaneous reports. Associations between the exposure to one or more drugs and the occurrence of adverse effects can thus be sought. As for all observational studies, the major difficulty is to control for confounding factors. One of the study designs commonly used is the case/non-case analysis, which investigates the disproportionality between the number of adverse drug reactions reported with the drug of interest and the number reported with all other drugs. In this way, we showed significant associations between the use of baclofen, gabapentinoids or valproic acid and the reporting of SAS in the WHO drug adverse event database, suggesting a role of the GABAergic system in the pathogenesis of drug-induced central sleep apnea.
A disproportionality signal was also found for ticagrelor, based on a different mechanism of action. Pharmacoepidemiological analyses also make it possible to study the benefit of drugs in real life. Propensity scores are used to minimize selection bias, leading to a comparability between the exposure groups close to that observed in randomized trials. Using these statistical methods, we investigated the potential value of targeting the renin-angiotensin system for the management of hypertension in obstructive sleep apnea (OSA) patients, especially through the use of sartans. For hypertensive apneic patients, our work suggests that diuretics could decrease the severity of OSA, particularly in the overweight or moderately obese. Prospective studies are now needed to confirm these findings, because real-life data cannot be a substitute for controlled clinical trials.
APA, Harvard, Vancouver, ISO, and other styles
41

Kissinger, Thomas, Benjamin Schlegel, Dirk Habich, and Wolfgang Lehner. "QPPT: Query Processing on Prefix Trees." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-113269.

Full text
Abstract:
Modern database systems have to process huge amounts of data and, at the same time, should provide results with low latency. To achieve this, data is nowadays typically held completely in main memory, to benefit from its high bandwidth and low access latency, which could never be reached with disks. Current in-memory databases are usually column stores that exchange columns or vectors between operators and suffer from a high tuple-reconstruction overhead. In this paper, we present the indexed table-at-a-time processing model, which makes indexes the first-class citizens of the database system. The processing model comprises the concepts of intermediate indexed tables and cooperative operators, which make indexes the common data exchange format between plan operators. To keep the intermediate index materialization costs low, we employ optimized prefix trees that offer a balanced read/write performance. The indexed table-at-a-time processing model allows the efficient construction of composed operators like the multi-way select-join-group. Such operators speed up the processing of complex OLAP queries, so that our approach outperforms state-of-the-art in-memory databases.
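As a rough illustration of the prefix-tree indexes this abstract builds on, the following minimal trie over byte-decomposed integer keys supports inserts and lookups with bucketed duplicates. The node layout and key encoding here are assumptions for illustration, not the paper's actual optimized data structure.

```python
# Illustrative sketch (assumed layout, not the paper's design): a prefix tree
# over 32-bit integer keys, split into bytes, with duplicate values kept in
# per-key buckets. Nested dicts stand in for the paper's optimized nodes.

class PrefixTree:
    def __init__(self):
        self.root = {}

    def insert(self, key: int, value) -> None:
        # Descend byte by byte, creating child nodes as needed.
        node = self.root
        for byte in key.to_bytes(4, 'big'):
            node = node.setdefault(byte, {})
        # Keep duplicates in a bucket so the tree can act as a secondary index.
        node['value'] = node.get('value', []) + [value]

    def lookup(self, key: int):
        # Follow the byte path; any missing child means the key is absent.
        node = self.root
        for byte in key.to_bytes(4, 'big'):
            node = node.get(byte)
            if node is None:
                return []
        return node.get('value', [])
```

Because lookups and inserts both cost a fixed number of node hops (one per key byte), such a structure offers the balanced read/write performance the abstract mentions, making it plausible as an intermediate result format between plan operators.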
APA, Harvard, Vancouver, ISO, and other styles
42

Cateriano-Alberdi, Maria Paula, Cecilia D. Palacios-Revilla, and Eddy R. Segura. "Survey of Diagnostic Criteria for Fetal Distress in Latin American and African Countries: Over Diagnosis or Under Diagnosis?" Glorigin LifeSciences, 2017. http://hdl.handle.net/10757/622212.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Bothma, Bernardus Christian. "Performance and reliability optimisation of a data acquisition and logging system in an integrated component-handling environment." Thesis, Bloemfontein : Central University of Technology, Free State, 2011. http://hdl.handle.net/11462/14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Aguboshim, Felix Chukwuma. "User Interface Challenges of Banking ATM Systems in Nigeria." ScholarWorks, 2018. https://scholarworks.waldenu.edu/dissertations/4956.

Full text
Abstract:
The use of banking automated teller machine (ATM) technological innovations has significant importance and benefits in Nigeria, but numerous investigations have shown that illiterate and semiliterate Nigerians do not perceive them as useful or easy to use. Developing easy-to-use banking ATM system interfaces is essential to accommodate the over 40% of Nigerians who are illiterate or semiliterate and who are potential users of banking ATM systems. The purpose of this study was to identify strategies that software developers of banking ATM systems in Nigeria use to create easy-to-use banking ATM system interfaces for a variety of people with varying abilities and literacy levels. The technology acceptance model was adopted as the conceptual framework. The study's population consisted of qualified and experienced developers of banking ATM system interfaces chosen from 1 organization in Enugu, Nigeria. The data collection process included semistructured, in-depth, face-to-face interviews with 9 banking ATM system interface developers and the analysis of 11 documents: 5 from participant case organizations and 6 from nonparticipant case organizations. Member checking was used to increase the validity of the findings. Through methodological triangulation, 4 major themes emerged from the study: the importance of user-centered design strategies, the importance of user feedback in interface design, the value of pictorial images and voice prompts, and the importance of a well-defined interface development process. The findings of this study may benefit the future development of strategies to create easy-to-use ATM system interfaces for a variety of people with varying abilities and literacy levels, and for other information technology systems that depend on user interface technology.
APA, Harvard, Vancouver, ISO, and other styles
45

Esker, Donald Anton. "An Analysis of the Morrison Formation’s Terrestrial Faunal Diversity Across Disparate Environments of Deposition, Including the Aaron Scott Site Dinosaur Quarry in Central Utah." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1233009882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Salazar, Angulo Rafael, and Lozano Luis Anthony Laguna. "Diseño de un modelo de transformación digital de los procesos centrales que permita elevar la productividad de una empresa de logística ligera de Lima, Perú." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2021. http://hdl.handle.net/10757/656229.

Full text
Abstract:
La demanda de servicios logísticos en el Perú se encuentra en constante crecimiento, en junio del 2019 existían 753 empresas formales, de las cuales el 99.5% están clasificadas como pymes. Uno de los principales problemas de este tipo de empresas es que el 90% opera en promedio 10 meses y el resto logra sobrepasar dicho tiempo de operación. Asimismo, las metodologías y modelos existentes sobre transformación digital están orientados a organizaciones consolidas y con un grado de desarrollo cultural importante. Además, el tiempo de implementación tarda entre 3 y 4 años, lo cual es poco viable para la realidad de las pymes peruanas. Por ello, la presente investigación plantea diseñar un modelo de transformación digital que impulse a las empresas del sector logístico. Conforme a los estudios realizados, la transformación digital potencia la productividad de las empresas peruanas, ya que el 60% de las pymes que se digitalizan logran facturar el doble de las que no lo hacen. El principal propósito del presente modelo es incrementar la productividad de las organizaciones para poder extender su ciclo de vida, lo cual reducirá la tasa de desempleo en el país y a su vez, contribuirá a reducir la pobreza del mismo, y dicha situación conllevará a un incremento en el PBI. Para el desarrollo del modelo se emplearán herramientas de transformación digital, ya que la mayoría de las empresas están transformando sus modelos de negocio a través de dichas tecnologías, las cuales son factibles de implementar debido a los grandes avances tecnológicos.
The demand for logistics services in Peru is constantly growing; in June 2019 there were 753 formal companies, of which 99.5% were classified as SMEs. One of the main problems of this type of company is that 90% operate for an average of only 10 months, while the rest manage to exceed that operating time. Likewise, the existing methodologies and models for digital transformation are aimed at consolidated organizations with a significant degree of cultural development. In addition, implementation takes between 3 and 4 years, which is not very viable given the reality of Peruvian SMEs. For this reason, this research proposes designing a digital transformation model that boosts companies in the logistics sector. According to the studies carried out, digital transformation increases the productivity of Peruvian companies, since 60% of the SMEs that digitize manage to bill twice as much as those that do not. The main purpose of this model is to increase the productivity of organizations in order to extend their life cycle, which will reduce the unemployment rate in the country and, in turn, contribute to reducing poverty, leading to an increase in GDP. For the development of the model, digital transformation tools will be used, since most companies are transforming their business models through these technologies, which are feasible to implement thanks to great technological advances.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
47

Maccagnani, Anna. "Studio applicato sull'evoluzione dell'ERP da locale a cloud: il caso Microsoft Dynamics 365 Business Central." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
A comparison between the different ERP implementation paradigms, on-premise and cloud-based, with reference to Microsoft Dynamics, together with the development of an application in Dynamics NAV and an extension in Business Central. The two different programming approaches of the two software solutions are outlined in order to create two applications representing the two development paradigms that distinguish on-premise solutions from cloud-based ones, highlighting their respective advantages and evolutions.
APA, Harvard, Vancouver, ISO, and other styles
48

Casallas-Gutiérrez, Rubby. "Objets historiques et annotations pour les environnements logiciels." Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004982.

Full text
Abstract:
In a process-centered software engineering environment (EGPFL), information management is a complex problem that must reconcile two needs: managing the software product and managing the software processes. Beyond the large number of diverse and highly interdependent entities, product management must take into account the evolving nature of software and its variation factors, as well as the cooperative nature of software development activities. Process management covers the modeling, execution, evaluation and supervision of processes. Various kinds of information must then be taken into account: the execution trace of the processes, the events that occurred in the environment, and quality measures. We propose annotated historical objects to manage the information of an EGPFL. The historical object is the basic notion of a historical object model that represents both software entities and their evolution. The notion of annotation enriches this model by introducing information denoting facts (notes, measures, observations, etc.) that can be punctually associated with the entities of the EGPFL. A query language is defined to access the different kinds of information. Through this language, the EGPFL gains a powerful service for gathering, from the object base, the various pieces of information needed to evaluate and control the software processes. We also propose to exploit the possibilities offered by our model to define events and, where needed, keep a history of them. Events make it possible to identify situations linking information from both the current state and past states of the EGPFL; this is why the definition of an event may include conditions expressed in the query language.
The use of annotations makes it possible to record event occurrences as well as part of the system state. An implementation of the model is proposed in the Adèle system.
APA, Harvard, Vancouver, ISO, and other styles
49

Delettre, Christian. "Plateforme ouverte, évolutive, sécurisée et orientée utilisateur pour l'e-commerce." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4111/document.

Full text
Abstract:
Nowadays, e-commerce has become a complex ecosystem in which multiple platform solutions are possible and feasible for an e-merchant. In parallel, a new paradigm, Cloud Computing, has emerged. Despite the clear advantages it brings, few existing platforms have been designed to operate on a Cloud architecture. Moreover, given the complexity of building a secure, flexible and scalable e-commerce platform (EP) from existing heterogeneous applications and services while meeting e-merchants' needs, it is legitimate to ask whether a Cloud-based EP would really ease the difficulties e-merchants face. This thesis first validates the relevance of using Cloud Computing in an e-commerce context, then proposes the architectural principles of an open, scalable and secure EP based on a Cloud architecture. In addition, the way an e-merchant sets up an EP today is not user-centric; we therefore propose a user-centric mechanism that simplifies the design and implementation of an EP while ensuring a high level of security. Finally, we address the following question in an e-commerce context: how can we ensure that no unauthorized entity can infer activity from the observed size of a database? In response, we propose a user-centric data-concealment security solution that achieves strong data confidentiality within relational database management systems (RDBMS)
APA, Harvard, Vancouver, ISO, and other styles
50

Gueguen, Juliette. "Evaluation des médecines complémentaires : quels compléments aux essais contrôlés randomisés et aux méta-analyses ?" Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS072/document.

Full text
Abstract:
Complementary medicines are numerous and varied, and their use is widespread and increasing. The quality and quantity of evaluation data depend on the practice, but there are few consensual conclusions about their effectiveness, even where the literature is abundant. We begin with an assessment of how well the conventional methods used for drug evaluation, namely randomized controlled trials (RCTs) and meta-analyses, suit the evaluation of complementary medicines. Through three practical applications, we then consider the contribution of other methods, less recognized to date in the field of evidence-based medicine but able to shed light from other perspectives. In particular, we discuss the advantages of mixed methods, qualitative studies and the exploitation of large administrative health databases. We conduct a mixed-methods review of hypnosis for labor and childbirth, a qualitative study of the experience of qi gong among patients hospitalized for severe anorexia nervosa, and a study of the potential of the French national health insurance database (SNIIRAM) for evaluating complementary medicines. The first two axes lead us to question the choice of outcomes and measurement instruments used in RCTs and to give greater weight and legitimacy to the patient's perspective. More broadly, they invite us to challenge the supremacy traditionally granted to quantitative studies and to replace this hierarchical view with a synergistic vision of qualitative and quantitative approaches. The third axis identifies the current limits to using the SNIIRAM for evaluating complementary medicines, both technically and in terms of representativeness.
We propose concrete measures to make its exploitation possible and relevant in this field. Finally, in the general discussion, we take into account that the evaluation of complementary medicines is not part of a marketing-authorization process: unlike drug evaluation, it does not always aim at decision making. We emphasize the importance of considering the aim (knowledge or decision) when designing a research strategy, and we propose two different strategies based on the literature and on the results of our three applications. For the decision-oriented strategy, we show the importance of defining the intervention, identifying the relevant outcomes and optimizing the intervention before carrying out pragmatic trials to evaluate real-life effectiveness. We discuss the regulatory challenges that evaluating these practices raises, and we stress the need to assess their safety by developing appropriate monitoring systems
APA, Harvard, Vancouver, ISO, and other styles
