
Dissertations / Theses on the topic 'System entity'



Consult the top 50 dissertations / theses for your research on the topic 'System entity.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Crawley, Stephen Christopher. "The entity system : an object-based filing system." Thesis, University of Cambridge, 1986. https://www.repository.cam.ac.uk/handle/1810/250882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Veselý, Mikuláš. "Simulace chodu podniku Prádelna Lety." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-81880.

Full text
Abstract:
This diploma thesis deals with simulation models. It is divided into three parts: a theoretical part, a part describing a simulation program and a practical part. The opening part introduces the reader to simulation models, explains how a simulation project is built and defines some key terms. The second part explains how to work with the simulation program Simprocess. In the third part the reader sees how the theoretical knowledge is applied, demonstrated on a simulation of the operations of the company Prádelna Lety. The goal of this part is to build a simulation model that describes reality and can address the main problems identified at the beginning of the part; all of this is the object of the analysis. On that basis it is possible to give some advice on improving the operations.
APA, Harvard, Vancouver, ISO, and other styles
3

Hansen, Hugo, and Oliver Öhrström. "Benchmarking and Analysis of Entity Referencing Within Open-Source Entity Component Systems." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20074.

Full text
Abstract:
Runtime performance is essential for real-time games: the faster a game can run, the more features designers can put into the game to accomplish their vision. A popular architecture for video games is the Entity Component System architecture, aimed at improving both object composition and performance. There are many tests of how this architecture performs under its optimal linear execution. This thesis presents a performance comparison of how several popular open-source Entity Component System libraries perform when fetching data from other entities during iteration. An object-oriented test is also done to compare against and to verify whether the known drawbacks of object orientation can still be seen within these test cases. Our results show that doing a random lookup during iteration can cause orders of magnitude worse performance for Entity Component Systems.
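To make the measured access pattern concrete, here is a minimal sketch (a simplified, invented sparse-set-style component storage, not code from any of the benchmarked libraries) contrasting a linear pass over a dense component array with a pass that performs a random lookup of another entity's component on every iteration:

#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical minimal component storage: a dense array of components plus a
// map from entity id to the component's index (a simplified sparse-set layout).
struct Position { float x, y, z; };

struct PositionStore {
    std::vector<Position> dense;                  // contiguous component data
    std::unordered_map<uint32_t, size_t> index;   // entity id -> dense index

    Position& get(uint32_t entity) { return dense[index.at(entity)]; }
};

// Linear pass: touches the dense array front to back (cache friendly).
float sum_linear(const PositionStore& s) {
    float acc = 0.f;
    for (const Position& p : s.dense) acc += p.x + p.y + p.z;
    return acc;
}

// Pass with entity referencing: each iteration also fetches the component of
// some other entity (e.g. a "target"), which forces an indirect, effectively
// random lookup and breaks the purely linear access pattern.
float sum_with_references(PositionStore& s, const std::vector<uint32_t>& target) {
    float acc = 0.f;
    for (size_t i = 0; i < s.dense.size(); ++i)
        acc += s.dense[i].x + s.get(target[i]).x;  // random lookup per entity
    return acc;
}

int main() {
    PositionStore store;
    std::vector<uint32_t> target;
    for (uint32_t e = 0; e < 1000; ++e) {
        store.index[e] = store.dense.size();
        store.dense.push_back({float(e), 0.f, 0.f});
        target.push_back((e * 7919u) % 1000u);     // pseudo-random reference to another entity
    }
    float result = sum_linear(store) + sum_with_references(store, target);
    return result > 0.f ? 0 : 1;                   // use the result so the loops are not optimized away
}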
APA, Harvard, Vancouver, ISO, and other styles
4

Seviṇc, Süleyman 1960. "ENTITY STRUCTURE REPRESENTATION FOR LOCAL AREA NETWORK SIMULATION (SYSTEM, EXPERT)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kelly, Michael Robert 1953. "Intelligent space laboratory organizational design using system entity structure concepts." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/291985.

Full text
Abstract:
This thesis is the product of a knowledge acquisition effort, whose objective was to obtain information essential to the modelling and simulation of a robotically operated laboratory on board the forthcoming space station "Freedom." The information is represented using the system entity structure, a knowledge representation scheme that utilizes artificial intelligence concepts. The system entity structure details the design information and associated knowledge required for the intelligent autonomous operation of the space-based laboratory. The approach is proven to be very beneficial for organizing and displaying the vast amounts of information that constitute this intricate system design. Knowledge management, representation, and the nature of a future software implementation are also addressed.
APA, Harvard, Vancouver, ISO, and other styles
6

Elsebai, A. "A rules based system for named entity recognition in modern standard Arabic." Thesis, University of Salford, 2009. http://usir.salford.ac.uk/14925/.

Full text
Abstract:
The amount of textual information available electronically has made it difficult for many users to find and access the right information within an acceptable time. Research communities in the natural language processing (NLP) field are developing tools and techniques to alleviate these problems and help users exploit these vast resources. These techniques include Information Retrieval (IR) and Information Extraction (IE). The work described in this thesis concerns IE and, more specifically, named entity extraction in Arabic. The Arabic language is of significant interest to the NLP community, mainly due to its political and economic significance, but also due to its interesting characteristics. Text usually contains all kinds of names, such as person names, company names, city and country names, sports teams, chemicals and many other names from specific domains. These names are called Named Entities (NE), and Named Entity Recognition (NER), one of the main tasks of IE systems, seeks to locate these names automatically and classify them into predefined categories. NER systems are developed for different applications and can be beneficial to other information management technologies, as they can be built on top of an IR system or used as the base module of a data mining application. In this thesis we propose an efficient and effective framework for extracting Arabic NEs from text using a rule-based approach. Our approach makes use of Arabic contextual and morphological information to extract named entities. The context is represented by means of words that are used as clues for each named entity type. Morphological information is used to detect the part of speech of each word given to the morphological analyzer. Subsequently we developed and implemented our rules in order to recognise each position of the named entity. Finally, our system implementation, evaluation metrics and experimental results are presented.
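The general shape of such a rule-based recognizer can be sketched as follows; the trigger words and patterns are illustrative English placeholders only, not the Arabic contextual cues or morphological features used in the thesis:

#include <iostream>
#include <regex>
#include <string>
#include <vector>

// One rule = a trigger/context pattern plus the entity type it assigns.
struct Rule { std::regex pattern; std::string type; };

// Illustrative rules only: a real system like the one described would rely on
// Arabic trigger words and part-of-speech output from a morphological analyzer.
std::vector<Rule> make_rules() {
    return {
        {std::regex(R"((?:Mr\.|Dr\.|President)\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?))"), "PERSON"},
        {std::regex(R"((?:in|from|to)\s+([A-Z][a-z]+))"), "LOCATION"},
    };
}

int main() {
    std::string text = "President Obama travelled from Cairo to Amman with Dr. Ahmed.";
    for (const Rule& rule : make_rules()) {
        for (std::sregex_iterator it(text.begin(), text.end(), rule.pattern), end; it != end; ++it)
            std::cout << rule.type << ": " << (*it)[1] << '\n';   // capture group = the entity
    }
}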
APA, Harvard, Vancouver, ISO, and other styles
7

Bhattad, Pradnya Brijmohan, MohD Ibrahim, Omer Sheikh, and Debalina Das. "Plesimonas Shigelloides Induced Crohn’s Disease Flare-A Rare Entity." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/asrf/2021/presentations/50.

Full text
Abstract:
Crohn’s disease is an inflammatory bowel disease that may involve any part of the gastrointestinal tract, with a variety of extraintestinal features. A flare of Crohn’s disease may present as partial small bowel obstruction or peritonitis. Dehydration, infectious agents, and cigarette smoking are some of the factors linked to a relapse of Crohn’s disease. Plesimonas Shigelloides, a bacterium that belongs to the Enterobacteriaceae group, may rarely lead to a flare of Crohn’s disease. We describe the case of a 31-year-old male with Crohn’s disease who developed a flare triggered by Plesimonas Shigelloides infection, presenting as partial small bowel obstruction with ileal narrowing and regional lymphadenopathy that responded to immunosuppressants.
APA, Harvard, Vancouver, ISO, and other styles
8

Alanazi, Saad. "A Named Entity Recognition system applied to Arabic text in the medical domain." Thesis, Staffordshire University, 2017. http://eprints.staffs.ac.uk/3129/.

Full text
Abstract:
Currently, 30-35% of the global population uses the Internet. Furthermore, there is a rapidly increasing number of non-English language internet users, accompanied by an also increasing amount of unstructured text online. One area replete with underexploited online text is the Arabic medical domain, and one method that can be used to extract valuable data from Arabic medical texts is Named Entity Recognition (NER). NER is the process by which a system can automatically detect and categorise Named Entities (NE). NER has numerous applications in many domains, and medical texts are no exception. NER applied to the medical domain could assist in detection of patterns in medical records, allowing doctors to make better diagnoses and treatment decisions, enabling medical staff to quickly assess a patient's records and ensuring that patients are informed about their data, as just a few examples. However, all these applications would require a very high level of accuracy. To improve the accuracy of NER in this domain, new approaches need to be developed that are tailored to the types of named entities to be extracted and categorised. In an effort to solve this problem, this research applied Bayesian Belief Networks (BBN) to the process. BBN, a probabilistic model for prediction of random variables and their dependencies, can be used to detect and predict entities. The aim of this research is to apply BBN to the NER task to extract relevant medical entities such as disease names, symptoms, treatment methods, and diagnosis methods from modern Arabic texts in the medical domain. To achieve this aim, a new corpus related to the medical domain has been built and annotated. Our BBN approach achieved a 96.60% precision, 90.79% recall, and 93.60% F-measure for the disease entity, while for the treatment method entity, it achieved 69.33%, 70.99%, and 70.15% for precision, recall, and F-measure, respectively. For the diagnosis method and symptom categories, our system achieved 84.91% and 71.34%, respectively, for precision, 53.36% and 49.34%, respectively, for recall, and 65.53% and 58.33%, for F-measure, respectively. Our BBN strategy achieved good accuracy for NEs in the categories of disease and treatment method. However, the average word length of the other two NE categories observed, diagnosis method and symptom, may have had a negative effect on their accuracy. Overall, the application of BBN to Arabic medical NER is successful, but more development is needed to improve accuracy to a standard at which the results can be applied to real medical systems.
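For reference, the F-measure figures quoted above are the harmonic mean of precision and recall; a quick sanity check of the reported disease-entity numbers:

#include <cstdio>

// F-measure as the harmonic mean of precision and recall.
double f_measure(double precision, double recall) {
    return 2.0 * precision * recall / (precision + recall);
}

int main() {
    // Disease-entity figures reported in the abstract.
    std::printf("F = %.2f%%\n", f_measure(96.60, 90.79));  // prints F = 93.60%
}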
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Cheng. "A system to support clerical review, correction and confirmation assertions in entity identity information management." Thesis, University of Arkansas at Little Rock, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3711495.

Full text
Abstract:

Clerical review of Entity Resolution (ER) is crucial for maintaining the entity identity integrity of an Entity Identity Information Management (EIIM) system. However, the clerical review process presents several problems. These problems include Entity Identity Structures (EIS) that are difficult to read and interpret, excessive time and effort to review a large Identity Knowledgebase (IKB), and the duplication of effort in repeatedly reviewing the same EIS in the same EIIM review cycle or across multiple review cycles. Although the original EIIM model envisioned and demonstrated the value of correction assertions, these are applied to correct errors after they have been found. The original EIIM design did not focus on the features needed to support the process of clerical review needed to find these errors.

The research presented here extends and enhances the original EIIM model in two very significant ways. The first is a design for a pair of confirmation assertions that complement the original set of correction assertions. The confirmation assertions confirm correct linking decisions so that they can be excluded from further clerical review. The second is a design and demonstration of a comprehensive visualization system that supports clerical review, and both correction and confirmation assertion configurations in EIIM. This dissertation also describes how the confirmation assertions and the new visualization system have been successfully integrated into the OYSTER open source EIIM framework.

APA, Harvard, Vancouver, ISO, and other styles
10

Chau, Ting-Hey. "Translation Memory System Optimization : How to effectively implement translation memory system optimization." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169218.

Full text
Abstract:
Translation of technical manuals is expensive, especially when a larger company needs to publish manuals for its whole product range in over 20 different languages. When a text segment (i.e. a phrase, sentence or paragraph) is manually translated, we would like to reuse these translated segments in future translation tasks. A translated segment is stored with its corresponding source language, often called a language pair, in a Translation Memory System. A language pair in a Translation Memory represents a Translation Entry, also known as a Translation Unit. During a translation, when a text segment in a source document matches a segment in the Translation Memory, the available target languages in the Translation Unit will not require a human translation; the previously translated segment can be inserted into the target document. Such functionality is provided in the single-source publishing software Skribenta, developed by Excosoft. Skribenta requires text segments in source documents to find an exact or a full match in the Translation Memory in order to apply a translation to a target language. A full match can only be achieved if a source segment is stored in a standardized form, which requires manual tagging of entities and of often reoccurring words such as model names and product numbers. This thesis investigates different ways to improve and optimize a Translation Memory System. One way was to aid users with the work of manually tagging entities, by developing heuristic algorithms to approach the problem of Named Entity Recognition (NER). The evaluation results from the developed heuristic algorithms were compared with the results from an off-the-shelf NER tool developed by Stanford. The results show that the developed heuristic algorithms are able to achieve a higher F-measure compared to the Stanford NER, and may be a good initial step to aid Excosoft's users in improving their Translation Memories.
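The role of entity tagging in reaching a full match can be sketched roughly as follows; the normalisation rule and placeholder syntax are invented for illustration and do not reflect Skribenta's actual format or the heuristics developed in the thesis:

#include <iostream>
#include <regex>
#include <string>
#include <unordered_map>

// Replace product-number-like tokens with a placeholder so that segments which
// differ only in such entities normalise to the same key (illustrative rule).
std::string normalize(const std::string& segment) {
    static const std::regex product_number(R"(\b[A-Z]{2,}-\d+\b)");
    return std::regex_replace(segment, product_number, "<ENTITY>");
}

int main() {
    // Translation memory: normalised source segment -> stored target segment.
    std::unordered_map<std::string, std::string> tm = {
        {"Tighten the bolts on model <ENTITY>.", "Dra åt bultarna på modell <ENTITY>."},
    };

    std::string source = "Tighten the bolts on model XR-2000.";
    auto hit = tm.find(normalize(source));
    if (hit != tm.end())
        std::cout << "Full match, reuse translation: " << hit->second << '\n';
    else
        std::cout << "No full match, human translation needed\n";
}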
APA, Harvard, Vancouver, ISO, and other styles
11

Lee, Hojun. "ONTOLOGY-BASED DATA FUSION WITHIN A NET-CENTRIC INFORMATION EXCHANGE FRAMEWORK." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/193779.

Full text
Abstract:
With the advent of Network-Centric Warfare (NCW) concepts, Command and Control (C2) Systems need efficient methods for communicating between heterogeneous systems. To extract or exchange various levels of information within the networks requires interoperability between human and machine as well as between machine and machine. This dissertation explores the Information Exchange Framework (IEF) concept of distributed data fusion sensor networks in Network-centric environments. It is used to synthesize integrative battlefield pictures by combining the Battle Management Language (BML) and System Entity Structure (SES) ontology framework for C2 systems. The SES is an ontology framework that can facilitate information exchange in a network environment. From the perspective of the SES framework, BML serves to express pragmatic frames, since it can specify the information desired by a consumer in an unambiguous way. This thesis formulates information exchange in the SES ontology via BML and defines novel pruning and transformation processes of the SES to extract and fuse data into higher level representations. This supports the interoperability between human users and other sensor systems. The efficacy of such data fusion and exchange is illustrated with several battlefield scenario examples. A second intercommunication issue between sensor systems is how to ensure efficient and effective message passing. This is studied by using Cursor-on-Target (CoT), an effort to standardize a battlefield data exchange format. CoT regulates only a few essential data types as standard and has a simple and efficient structure to hold a wide range of message formats used in dissimilar military enterprises. This thesis adopts the common message type into radar sensor networks to manage the target tracking problem in distributed sensor networks. To demonstrate the effectiveness of the proposed Information Exchange Framework for data fusion systems, we illustrate the approach in an air defense operation scenario using DEVS modeling and simulation. The examples depict a basic air defense operation procedure. The demonstration shows that the information requested by a commander is delivered in the right way at the right time so that it can support agile decision making against threats.
APA, Harvard, Vancouver, ISO, and other styles
12

Sekula, Radim. "Návrh databázového systému pro správu akcí." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2017. http://www.nusl.cz/ntk/nusl-318631.

Full text
Abstract:
The Master’s thesis deals with the proposal of a database system for event management according to the requirements of the client, which is ANeT-Advanced Network Technology, Ltd. The first part of the thesis summarizes the theoretical bases of database systems and object-oriented programming, focusing on the .NET Framework. In the second part the current situation and the specific needs of the company are analyzed, and based on this analysis a solution proposal is created in the third part.
APA, Harvard, Vancouver, ISO, and other styles
13

Brelage, Christian S. "Web information system development conceptual modelling of navigation for satisfying information needs." Berlin Logos-Verl, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2793065&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Tawadros, Fady, Sakshi Singal, Maria Zayko, and Devapiran Jaishankar. "Mucosal Associated Lymphoid tissue of the Skin, A Common Entity in a Rare Location." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/asrf/2019/schedule/55.

Full text
Abstract:
Marginal zone (MZ) lymphomas (MZLs) represent a group of lymphomas originating from B lymphocytes of the “marginal zone”, which is the external part of the secondary lymphoid follicles. The WHO classifies MZL into 3 entities: extranodal MZL, splenic MZL and nodal MZL. Extranodal marginal zone lymphoma (EMZL) can arise in different tissues, including the stomach, salivary gland, lung, small bowel, thyroid, ocular adnexa and skin. We present a 25-year-old female with a history of angioedema and chronic cutaneous eczema who developed an unusual EMZL. The patient presented with a rapidly enlarging skin nodule on her left elbow that had been present for almost one year. Over a period of 2-3 weeks she felt the nodule rapidly changed in size and shape. Excisional biopsy of the mass revealed a lymphoid infiltrate based in the reticular dermis and focally extending into the subcutaneous adipose tissue, with formation of disrupted lymphoid follicles positive for CD20, CD23 and BCL2 but negative for CD10, Cyclin D1 and SOX11. The diagnosis was consistent with extranodal marginal zone lymphoma of mucosa-associated lymphoid tissue (MALT lymphoma). On presentation the patient did not have any B symptoms, other cutaneous lesions, lymphadenopathy or hepatosplenomegaly. A PET scan revealed no evidence of abnormal uptake, leading to a final Stage IE designation. The patient initiated definitive radiation therapy. EMZL accounts for 5-10% of non-Hodgkin lymphoma. It has often been described in organs that are normally devoid of germinal centers. It may arise in reactive lymphoid tissue induced by chronic inflammation at extranodal sites. Primary cutaneous marginal zone lymphoma (PCMZL) is associated with infectious etiologies such as Borrelia burgdorferi and, less commonly, with viral infections or autoimmune disorders. Autoimmune disorders, specifically Sjögren's syndrome, are associated with a 30-fold increased risk of marginal zone lymphoma. Localized disease can be treated with local radiotherapy, intralesional injections or excision. Widespread skin disease is usually treated with a CD20-directed monoclonal antibody, rituximab. Patients with PCMZL usually have an indolent clinical course. Extracutaneous dissemination of MALT lymphoma is uncommon and happens in 6-8% of patients. The 5-year overall survival is between 98% and 100%. Family physicians and dermatologists should have a high index of suspicion for this rare lymphoma subtype, especially in patients with chronic inflammatory skin conditions and atopy.
APA, Harvard, Vancouver, ISO, and other styles
15

Lin, Kan. "Design and Implementation of an Enterprise Entity Extraction System Based on a Multi-Source Data Match Model." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-197206.

Full text
Abstract:
With the rapid development of the Internet and the increase in the number of online users, there is a growing interest in commerce for SMEs. However, it remains a concern how to precisely describe small and medium-sized companies in the field of SME enterprises. The goal of this thesis is to define a service-oriented matching model based on a study of existing enterprise matching models and data formats. In this thesis, the author addresses a model that is able to present e-commerce SMEs through three dimensions: reputation, geographic information and enterprise type. The model would enable buyers to find better target enterprises from the range of SMEs and, at the same time, allow SMEs to present more reliable and complete information to buyers. The thesis develops and evaluates a prototype based on the matching model.
APA, Harvard, Vancouver, ISO, and other styles
16

Brostedt, Nathan. "Heterogeneous data in information system integration : A Case study designing and implementing an integration mechanism." Thesis, Uppsala universitet, Institutionen för informatik och media, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-325961.

Full text
Abstract:
The data of different information systems is heterogeneous. As systems are being integrated, it’s necessary to bridge inconsistencies to reduce heterogeneous data. To integrate heterogeneous systems a mediator can be used. The mediator acts as a middle layer for integrating systems; it handles transfers and translation of data between systems. A case study was conducted, developing a prototype of an integration mechanism for exchanging genealogical data that used the mediator concept. Further, a genealogical system was developed to make use of the integration mechanism, integrating with a genealogy service. To test the reusability of the integration mechanism, a file import/export system and a system for exporting data from the genealogy service to a file were developed. The mechanism was based on the use of a neutral entity model that integrating systems could translate to. A neutralizing/de-neutralizing mechanism was used for translating the data between the neutral entity model and a system-specific entity model. The integration mechanism was added to the genealogy system as an addon and was successful at integrating genealogy data. The outcomes included that the integration mechanism can be added as an addon to one or more systems being integrated, and that having a neutral entity model together with a neutralizing/de-neutralizing mechanism worked well.
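The neutralizing/de-neutralizing idea can be sketched like this; the entity types, field names and formats are invented for illustration and do not reflect the prototype's actual code or the genealogy service's data model:

#include <algorithm>
#include <iostream>
#include <string>

// Neutral entity model shared by all integrated systems (illustrative fields).
struct NeutralPerson {
    std::string given_name;
    std::string family_name;
    std::string birth_date;  // ISO 8601
};

// A system-specific model with its own conventions.
struct ServicePerson {
    std::string full_name;   // "Family, Given"
    std::string born;        // "YYYYMMDD"
};

// Neutralize: system-specific -> neutral.
NeutralPerson neutralize(const ServicePerson& p) {
    auto comma = p.full_name.find(", ");
    return {p.full_name.substr(comma + 2), p.full_name.substr(0, comma),
            p.born.substr(0, 4) + "-" + p.born.substr(4, 2) + "-" + p.born.substr(6, 2)};
}

// De-neutralize: neutral -> system-specific.
ServicePerson deneutralize(const NeutralPerson& p) {
    std::string compact = p.birth_date;
    compact.erase(std::remove(compact.begin(), compact.end(), '-'), compact.end());
    return {p.family_name + ", " + p.given_name, compact};
}

int main() {
    ServicePerson from_service{"Lind, Anna", "19030412"};
    NeutralPerson neutral = neutralize(from_service);   // into the mediator
    ServicePerson back = deneutralize(neutral);          // out to another system
    std::cout << neutral.given_name << " " << neutral.family_name
              << " born " << neutral.birth_date << " -> " << back.full_name << "\n";
}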
APA, Harvard, Vancouver, ISO, and other styles
17

Yosef, Mohamed Amir [Verfasser], and Gerhard [Akademischer Betreuer] Weikum. "U-AIDA : a customizable system for named entity recognition, classification, and disambiguation / Mohamed Amir Yosef. Betreuer: Gerhard Weikum." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2016. http://d-nb.info/1083894722/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Brelage, Christian S. "Web information system development : conceptual modelling of navigation for satisfying information needs /." Berlin : Logos-Verl, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=014831687&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Cheon, Saehoon. "Experimental Frame Structuring For Automated Model Construction: Application to Simulated Weather Generation." Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/195473.

Full text
Abstract:
The source system is the real or virtual environment that we are interested in modeling. It is viewed as a source of observable data, in the form of time-indexed trajectories of variables. The data that has been gathered from observing or experimenting with a system is called the system behavior database. The time-indexed trajectories of variables provide an important clue for composing the DEVS (Discrete Event System Specification) model. Once an event set is derived from the time-indexed trajectories of variables, the DEVS model formalism can be extracted from the given event set. The process must not be simple model generation but meaningful model structuring of a request. The source data and the query designed with the SES are converted to XML metadata by an XML conversion process. The SES serves as a compact representation for organizing all possible hierarchical compositions of a system, so it plays an important role in designing the structural representation of the query and of the source data to be saved. For a real data application, model structuring with the US Climate Normals is introduced. Moreover, complex systems can be developed at different levels of resolution. When the huge amount of source data in the US Climate Normals is used for the DEVS model, model complexity is unavoidable. This issue is dealt with by creating an equivalent lumped model based on the concept of morphism. Two methods to define the resolution level are discussed: fixed and dynamic definition. Aggregation is also discussed as one of the approaches to model abstraction. Finally, this work introduces the process of integrating the DEVSML (DEVS Modeling Language) engine with the DEVS model creation engine for a Web Service Oriented Architecture.
APA, Harvard, Vancouver, ISO, and other styles
20

Quintero, Michael C. "Constructing a Clinical Research Data Management System." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/7081.

Full text
Abstract:
Clinical study data is usually collected without knowing in advance what kind of data is going to be collected. In addition, all of the possible data points that can apply to a patient in any given clinical study are almost always a superset of the data points that are actually recorded for a given patient. As a result, clinical data resembles a set of sparse data with an evolving data schema. To help researchers at the Moffitt Cancer Center better manage clinical data, a tool called GURU was developed that uses the Entity Attribute Value model to handle sparse data and allow users to manage a database entity’s attributes without any changes to the database table definition. The Entity Attribute Value model’s read performance gets faster as the data gets sparser, but it was observed to perform many times worse than a wide table if the attribute count is not sufficiently large. Ultimately, the design trades read performance for flexibility in the data schema.
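The Entity Attribute Value layout mentioned here stores one row per (entity, attribute, value) triple instead of one wide row per entity; a minimal sketch of the idea (the schema and names are illustrative, not GURU's actual design):

#include <iostream>
#include <string>
#include <vector>

// One row per recorded fact: only data points that were actually collected
// for a patient are stored, so sparse data costs nothing for missing values.
struct EavRow {
    int entity_id;            // e.g. a patient or study subject
    std::string attribute;    // e.g. "tumor_stage", "hemoglobin"
    std::string value;        // stored as text; typed value columns are a common variant
};

// Reading an entity back means gathering all of its rows ("pivoting"), which is
// why EAV reads can be slower than a single wide-table row when data is dense.
std::vector<EavRow> fetch(const std::vector<EavRow>& table, int entity_id) {
    std::vector<EavRow> result;
    for (const EavRow& row : table)
        if (row.entity_id == entity_id) result.push_back(row);
    return result;
}

int main() {
    std::vector<EavRow> table = {
        {1, "tumor_stage", "II"}, {1, "hemoglobin", "13.2"}, {2, "tumor_stage", "I"},
    };
    for (const EavRow& row : fetch(table, 1))
        std::cout << row.attribute << " = " << row.value << '\n';
}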
APA, Harvard, Vancouver, ISO, and other styles
21

Waqar, Hafiz Umer. "Comparison between .NET and Java EE Implementation of Cash & Bank System." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-23415.

Full text
Abstract:
The demands on software engineering increase every day, and every organization is interested in moving its daily routine work into computerized systems to improve efficiency and data accuracy. Most organizations demand various kinds of computerized software solutions developed in modern technologies. Several software development technologies have become popular over time and provide high-quality products to their users. It is not easy to decide which technology to choose for developing the computerized systems into which most software falls. The primary purpose of this thesis is to compare two different modern software development technologies by developing an application with the same functional requirements; the report should help the reader with the selection of a software development technology. A cash and bank financial application was chosen to be developed using .NET and Java Enterprise Edition. The Microsoft Visual Studio development environment was used for the .NET development and the NetBeans IDE for the Java Enterprise Edition application. The .NET framework contains many interrelated languages such as C#, VB, J#, C++ and F#. The cash and bank application was developed in the C# and Java programming languages, and persistent storage of data was designed in Microsoft SQL Server. The application was developed with a three-tier architecture and a layered approach. The syntax of .NET and Java Enterprise Edition is quite similar, but the procedures for establishing connections, retrieving data and handling data differ. The .NET platform provides built-in libraries, components and efficient user controls that help with fast development in a short period of time.
APA, Harvard, Vancouver, ISO, and other styles
22

Švábenský, David. "Návrh projektu informačního systému malé firmy." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2008. http://www.nusl.cz/ntk/nusl-221938.

Full text
Abstract:
The objective of this dissertation is to suggest the structure of a new IS in the company that suits the current data flows and simultaneously satisfies the needs of the users. According to the suggestions and concepts mentioned in the thesis, it should be possible to set up a completely functional IS.
APA, Harvard, Vancouver, ISO, and other styles
23

Capdevila, Ibañez Bruno. "Serious game architecture and design : modular component-based data-driven entity system framework to support systemic modeling and design in agile serious game developments." Paris 6, 2013. http://www.theses.fr/2013PA066727.

Full text
Abstract:
For the last ten years, we have witnessed how the inherent learning properties of videogames entice several creators into exploring their potential as a medium of expression for diverse and innovative (serious) purposes. Learning is at the core of the play experience, but it usually takes place in the affective and psychomotor domains. When the learning targets the serious content, cognitive/instructional designers must ensure its effectiveness in the cognitive domain. In such eminently multidisciplinary teams (game, technology, cognition, art), understanding and communication are essential for an effective collaboration from the early stage of inception. In a software engineering approach, we focus on the (multidisciplinary) activities of the development process rather than the disciplines themselves, with the intent to standardize and clarify the field. We then propose a software foundation that reinforces this multidisciplinary model thanks to an underdesign approach that favors the creation of collaborative design workspaces. Thereby, Genome Engine can be considered as a data-driven sociotechnical infrastructure that provides non-programmer developers, such as game designers and potentially cognitive designers, with a means to actively participate in the construction of the product design, rather than evaluating it once in usage time. Its architecture is based on a component-based application framework with an entity system of systems runtime object model, which contributes to modularity, reuse and adaptability, as well as providing familiar abstractions that ease communication. Our approach has been extensively evaluated through the development of several serious game projects.
APA, Harvard, Vancouver, ISO, and other styles
24

Hylock, Ray Hales. "Beyond relational: a database architecture and federated query optimization in a multi-modal healthcare environment." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2526.

Full text
Abstract:
Over the past thirty years, clinical research has benefited substantially from the adoption of electronic medical record systems. As deployment has increased, so too has the number of researchers seeking to improve the overall analytical environment by way of tools and models. Although much work has been done, there are still many uninvestigated areas; two of which are explored in this dissertation. The first pertains to the physical storage of the data itself. There are two generally accepted storage models: relational and entity-attribute-value (EAV). For clinical data, EAV systems are preferred due to their natural way of managing many-to-many relationships, sparse attributes, and dynamic processes along with minimal conversion effort and reduction in federation complexities. However, the relational database management systems on which they are implemented, are not intended to organize and retrieve data in this format; eroding their performance gains. To combat this effect, we present the foundation for an EAV Database Management System (EDBMS). We discuss data conversion methodologies, formulate the requisite metadata and partitioned type-sensing index structures, and provide detailed runtime and experimental analysis with five extant methods. Our results show that the prototype, EAVDB, reduces space and conversion requirements while enhancing overall query performance. The second topic concerns query performance in a federated environment. One method used to decrease query execution time, is to pre-compute and store "beneficial" queries (views). The View Selection Problem (VSP) identifies these views subject to resource constraints. A federated model, however, has yet to be developed. In this dissertation, we submit three advances in view materialization. First, a more robust optimization function, the Minimum-Maintenance View Selection Problem (MMVSP), is derived by combining existing approaches. Second, the Federated View Selection Problem (FVSP), built upon the MMVSP, and federated data cube lattice are formalized. The FVSP allows for multiple querying nodes, partial and full materialization, and data propagation constriction. The latter two are shown to greatly reduce the overall number of valid solutions within the solution space and thus a novel, multi-tiered approach is given. Lastly, EAV materialization, which is introduced in this dissertation, is incorporated into an expanded, multi-modal variant of the FVSP. As models and heuristics for both the federated and EAV VSP, to the best of our knowledge, do not exist, this research defines two new branches of data warehouse optimization. Coupled with our EDBMS design, this dissertation confronts two main challenges associated with clinical data warehousing and federation.
APA, Harvard, Vancouver, ISO, and other styles
25

Vencel, Jan. "Aplikace pro správu databáze plaveckého oddílu." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2019. http://www.nusl.cz/ntk/nusl-400229.

Full text
Abstract:
The diploma thesis focuses on the design, creation and implementation of a desktop application for managing the swimming database of the ASK Blansko swim club. It analyzes the current state of processes in the club and compares them with modern practice. It contains a description of all the functions of the application together with an economic evaluation of the project. It represents a complete solution replacing the inefficient and outdated way the club database has been kept. It emphasizes the successful implementation of the application in club practice.
APA, Harvard, Vancouver, ISO, and other styles
26

Esplund, Emil, and Elin Gosch. "Digitalisering av individuell studieplan : Från PDF till en grund för ett digitalt system." Thesis, Uppsala universitet, Institutionen för informatik och media, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413553.

Full text
Abstract:
This thesis aims to illustrate whether it is possible to digitize the individual study plan at an institution at Uppsala university. Digitalization is considered one of the biggest trends that affects today’s society and it has been shown to contribute to increased accessibility and efficiency in public administration. Uppsala university has a need for a digital system for the individual study plan. It is currently handled as a paper in the form of a PDF and is perceived as an inferior planning and monitoring tool, according to previous studies. The research work has been carried out based on the research strategy Design & creation. The result of the research process is an information model, a process model at different levels and a database. These models and database are based on previous research, existing documents and empirical material from interviews. Previous research includes a tool for digitalization, problems regarding identifiers and research regarding modeling. The existing documents comprise a previous study of the ISP, legislation, guidelines and general information regarding the ISP. The interviews were conducted with 9 informants who use the study plan within their role at Uppsala university. The models and database have been evaluated in a criteria-based interview with a subject matter expert and in a theory-based evaluation. The research results indicate that it is possible to digitize the study plan using the presented models and database. These models and database can be used with slight modifications to build a digital interface and a complete system for the ISP.
APA, Harvard, Vancouver, ISO, and other styles
27

Bravo, Serrano Àlex 1984. "BeFree : a text mining system for the extraction of biomedical information from literature." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/398300.

Full text
Abstract:
Current biomedical research needs to leverage the large amount of information reported in scientific publications. Automated text processing, commonly known as text mining, has become an indispensable tool to identify, extract, organize and analyze the relevant biomedical information from the literature. This thesis presents the BeFree system, a text mining tool for the extraction of biomedical information to support research in the genetic basis of disease and drug toxicity. BeFree can identify entities such as genes and diseases from a vast repository of biomedical text sources. Furthermore, by exploiting shallow and deep syntactic information of text, BeFree detects relationships between genes, diseases and drugs with a performance comparable to the state of the art. As a result, BeFree has been used in various applications in the biomedical field, with the aim of providing structured biomedical information for the development of knowledge and corpora resources. These resources are also available to the scientific community for the development of novel text mining tools.
APA, Harvard, Vancouver, ISO, and other styles
28

Kacíř, Pavel. "Obchodní právo v Čínské lidové republice." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-17070.

Full text
Abstract:
The main objective of this thesis is to map the present system of business law in China, identify the key factors that formed and determine its present shape, and compare its theoretical form with reality, so that the thesis may become a basis for further exploration and study of the Chinese system of business law. The thesis is divided into four sections. The topic of the first section is the sources of business law, their hierarchy and scope. The second part describes various types of business entities and their legal forms. The third part describes the current state of contract law, while the fourth part studies various means of resolving commercial disputes. The scope of this thesis does not cover business law in Taiwan or the special administrative regions of Hong Kong and Macau.
APA, Harvard, Vancouver, ISO, and other styles
29

Rašovská, Lucie. "Systémové pojetí ocenění nemovitosti v Brně ve Starém Lískovci." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-232678.

Full text
Abstract:
This thesis deals with a very topical issue in the sphere of expert activity, one which is also requested by the general public. It sets the task of improving, or better said simplifying, the approach to the appraisal of real estate with the aid of systemic methodology. As an example of the correct application of systemic methodology, a sample expert opinion is prepared for a real estate property (an apartment house) located in Brno – Starý Lískovec. Only after the evaluation of the current situation in appraising and a discussion of systemic methodology is the property in question appraised in accordance with the assignment using the systemic approach. A systemic approach to the appraisal of a property is an appropriate choice largely because it better ensures the indispensable fact that, in an expert opinion made by an expert, it is practically impossible to omit those important requisites which are inextricably linked to the expert opinion.
APA, Harvard, Vancouver, ISO, and other styles
30

Blom, Oskar, and Novan Kovan. "Implementering av ett bokningssystem med Google Calendar." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-43924.

Full text
Abstract:
This report describes the implementation of a booking system with the integration of Google Calendar API. The objective was primarily to evaluate the potential of a booking system where Google Calendar was used as schedules for staff. The project could also be used as a base system for customizing booking systems for different business models. The final system consisted of a website for making appointments, a Web API for communicating with the website, integration of Google Calendar API to retrieve and add appointments to the schedules of the staff and storing data in a database.
APA, Harvard, Vancouver, ISO, and other styles
31

Dostál, Jiří. "Systém pro zpracování dat z regulátoru HAWK firmy Honeywell." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316271.

Full text
Abstract:
This thesis deals with developing program components for the collection of semantically labeled data from Honeywell's Hawk controller. The basic principles and capabilities of development using the Niagara Framework, on which Hawk is based, are explained. Lastly, the specific components and an external database application are described.
APA, Harvard, Vancouver, ISO, and other styles
32

Gummus, Fredrik. "Utveckling av bokningssystem för Moridge AB." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-51582.

Full text
Abstract:
This report describes the development of a new booking system for the company Moridge AB. The booking system is composed of a driver section, where drivers can handle their bookings and work schedule, and an administration section for the management of all drivers and for statistics on customers' and drivers' bookings. The report goes through the entire development process, from system requirements to design planning to the final finished system, and ends with a concluding discussion. The booking system is a web application for mobile devices developed with ASP.NET MVC Razor and jQuery Mobile. The system consists of a secure login system with a connection to a database using Entity Framework and booking management via the Google Calendar API. The project placed great importance on user experience design (UX) and security. The user interfaces were developed with contemporary technologies and the user experience was verified with user tests. Security is ensured by the use of a strong cryptographic algorithm for password management and by authentication through ASP.NET Identity and Forms Authentication, which limits access to authorized users only.
APA, Harvard, Vancouver, ISO, and other styles
33

Moore, Gaylen Leslie. "From Chaos to Qualia: An Analysis of Phenomenal Character in Light of Process Philosophy and Self-Organizing Systems." Kent State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=kent1271691064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Supiratana, Panon. "Graphical visualization and analysis tool of data entities in embedded systems engineering." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-10428.

Full text
Abstract:
Several decades ago, computer control systems known as Electronic Control Units (ECUs) were introduced to the automotive industry. Mechanical hardware units have since then increasingly been replaced by computer-controlled systems to manage complex tasks such as airbags, ABS, cruise control and so forth. This has led to a massive increase in software functions and data, which all need to be managed. There are several tools and techniques for this; however, current tools and techniques for developing real-time embedded systems are mostly focused on software functions, not data. Those tools do not fully support developers in managing run-time data at design time. Furthermore, current tools do not focus on visualization of the relationships among data items in the system. This thesis is a part of previous work named the Data Entity approach, which prioritizes data management at the top level of the development life cycle. Our main contribution is a tool that introduces a new way to intuitively explore the run-time data items, produced and consumed by software components, that are utilized in the entire system. As a consequence, developers will achieve a better understanding of the utilization of data items in the software system. This approach enables developers and system architects to avoid redundant data as well as to find and remove stale data from the system. The tool also allows us to analyze conflicts regarding run-time data items that might occur between software components at design time.
A Data-Entity Approach for Component-Based Real-Time Embedded Systems Development
APA, Harvard, Vancouver, ISO, and other styles
35

Christoforidis, Constantin. "Optimizing your data structure for real-time 3D rendering in the web : A comparison between object-oriented programming and data-oriented design." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20048.

Full text
Abstract:
Performance is always a concern when developing real-time 3D graphics applications. The way programs are made today with object-oriented programming has certain flaws that are rooted in the methodology itself. By exploring different programming paradigms we can eliminate some of these issues and find what is best for programming in different areas. Because real-time 3D applications need high performance, the data-oriented design paradigm, which makes the data the center of the application, is experimented with. By using data-oriented design we can eliminate certain issues with object-oriented programming and deliver applications with improved performance, flexibility, and architecture. In this study, an experiment creating the same type of program with the help of the two different programming paradigms is made to compare their performance. Some additional upsides and downsides of the paradigms are also mentioned.

APA, Harvard, Vancouver, ISO, and other styles
36

Dziadzio, Pavel. "Framework pro dynamické vytváření informačního systému." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-235469.

Full text
Abstract:
This thesis analyzes and defines the requirements for a framework that supports quick and effortless development of business information systems. The main goals of the framework are to accelerate development, reduce cost, and improve overall product quality. The thesis also compares and evaluates existing tools. The result is a detailed design and implementation of a flexible solution that fulfills all defined requirements and removes the disadvantages of existing solutions. Possibilities and directions for further development of the framework are listed as future work.
APA, Harvard, Vancouver, ISO, and other styles
37

Nyberg, Frank. "Investigating the effect of implementing Data-Oriented Design principles on performance and cache utilization." Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185247.

Full text
Abstract:
Game engines process a lot of data under strict deadlines, so measures to increase performance are important in this area. Data-Oriented Design (DOD) promotes principles that are meant to increase performance through better cache utilization. The purpose of this thesis is to examine a selection of these principles to give a better understanding of how DOD affects CPU time and the rate of cache misses, with a focus on game development. More specifically, the principles examined are removal of run-time polymorphism, iteration over contiguous data, and lowering the amount of data in hot loops. The Entity-Component-System pattern, which is based upon DOD principles, is also examined. The approach was to first present a theoretical background on the subject and then to conduct tests by implementing a simulation of movement and collision detection utilizing said principles. The tests were written in C++ and executed on an Intel Core i7 4770k with no rendering. CPU time was measured in updated entities per μs, and cache utilization was measured as the cache miss rate. The results showed that the DOD principles did increase performance. The cache miss rate was also lower, except in the case of removing run-time polymorphism. The conclusion is that Data-Oriented Design, used in game development, is likely to result in better performance, mostly as a result of better cache utilization.
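The principles listed above can be illustrated with a small hot/cold data split (an illustrative sketch, not the thesis's benchmark code; the component layout is made up): only the data the movement loop needs is kept in the contiguous "hot" array, and no virtual calls are involved:

```cpp
#include <iostream>
#include <string>
#include <vector>

// "Hot" data: only what the movement loop reads and writes, stored
// contiguously so every cache line brought in is fully used.
struct Movement {
    float x, y;
    float vx, vy;
};

// "Cold" data: everything else about an entity, kept out of the hot loop.
struct Metadata {
    std::string name;
    int team;
};

void updateMovement(std::vector<Movement>& hot, float dt) {
    for (auto& m : hot) {            // contiguous iteration, no virtual calls
        m.x += m.vx * dt;
        m.y += m.vy * dt;
    }
}

int main() {
    std::vector<Movement> hot(10000, Movement{0.0f, 0.0f, 1.0f, 1.0f});
    std::vector<Metadata> cold(10000, Metadata{"entity", 0});

    for (int frame = 0; frame < 60; ++frame)
        updateMovement(hot, 1.0f / 60.0f);

    std::cout << "entity 0 moved to (" << hot[0].x << ", " << hot[0].y << ")\n";
    return 0;
}
```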
APA, Harvard, Vancouver, ISO, and other styles
38

Sun, Bowen. "Named entity recognition : Evaluation of Existing Systems." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11223.

Full text
Abstract:
Named Entity Recognition (NER), a subfield of information extraction, is becoming more and more important. It helps machines recognize proper nouns (entities) in text and associate them with appropriate types. Common types in NER systems are location, person name, date, address, etc. There are several NER systems in use. What is the core technology behind these systems? Which kind of system is better? How can this technology be improved in the future? This master's thesis presents both basic and detailed knowledge about NER. Three existing NER systems are chosen for evaluation: GATE, CRFClassifier, and LbjNerTagger. These systems are based on different NER technologies and can stand in for most existing NER systems. This thesis presents and evaluates the three systems and tries to identify the advantages and disadvantages of each.
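NER systems such as the three compared here are typically scored with precision, recall, and F1 over exact entity matches. The sketch below shows only that arithmetic on made-up data; it does not reflect the thesis's actual evaluation setup or the APIs of the evaluated tools:

```cpp
#include <iostream>
#include <set>
#include <string>
#include <tuple>

// An entity is identified by its token span and type, e.g. (0, 2, "PERSON").
using Entity = std::tuple<int, int, std::string>;

int main() {
    // Hypothetical gold annotations and system output for one sentence.
    std::set<Entity> gold      = {{0, 2, "PERSON"}, {5, 6, "LOCATION"}, {8, 9, "DATE"}};
    std::set<Entity> predicted = {{0, 2, "PERSON"}, {5, 6, "ORGANIZATION"}, {8, 9, "DATE"}};

    int true_positives = 0;
    for (const auto& e : predicted)
        if (gold.count(e)) ++true_positives;

    double precision = predicted.empty() ? 0.0 : double(true_positives) / predicted.size();
    double recall    = gold.empty()      ? 0.0 : double(true_positives) / gold.size();
    double f1        = (precision + recall > 0.0)
                         ? 2.0 * precision * recall / (precision + recall) : 0.0;

    std::cout << "precision=" << precision << " recall=" << recall << " F1=" << f1 << '\n';
    return 0;
}
```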
APA, Harvard, Vancouver, ISO, and other styles
39

Faltys, Miroslav. "Analýza nestátních neziskových organizací v oblasti vzdělávání ve Středočeském kraji." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-71704.

Full text
Abstract:
This master's thesis focuses on the analysis of NGOs in the field of education in the Central Bohemia Region. The thesis points out the relationship and development of the private, public, and non-profit sectors in education. The goal is to capture the development of NGOs active in the field of education in the Central Bohemia Region, to identify their status, to analyse the effectiveness of their activities, and to compare current non-profit organizations with a historic non-profit organization. The first part focuses on the historical development of NGOs in the field of education from the 17th century onwards. More specifically, it follows the development of and changes in their institutional arrangements, legal environment, and financing, with particular attention to the Piarist college in Slany. The second part analyses the current state of NGOs in the field of education in the Central Bohemia Region, focusing on church schools and publicly beneficial organizations. This analysis concerns the performance of non-profit organizations, their economic results, and their justification in education. The last part compares the historical non-profit organization with current ones.
APA, Harvard, Vancouver, ISO, and other styles
40

Jurevičiūtė, Vilma. "Vaistinės veiklos efektyvumo didinimas klientų lūkesčių tyrimo pagrindu." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140618_214949-57452.

Full text
Abstract:
Reflecting the changes in the health care system, pharmacy practice is becoming increasingly focused on the individual needs of the patient: in order to provide a quality pharmaceutical service, the patient and his or her needs come to the centre of the pharmacist's attention. Although the pharmaceutical service is regulated by the Pharmacy Law of the Republic of Lithuania, the provisions of Good Pharmacy Practice, and other legal acts, the practice of Lithuania and other countries shows that these provisions are not always easy to follow; in addition, the legislation does not regulate how the individual needs and expectations of customers should be met in pharmacies. Understanding the expectations of patients, and knowing to what extent the institution is able to fulfil them, makes it possible to improve the quality of the pharmaceutical service and, at the same time, to increase customer satisfaction and loyalty to the pharmacy. The aim of this study is to develop methodological guidelines for identifying the expectations and needs of pharmacy customers, which could be used to identify risk factors in the provision of the pharmaceutical service and to select opportunities for optimizing pharmaceutical activity. To achieve this aim, an integral research model was developed; its structure is based on the identification of pharmaceutical service needs and expectations from the perspective of Lithuanian and European Union health policy guidelines and patterns of human motivation. Based on this model, expectation assessment tools were developed: questionnaires for pharmacy customers and staff. The results of the study revealed that... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
41

Mezgebe, Tsegay Tesfay. "Algorithmes inspirés des sociales pour concevoir des nouveaux systèmes de pilotage dans le contexte de l'usine du futur." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0006.

Full text
Abstract:
Traditional centralized control of manufacturing systems is no longer able to meet market needs in terms of customer expectations, product variety, and shorter product life cycles. In particular, the emergence of Cyber-Physical Systems (CPS), seen as interacting networks of physical and computational components, has provided the foundation for many new factory infrastructures and challenged centrally predictive control in responding to perturbations under current dynamic market conditions. An urgent change in requirements, for example, is one of the most common perturbations and can significantly disturb a central predictive plan. It is now accepted that distributed, agent-based control improves the reactivity needed to handle such perturbations until they are no longer limiting factors. This thesis explores the design of new control algorithms based on social interaction approaches, namely negotiation and consensus-based decision making. These algorithms take advantage of Industry 4.0 assets and encompass heterogeneous, intelligent entities (mainly product and resource entities) as well as discrete-event systems. Each entity may have different capabilities (evolution, learning, etc.), and the combined physical and control system can exhibit emergent behaviours that dynamically adapt to the perturbations encountered. Every intelligent entity decides when to broadcast its current state to its neighbours, and the local control decision depends on the behaviour of that state. To verify the effectiveness of the proposed negotiation and consensus algorithms, they were first modelled and formulated by networking all the intelligent entities, based on two industrial shop floors. The algorithms were then tested in simulation and experimentally on the flexible production cell of the TRACILOGIS platform, and compared with the reactive algorithm currently implemented on that platform, in order to show their applicability to priority-based product re-sequencing and rescheduling. The experimental results on TRACILOGIS showed that the negotiation and consensus algorithms minimized the impact of perturbations and achieved better global performance (e.g., minimized makespan) than the purely reactive control approach.
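A consensus step of the kind described above can be illustrated with a very small averaging scheme (a generic textbook-style sketch, not the algorithm implemented on the TRACILOGIS platform; the values and topology are made up):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Minimal averaging consensus on a ring of agents: each agent repeatedly
// replaces its local value (e.g. a proposed start time for a product) with
// the average of its own value and its two neighbours' values. The group
// converges towards a shared value without any central coordinator.
int main() {
    std::vector<double> value = {12.0, 15.0, 9.0, 14.0, 10.0};  // one value per agent

    for (int round = 0; round < 200; ++round) {
        std::vector<double> next(value.size());
        for (std::size_t i = 0; i < value.size(); ++i) {
            std::size_t left  = (i + value.size() - 1) % value.size();
            std::size_t right = (i + 1) % value.size();
            next[i] = (value[left] + value[i] + value[right]) / 3.0;  // local rule only
        }
        value = next;
    }

    std::cout << "agreed value: " << value[0] << '\n';
    return 0;
}
```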
APA, Harvard, Vancouver, ISO, and other styles
42

Wu, Christopher James. "SKEWER: Sentiment Knowledge Extraction With Entity Recognition." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1615.

Full text
Abstract:
The California state legislature introduces approximately 5,000 new bills each legislative session. While the legislative hearings are recorded on video, the recordings are not easily accessible to the public. The lack of official transcripts or summaries also increases the effort required to gain meaningful insight from those recordings. Therefore, the news media and the general population are largely oblivious to what transpires during legislative sessions. Digital Democracy, a project started by the Cal Poly Institute for Advanced Technology and Public Policy, is an online platform created to bring transparency to the California legislature. It features a searchable database of state legislative committee hearings, with each hearing accompanied by a transcript that was generated by an internal transcription tool. This thesis presents SKEWER, a pipeline for building a spoken-word knowledge graph from those transcripts. SKEWER utilizes a number of natural language processing tools to extract named entities, phrases, and sentiments from the transcript texts and aggregates the results of those tools into a graph database. The resulting graph can be queried to discover knowledge regarding the positions of legislators, lobbyists, and the general public towards specific bills or topics, and how those positions are expressed in committee hearings. Several case studies are presented to illustrate the new knowledge that can be acquired from the knowledge graph.
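The aggregation step of such a pipeline can be pictured with a small sketch (not SKEWER's code; the speakers, bills, and sentiment scores are invented) that folds individual utterances into speaker-to-bill edges carrying an average sentiment:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// One extracted utterance: who spoke, which bill it concerned, and a
// sentiment score produced by some upstream NLP step (values made up).
struct Mention {
    std::string speaker;
    std::string bill;
    double sentiment;   // negative = against, positive = in favour
};

int main() {
    std::vector<Mention> mentions = {
        {"Legislator A", "AB-123",  0.8},
        {"Legislator A", "AB-123",  0.4},
        {"Lobbyist B",   "AB-123", -0.6},
        {"Legislator A", "SB-9",   -0.2},
    };

    // Aggregate the (speaker -> bill) edges of the knowledge graph.
    std::map<std::pair<std::string, std::string>, std::pair<double, int>> edge;
    for (const auto& m : mentions) {
        auto& e = edge[{m.speaker, m.bill}];
        e.first  += m.sentiment;
        e.second += 1;
    }

    // Query: average stance of each speaker towards each bill.
    for (const auto& [key, agg] : edge)
        std::cout << key.first << " on " << key.second << ": "
                  << agg.first / agg.second << '\n';
    return 0;
}
```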
APA, Harvard, Vancouver, ISO, and other styles
43

McManigal, Chris A. "Towards More Comprehensive Information Retrieval Systems: Entity Extraction Using XSLT." UNF Digital Commons, 2005. http://digitalcommons.unf.edu/etd/222.

Full text
Abstract:
One problem in today's document management arena is retrieving information from electronic documents such as images, Microsoft Office documents, and e-mail. Specific data entities must be extracted from these documents so that the data can be searched and queried. This study presents a unique approach to extracting these entities: using Extensible Stylesheet Language Transformations (XSLT) to match patterns in text. Because XSLT is processed at run time, new XSLT templates can be created and used without having to recompile and redeploy the application. The specific implementation addressed in this project extracts entities from an image file. The data in the image file is converted to Extensible Markup Language (XML) text via optical character recognition (OCR), and this XML text is then transformed into an organized, well-formed XML output file using an XSLT template. We show that this approach can accurately retrieve the correct data and that the method can be extended to other electronic document sources.
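The thesis performs this matching with XSLT templates applied to the OCR output. Purely as an analogous illustration of the pattern-matching step, the C++ sketch below uses std::regex to pull a couple of entity types out of OCR text and wrap them in XML-style tags; the patterns and input are made up and do not come from the thesis:

```cpp
#include <iostream>
#include <regex>
#include <string>

// Pattern-match a few entity types in OCR text and emit them as XML-like
// elements. This mirrors only the idea of extraction by pattern matching;
// the actual work uses XSLT templates rather than C++ regular expressions.
int main() {
    std::string ocr_text =
        "Invoice 2024-0117 issued on 03/15/2024. Contact: billing@example.com";

    std::regex date_re(R"(\b\d{2}/\d{2}/\d{4}\b)");
    std::regex email_re(R"(\b[\w.+-]+@[\w-]+\.[\w.-]+\b)");

    for (auto it = std::sregex_iterator(ocr_text.begin(), ocr_text.end(), date_re);
         it != std::sregex_iterator(); ++it)
        std::cout << "<date>" << it->str() << "</date>\n";

    for (auto it = std::sregex_iterator(ocr_text.begin(), ocr_text.end(), email_re);
         it != std::sregex_iterator(); ++it)
        std::cout << "<email>" << it->str() << "</email>\n";

    return 0;
}
```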
APA, Harvard, Vancouver, ISO, and other styles
44

Higa, Kunihiko. "End user logical database design: The structured entity model approach." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184539.

Full text
Abstract:
We live in the Information Age. The effective use of information to manage organizational resources is the key to an organization's competitive power. Thus, a database plays a major role in the Information Age. A well designed database contains relevant, nonredundant, and consistent data. However, a well designed database is rarely achieved in practice. One major reason for this problem is the lack of effective support for logical database design. Since the late 1980s, various methodologies for database design have been introduced, based on the relational model, the functional model, the semantic database model, and the entity structure model. They all have, however, a common drawback: the successful design of database systems requires the experience, skills, and competence of a database analyst/designer. Unfortunately, such database analyst/designers are a scarce resource in organizations. The Structured Entity Model (SEM) method, as an alternative diagrammatic method developed by this research, facilitates the logical design phases of database system development. Because of the hierarchical structure and decomposition constructs of SEM, it can help a novice designer in performing top-down structured analysis and design of databases. SEM also achieves high semantic expressiveness by using a frame representation for entities and three general association categories (aspect, specialization, and multiple decomposition) for relationships. This also enables SEM to have high potential as a knowledge representation scheme for an integrated heterogeneous database system. Unlike most methods, the SEM method does not require designers to have knowledge of normalization theory in order to design a logical database. Thus, an end-user will be able to complete logical database design successfully using this approach. In this research, existing data models used for a logical database design were first studied. Second, the framework of SEM and the design approach using SEM were described and then compared with other data models and their use. Third, the effectiveness of the SEM method was validated in two experiments using novice designers and by a case analysis. In the final chapter of this dissertation, future research directions, such as the design of a logical database design expert system based on the SEM method and applications of this approach to other problems, such as the problems in integration of multiple databases and in an intelligent mail system, are discussed.
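The hierarchical structure and the three association categories mentioned above can be sketched as a small tree data structure (an illustrative toy, not the SEM notation or any tool from the dissertation; the example entities are invented):

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// A toy hierarchy in the spirit of the Structured Entity Model: every node is
// an entity, and each child link is labelled with one of the three general
// association categories named in the abstract.
enum class Association { Aspect, Specialization, MultipleDecomposition };

struct Entity {
    std::string name;
    std::vector<std::pair<Association, std::shared_ptr<Entity>>> children;
};

static const char* label(Association a) {
    switch (a) {
        case Association::Aspect:         return "aspect";
        case Association::Specialization: return "specialization";
        default:                          return "multiple decomposition";
    }
}

void print(const Entity& e, int depth = 0) {
    std::cout << std::string(depth * 2, ' ') << e.name << '\n';
    for (const auto& [assoc, child] : e.children) {
        std::cout << std::string(depth * 2 + 2, ' ') << "(" << label(assoc) << ")\n";
        print(*child, depth + 2);
    }
}

int main() {
    auto order   = std::make_shared<Entity>(Entity{"Order", {}});
    auto line    = std::make_shared<Entity>(Entity{"OrderLine", {}});
    auto rush    = std::make_shared<Entity>(Entity{"RushOrder", {}});
    auto billing = std::make_shared<Entity>(Entity{"BillingAspect", {}});

    order->children = {
        {Association::Aspect, billing},
        {Association::Specialization, rush},
        {Association::MultipleDecomposition, line},
    };
    print(*order);
    return 0;
}
```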
APA, Harvard, Vancouver, ISO, and other styles
45

Wallentin, Olof. "Component-Based Entity Systems : Modular Object Construction and High Performance Gameplay." Thesis, Uppsala universitet, Institutionen för speldesign, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-227240.

Full text
Abstract:
This bachelor thesis examines the design, implementation, and differences of game entity systems, with a focus on component-based structures. How design and implementation affect each other is discussed from both a technical and a design point of view, including possible drawbacks and advantages regarding game design iteration and performance. Since the focus is on component-based entity systems, a clarification of traditional entity systems is required; this thesis therefore covers entity systems that are traditional, property-based, container-based, and aggregated component-based. The design and implementation of each system are rooted in different generations of programming paradigms, resulting in compositional structures shaped by the hardware configurations of each era. This thesis analyses the progress of hardware alongside game entity system design to further understand its progression and evolution into today's standards and implementations. Details on each system are provided from a design perspective for the traditional entity system and with an in-depth view for the component-based entity systems.
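An aggregated component-based entity can be sketched in a few lines (illustrative only, not code from the thesis; the component types are made up): an entity is just an id, and game objects are composed by attaching components to it rather than by inheriting from a class hierarchy:

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// A small aggregated component-based entity system: behaviour and data come
// from the components attached to an entity id, not from an inheritance tree.
struct Position { float x = 0, y = 0; };
struct Health   { int hp = 100; };
struct Sprite   { std::string image; };

struct World {
    std::unordered_map<int, Position> positions;
    std::unordered_map<int, Health>   healths;
    std::unordered_map<int, Sprite>   sprites;
    int next_id = 0;

    int createEntity() { return next_id++; }
};

int main() {
    World world;

    // Compose different kinds of game objects from the same component set.
    int player = world.createEntity();
    world.positions[player] = {1.0f, 2.0f};
    world.healths[player]   = {150};
    world.sprites[player]   = {"player.png"};

    int decoration = world.createEntity();      // no Health component needed
    world.positions[decoration] = {5.0f, 5.0f};
    world.sprites[decoration]   = {"tree.png"};

    // A "system" iterates only over entities that own the relevant component.
    for (auto& [id, health] : world.healths)
        std::cout << "entity " << id << " has " << health.hp << " hp\n";
    return 0;
}
```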
APA, Harvard, Vancouver, ISO, and other styles
46

Lekberg, Daniel, and Patrik Danielsson. "Designing and Implementing Generic database Systems Based on the entity-attribute-value Model." Thesis, Karlstads universitet, Avdelningen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-27963.

Full text
Abstract:
The goal of this thesis is to produce a generic database system for an IT company called Ninetech. In this context, generic means that the database system can accept any input, within reason, without one having to create a specific database schema beforehand. The prototype consists of a database, a web service, and a dynamic-link library. The design and function of these three components are explained in chapter 2, alongside the difficulties surrounding the manipulation of arbitrary data. The prototype's performance is evaluated in chapter 3 and conclusions are drawn in chapter 4.
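The entity-attribute-value idea behind such a generic database can be pictured with a tiny in-memory sketch (not the Ninetech prototype, which uses a real database, web service, and DLL; the entities and attributes here are invented): every fact is stored as an entity-attribute-value triple, so no schema change is needed when new attributes appear:

```cpp
#include <iostream>
#include <map>
#include <string>

// In-memory illustration of the entity-attribute-value model: every fact is
// an (entity, attribute, value) triple. A real system would keep these rows
// in a database table; here they live in nested maps.
int main() {
    // entity id -> attribute name -> value (stored uniformly as text)
    std::map<int, std::map<std::string, std::string>> store;

    store[1]["type"]   = "customer";
    store[1]["name"]   = "Acme AB";
    store[2]["type"]   = "invoice";
    store[2]["amount"] = "1250.00";
    store[2]["due"]    = "2013-06-01";   // no schema change was needed for this

    for (const auto& [entity, attrs] : store)
        for (const auto& [attribute, value] : attrs)
            std::cout << entity << " | " << attribute << " | " << value << '\n';
    return 0;
}
```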
APA, Harvard, Vancouver, ISO, and other styles
47

Fu-Tang, Zhang, and Cheng Nong. "A PPK RE-entry Telemetry System." International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614706.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
This article introduces a new re-entry telemetry system. It is based on the theory of PCM/PPK and applies a wait-and-receive mode of the pulse modulation system. It has the capability to telemeter slow-changing, fast-changing, super-fast-changing, and height parameters. With memory and replay techniques, the system can telemeter full-range characteristic data as well as data from the blackout area. Microcomputers are used in both the airborne equipment and the ground equipment of the system. It features a digital interface, high precision, simple hardware, and high reliability.
APA, Harvard, Vancouver, ISO, and other styles
48

Bartánus, Radovan. "Návrh a implementace dílčí části IS pro advokátní kanceláře." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2019. http://www.nusl.cz/ntk/nusl-399352.

Full text
Abstract:
This diploma thesis covers the design and implementation of changes and extension modules for a chosen information system used in JUDr. Ján Bartánus's attorney office in Ružomberok. The system, implemented as a web application based on the Symfony framework, is dedicated to simplifying, monitoring, and organizing the work processes of a generic attorney office located in the Slovak Republic. For the purposes of a correct design, the thesis contains an analysis of the current state of the given attorney office, the environment in which it operates, its information system, and existing solutions. The tools used to conduct the analysis were the SLEPT method, Porter's five forces, the 7S model, and a SWOT matrix. Based on the results of the analyses, a change of the system was designed, which entails an upgrade of the system and its user interface, integration of the register of companies of the Slovak Republic, and an extension module intended for the financial valuation of work performed by an attorney. The application was developed using the PHP and JavaScript programming languages, and the valuation module integrates the Google Maps API. The stability and speed of the system were tested by functional tests and by profiling the duration of HTTP requests. The main contribution of this thesis is the design and implementation of an extension of the information system that can be used in any attorney office situated in the Slovak Republic, and its practical application in the chosen company. This thesis builds on a previous thesis dedicated to the design and implementation of the information system that is extended here.
APA, Harvard, Vancouver, ISO, and other styles
49

Teters, James C. II. "Enhancing entity level knowledge representation and environmental sensing in COMBATXXI using unmanned aircraft systems." Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37732.

Full text
Abstract:
Approved for public release; distribution is unlimited
Current modeling and simulation techniques may not adequately represent military operations using unmanned aircraft systems (UAS). A method to represent these conditions in a combat model can offer insight into the use and application of UAS operations, as well as an understanding of the sensitivity of simulation outcomes to the variability of UAS performance. Additionally, using combat model simulations that do not represent UAS behavior and the conditions that cause this variability may return misleading or incomplete results. Current approaches include explicit scripting of behaviors and events. We develop a proof-of-principle search, targeting, and acquisition (STA) model for use with UAS within COMBATXXI, leveraging existing STA research conducted at the MOVES Institute at the Naval Postgraduate School. These dynamic behaviors are driven by events as they unfold during the simulation run rather than relying on preplanned events as in the scripted approach. This allows the behaviors to be highly reusable, since they do not contain scenario- or incident-specific information. We demonstrate the application of the new STA model in a tactical convoy scenario in COMBATXXI. A design of experiments and post-analysis quantify the sensitivity of the measures of effectiveness to conditions contributing to variability in UAS performance.
APA, Harvard, Vancouver, ISO, and other styles
50

Appleton, Robert, Jose Casillas, Gregory Scales, Robert Green, Mellissa Niehoff, David Fitzgerald, and David Ouellette. "Advanced Restricted Area Entry Control System (ARAECS)." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/42720.

Full text
Abstract:
Approved for public release; distribution is unlimited
The Navy requires a capability for effective and efficient entry control for restricted areas that house critical assets. This thesis describes an Advanced Restricted Area Entry Control System (ARAECS) to meet this requirement. System requirements were obtained from existing governing documentation as well as stakeholder inputs. A functional architecture was developed and then modeled using the Imagine That Inc. ExtendSim tool. Factors affecting ARAECS operation were binned into physical, technology, Concept of Operations (CONOPS), and noise categories. An Overall Measure of Effectiveness was developed and a Design of Experiments (DOE) was conducted to measure the effects of these factors on ARAECS performance. The two main drivers were minimizing security violations while also maximizing personnel and vehicle throughput. Based on the modeling, an architecture was selected that best met the system objectives; this architecture relied on the ability to pre-screen 40% of the workforce based on security clearance and thus subject them to reduced random screening. The architecture was documented using the Vitech CORE tool, and use cases were developed and documented. A test and evaluation plan was developed and discussed. Risk was then examined, including technical, schedule, and cost risks.
APA, Harvard, Vancouver, ISO, and other styles
