To see the other types of publications on this topic, follow the link: Information system architecture model.

Dissertations / Theses on the topic 'Information system architecture model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Information system architecture model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Walsh, Daniel S. "A Conceptual Framework & Enterprise Architecture Model To Support Information Systems Technology." NSUWorks, 1992. http://nsuworks.nova.edu/gscis_etd/908.

Full text
Abstract:
The purpose of this dissertation is to present a conceptual framework and enterprise architectural model to support information systems technology. The dissertation first discusses several information technology (IT) problems facing a typical enterprise in today's dynamic business environment, such as ineffective data management, non-integrated and fragmented systems, excessive system delivery times, user dissatisfaction, availability management, connectivity issues, poor capacity and performance planning, inadequate data storage, ineffective security, and the question of centralization versus decentralization. Next, the rationale for the development of an effective enterprise architecture, together with a description of a new vision, is presented. An extensive literature review addressing the significant work and findings of subject-matter experts (SMEs) in the IT field is discussed. The dissertation then critiques and evaluates, with the assistance of these experts, fifteen already deployed information systems frameworks and architectures. Next, a view of information engineering (IE) as it relates to the enterprise architecture is addressed, because IE provides many of the foundations for the proposed enterprise architecture. The expanding role of IE has forced strategic systems planners to change the scope, objectives, style, and sources of expertise in planning. The dissertation then builds upon these concepts and proposes an enterprise architecture which provides enterprises with a structure that should allow them to support their visions, missions, objectives and goals. The enterprise architecture permits enterprises, by using open systems, to move computer application systems across different environments and platforms to various work groups and geographic locations within the various enterprises, and makes it possible for them to share processes and information with external business partners.
After presenting the environmental impacts and driving forces which influence the enterprise architecture, the dissertation subsequently details each of the building blocks which constitute the enterprise architecture. The dissertation concludes by addressing several of the peripheral considerations which impact the enterprise architecture. Among these are: business strategy development and the alignment of an organization's business strategies with its information systems strategies; the importance of perceiving information systems from a strategic perspective; strategic business initiatives; competitive positioning; an assessment of how risk analysis can be used to establish the business case for effective information systems; the development of a transition plan, and how an enterprise migrates from its embedded base of current information systems to a targeted portfolio of systems; and a possible interoperability architecture. The dissertation closes with concluding comments on the enterprise architecture and its impact on the enterprise. A definition of terms, concepts, acronyms and abbreviations is presented in Appendix A. Appendix B depicts some mappings to assist with functional data modeling. Appendix C presents a taxonomy of emerging technologies which should be considered by enterprises when addressing their technical architecture. Appendix D presents an enterprise architecture example and template.
APA, Harvard, Vancouver, ISO, and other styles
2

Essien, Joe. "Model driven validation approach for enterprise architecture and motivation extensions." Thesis, University of West London, 2015. https://repository.uwl.ac.uk/id/eprint/1269/.

Full text
Abstract:
As the endorsement of Enterprise Architecture (EA) modelling continues to grow in diversity and complexity, management of its schema, artefacts, semantics and relationships has become an important business concern. To maintain agility and flexibility within competitive markets, organizations have also been compelled to explore ways of adjusting proactively to innovations, changes and complex events, including by use of EA concepts to model business processes and strategies. Appropriate validation of EA taxonomies has therefore repeatedly been considered an essential requirement for these processes, in order to express business motivation and relate information systems to technological infrastructure. However, since many taxonomies deployed today use widespread and disparate modelling methodologies, adopting a generic validation approach remains a challenge. The proliferation of EA methodologies and perspectives has also led to intricacies in the formalization and validation of EA constructs, as models oftentimes have variant schematic interpretations. Thus, disparate implementations and inconsistent simulation of alignment between business architectures and heterogeneous application systems are common within the EA domain (Jonkers et al., 2003). In this research, the Model Driven Validation Approach (MDVA) is introduced. MDVA allows modelling of EA with validation attributes, formalization of the validation concepts and transformation of model artefacts to ontologies. The transformation simplifies querying based on motivation and constraints. As the extended methodology is grounded on the semiotics of existing tools, validation is executed using a ubiquitous query language. The major contributions of this work are the extension of the Business Layer metamodel of an EAF with a Validation Element and the development of an EAF-model-to-ontology transformation approach.
With this innovation, domain-driven design and object-oriented analysis concepts are applied to achieve validation of EAF models using an ontology-querying methodology. Additionally, the MDVA facilitates the traceability of EA artefacts using ontology graph patterns.
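The ontology-querying idea at the core of MDVA can be illustrated with a toy triple store and graph-pattern matcher; the element names, relations and the single-pattern `match` helper below are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of MDVA-style ontology querying: EA model elements become
# subject-predicate-object triples, and validation questions become graph
# patterns. All element names and relations here are invented.

triples = {
    ("Goal:ReduceCost", "motivates", "Process:Invoicing"),
    ("Process:Invoicing", "realizedBy", "App:ERP"),
    ("App:ERP", "hostedOn", "Node:Server1"),
    ("Goal:ReduceCost", "validatedBy", "Rule:CostKPI"),
}

def match(pattern):
    """Return variable bindings for a (s, p, o) pattern; '?x' slots are wildcards."""
    results = []
    for triple in triples:
        binding = {}
        for slot, value in zip(pattern, triple):
            if slot.startswith("?"):
                binding[slot] = value
            elif slot != value:
                break
        else:
            results.append(binding)
    return results

# Which goals carry a validation element, and which rule validates them?
goal_checks = match(("?goal", "validatedBy", "?rule"))
```

Chaining such single-pattern queries from goals down to infrastructure nodes is one simple way to realize the traceability the abstract describes.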
3

Junkert, Levi Daniel. "The grid overlay system model." Thesis, Montana State University, 2009. http://etd.lib.montana.edu/etd/2009/junkert/JunkertL0509.pdf.

Full text
Abstract:
The grid overlay system model is a new technique for forming a grid computing model for research computing. In this method we construct a grid that is dynamically allocated from a set of resources in a unique and progressive manner. This new system model allows for construction of virtual environments for execution of applications on many diverse shared resources. The system can dynamically scale to create a range of resources from a single machine to a virtual cluster of machines. This model provides a virtual container that can run legacy and customized software in an emulated environment or directly on the host's hardware through virtualization. Using this model on current consumer hardware allows for a unique blend of software containment with dynamic resource allocation. Our model, in combination with commercial off the shelf (COTS) hardware and software, is able to create a large grid system with multiple combinations of hardware and software environments. In our model we propose a unique set of abstraction layers for systems. The combination of our model with current consumer hardware and software provides a unique design principle for addressing grid implementation, hardware reusability, operating system deployment and implementation, virtualization in the grid, and user control techniques. This provides a robust and simple framework that allows for the construction of computational research solutions in which performance can be traded for flexibility, and vice versa. Our model can be applied to computational research grids, service oriented grids, and even scales to collections of mobile or embedded system grids.
4

Chivukula, Venkata Ramakrishna. "Detecting Cyber Security Anti-Patterns in System Architecture Models." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293027.

Full text
Abstract:
Organizations across the world have been on the receiving end of large-scale cyber-attacks. Over time, the number and the success of these attacks have grown to a high level. To prepare for these attacks, organizations have to test the resilience of their infrastructures. One way to manage the risk of these attacks and to ensure security is the use of threat modeling and attack simulations. Through threat modeling, organizations can analyze their infrastructure and identify vulnerabilities. The vulnerabilities then have to be patched to improve the overall security posture of the organization. When modeled, these vulnerabilities can occur in different forms. Certain vulnerabilities are specific to certain components in the system. On the other hand, some deficiencies occur in conjunction with multiple assets in the infrastructure. These are called structural deficiencies. Identifying and mitigating these structural deficiencies is very important. In this thesis, structural deficiencies are described and a catalog of some deficiencies is built through a survey. The deficiencies and the catalog are developed to work with Foreseeti AB's securiCAD modeling software. Further, a deficiency model is defined that enables the description and search of these deficiencies in securiCAD models. Using the description model, all occurrences of a deficiency can be found. These occurrences can then be replaced with structural improvements. The improved securiCAD models are then tested with simulations. The results from the simulations show that the structural improvements are useful in significantly reducing the Time-To-Compromise (TTC) of important assets. Using the catalog and the deficiency model, system administrators can identify deficiencies and test the effect of different improvements in the securiCAD model, which can then be applied to the actual infrastructure.
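The Time-To-Compromise reasoning described above can be sketched, in a much-simplified form, as a shortest-path computation over an attack graph; this is not securiCAD's probabilistic simulation engine, and the assets, edges and effort weights below are invented.

```python
import heapq

# Much-simplified illustration: treat Time-To-Compromise as the cheapest path
# through an attack graph whose edge weights are expected attacker effort in
# days. Assets and weights are invented for demonstration.

def ttc(graph, source, target):
    """Dijkstra shortest 'effort' from source asset to target asset."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")  # target unreachable

graph = {
    "internet": [("webserver", 2.0)],
    "webserver": [("db", 5.0), ("fileshare", 1.0)],
    "fileshare": [("db", 10.0)],
}
baseline = ttc(graph, "internet", "db")
hardened = {**graph, "webserver": [("fileshare", 1.0)]}  # remove direct db access
improved = ttc(hardened, "internet", "db")
```

A structural improvement (here, removing the direct webserver-to-database edge) raises the target's TTC, mirroring how the thesis evaluates improvements through simulation.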
5

Bernardo, Tomás. "A model for information architecture of government web sites in Southern Africa." Thesis, Nelson Mandela Metropolitan University, 2006. http://hdl.handle.net/10948/459.

Full text
Abstract:
The main purpose of this research is to investigate the Information Architecture (IA) of government web sites in Southern Africa. The government web sites of Mozambique and South Africa were selected for the purpose of this research. A further aim of this research was to derive a model for the IA of government web sites. The model was based on IA components and guidelines as well as on web site components and E-government requirements. The IA guidelines in the model were derived from general design guidelines and guidelines for government web sites. The IA guidelines in the proposed model were used to conduct an analytical and empirical evaluation of the selected sites. The selection of the sites was based on similarities in the Information and Communication Technology (ICT) strategies and policies, the annual Internet growth rates, the Internet user profiles and the E-government initiatives in both countries. Differences between the sites also contributed to their selection. Mozambique is one of the least developed countries in the world, while South Africa is one of the most developed countries in Africa. Heuristic evaluation was used for the analytical evaluation, while questionnaires and user testing were used for the empirical evaluation. Some of the usability problems identified in the heuristic evaluation, such as incorrect organisation of information and navigation issues, were also identified in the empirical evaluation, confirming the existence of these usability problems. The results of this research show that the IA of government web sites has an impact on user performance and satisfaction and that the proposed model can be used to design and evaluate the IA of government web sites in Southern Africa.
6

Yuan, Yulan. "Vista scenic beauty estimation model: An application of integrating neural net and geographic information system." Thesis, The University of Arizona, 1998. http://hdl.handle.net/10150/278676.

Full text
Abstract:
There are some issues that have to be addressed for further understanding and improving scenic beauty management. First, the conventional model, preference rating based on a fixed scene and direction, may not sufficiently reflect the reality of visual experience; rather, visual and scenic preference is construed as a spatial experience. Second, the predictors are chosen based on measuring the composition of landscape features shown in the image, and this measurement may not necessarily represent the contents of the physical environment. Third, judgements of scenic preference are complicated tasks. Simple linear regression analysis, with limited degrees of freedom and some statistical constraints, may not represent the complexity of human judgments. An integrated model was developed by integrating the Scenic Beauty Estimation (SBE) model (Terry, 1976), the geographic information system (GIS), and the artificial neural network (ANN). The results suggested the integrated model might be utilized as an automatic scenic preference mechanism for policy making. Implications for future research are also suggested.
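As a hedged sketch of the ANN side of such an integrated model, the following trains a single linear neuron on invented GIS-derived viewshed features; the thesis's actual network architecture, features and training data are not reproduced here.

```python
# Illustrative only: a single linear neuron regressing a scenic-preference score
# from invented viewshed fractions (forest, water, built-up). Data and feature
# names are hypothetical, chosen purely to demonstrate the mechanism.

def predict(weights, bias, features):
    return bias + sum(w * x for w, x in zip(weights, features))

def train(samples, lr=0.1, epochs=2000):
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, target in samples:
            err = predict(weights, bias, features) - target  # squared-error gradient
            bias -= lr * err
            weights = [w - lr * err * x for w, x in zip(weights, features)]
    return weights, bias

# (forest, water, built-up) viewshed fractions -> rated scenic beauty in [0, 1]
samples = [([0.9, 0.1, 0.0], 0.95), ([0.2, 0.0, 0.7], 0.20),
           ([0.5, 0.3, 0.1], 0.70), ([0.0, 0.1, 0.9], 0.05)]
weights, bias = train(samples)
green_view = predict(weights, bias, [0.7, 0.2, 0.0])  # expect a high score
urban_view = predict(weights, bias, [0.1, 0.0, 0.8])  # expect a low score
```

A real spatial model would feed such per-viewpoint scores back into the GIS layer, which is the integration the abstract describes.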
7

Holbert, Sally Beth 1962. "Development of a geographic information system based hydrologic model for stormwater management and landuse planning." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277108.

Full text
Abstract:
The HYDROPAC model was developed to improve the technology transfer from the science of hydrology to environmental planning disciplines by initiating advanced spatial analysis techniques for predicting rainfall-runoff relationships. This model integrates the Soil Conservation Service (SCS) equations for calculating runoff and a Geographic Information System (Map Analysis Package) in a framework that allows the simulation of runoff processes over a digital elevation model. The simulations are done in discrete time steps, allowing the generation of a hydrograph at any desired point in the watershed, while the overland flow patterns are displayed as maps. This framework addresses some of the current limitations of hydrologic models for stormwater management planning in terms of capabilities for analysis and communication of results. This manuscript describes the methods used to develop the framework of the HYDROPAC model and its usefulness for analyzing potential runoff problems during the planning process.
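The SCS runoff equations the abstract mentions are standard; a minimal per-cell sketch (with illustrative curve numbers, not values from the thesis) could look like:

```python
# Minimal sketch of the standard SCS curve-number runoff equation applied per
# grid cell, as HYDROPAC-style models do. Rainfall P and runoff Q are in inches;
# the curve numbers below are illustrative, not taken from the thesis.

def scs_runoff(p_inches, curve_number):
    s = 1000.0 / curve_number - 10.0  # potential maximum retention
    ia = 0.2 * s                      # initial abstraction (standard SCS ratio)
    if p_inches <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

# The same 3-inch storm over two cells with different land cover:
paved = scs_runoff(3.0, 98)   # nearly all rain runs off
wooded = scs_runoff(3.0, 60)  # most rain is retained
```

In a GIS framework, the curve number would come from overlaying soil and land-use layers, and the per-cell runoff would then be routed over the digital elevation model in discrete time steps.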
8

Närman, Per. "Enterprise Architecture for Information System Analysis : Modeling and assessing data accuracy, availability, performance and application usage." Doctoral thesis, KTH, Industriella informations- och styrsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101494.

Full text
Abstract:
Decisions concerning IT systems are often made without adequate decision-support. This has led to unnecessary IT costs and failures to realize business benefits. The present thesis presents a framework for analysis of four information systems properties relevant to IT decision-making. The work is founded on enterprise architecture, a model-based IT and business management discipline. Based on the existing ArchiMate framework, a new enterprise architecture framework has been developed and implemented in a software tool. The framework supports modeling and analysis of data accuracy, service performance, service availability and application usage. To analyze data accuracy, data flows are modeled; the service availability analysis uses fault tree analysis; the performance analysis employs queuing networks; and the application usage analysis combines the Technology Acceptance Model and Task-Technology Fit model. The accuracy of the framework's estimates was empirically tested. Data accuracy and service performance were evaluated in studies at the same power utility. Service availability was tested in multiple studies at banks and power utilities. Data was collected through interviews with system development or maintenance staff. The application usage model was tested in the maintenance management domain. Here, data was collected by means of a survey answered by 55 respondents from three power utilities, one manufacturing company and one nuclear power plant. The service availability studies provided estimates that were accurate within a few hours of logged yearly downtime. The data accuracy estimate was correct within a percentage point when compared to a sample of data objects. Deviations for four out of five service performance estimates were within 15% of measured values. The application usage analysis explained a high degree of variation in application usage when applied to the maintenance management domain.
During the studies of data accuracy, service performance and service availability, records were kept concerning the required modeling and analysis effort. The estimates were obtained with a total effort of about 20 man-hours per estimate. In summary, the framework should be useful for IT decision-makers requiring fairly accurate, but not too expensive, estimates of the four properties.
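The fault-tree availability analysis the thesis describes ultimately reduces to series/parallel availability arithmetic; the sketch below uses invented component availabilities and is not Närman's actual framework.

```python
# Hedged sketch of the availability arithmetic behind fault-tree analysis: a
# service is up only if all serial components are up, while redundant replicas
# fail only together. Component availabilities are invented, not from the thesis.

def series(*avail):
    """AND gate: every component must be available."""
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(*avail):
    """OR gate over redundancy: unavailable only if all replicas are down."""
    all_down = 1.0
    for a in avail:
        all_down *= 1.0 - a
    return 1.0 - all_down

# App server in series with a two-node database cluster and the network:
service = series(0.999, parallel(0.99, 0.99), 0.9995)
downtime_hours = (1.0 - service) * 8760  # expected yearly downtime
```

Estimates of this kind are what the thesis compares against a few hours of logged yearly downtime.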
9

Pořádek, Jiří. "Implementace informačního modelu v prostředí systémové architektury." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-198090.

Full text
Abstract:
This thesis deals with the field of information modeling and its relation to systems architecture. It is divided into two notional parts: theoretical (first and second chapters) and practical (third and fourth chapters). The first chapter explains the meaning of the term 'information model', compares it to the term 'data model' and then introduces its practical use in enterprise modeling. It also defines the term 'systems architecture' in its broad meaning and narrows it to three specific meanings (enterprise architecture, information architecture and information systems architecture), while explaining their relation to information modeling. Finally, this chapter provides the basis for the information model implementation methodology created later in the practical part of this thesis. The second chapter introduces the architecture framework called TM Forum Frameworx, specialized for telecommunication service providers. It consists of three standards for business process, information and application architecture. The second of these defines a specific information model which is described partly in the second chapter and partly in Appendix B of this thesis. The third chapter describes the first of the two contributions of this thesis: the information model implementation methodology. This methodology is applicable to any implementation of an information model. The three sections of this chapter contain diagrams and descriptions of the three phases of the methodology: the pre-implementation phase, the implementation phase and the post-implementation phase. At the end of the chapter, a table lists the outputs of every activity performed during all the phases. The fourth and last chapter then describes and evaluates a real implementation of the information model from TM Forum Frameworx in the systems architecture department of an enterprise providing telecommunication services.
This implementation, based on the created methodology, together with its successful results, then became the second contribution of this thesis.
10

Thomas, Manoj. "An Ontology Centric Architecture For Mediating Interactions In Semantic Web-Based E-Commerce Environments." VCU Scholars Compass, 2008. http://scholarscompass.vcu.edu/etd/1598.

Full text
Abstract:
Information freely generated, widely distributed and openly interpreted is a rich source of creative energy in the digital age that we live in. As we move further into this irrevocable relationship with self-growing and actively proliferating information spaces, we are also finding ourselves overwhelmed, disheartened and powerless in the presence of so much information. We are at a point where, without domain familiarity or expert guidance, sifting through the copious volumes of information to find relevance quickly turns into a mundane task often requiring enormous patience. The realization of accomplishment soon turns into a matter of extensive cognitive load, serendipity or just plain luck. This dissertation describes a theoretical framework to analyze user interactions based on mental representations in a medium where the nature of the problem-solving task emphasizes the interaction between internal task representation and the external problem domain. The framework is established by relating to work in behavioral science, sociology, cognitive science and knowledge engineering, particularly Herbert Simon’s (1957; 1989) notion of satisficing on bounded rationality and Schön’s (1983) reflective model. Mental representations mediate situated actions in our constrained digital environment and provide the opportunity for completing a task. Since assistive aids to guide situated actions reduce complexity in the task environment (Vessey 1991; Pirolli et al. 1999), the framework is used as the foundation for developing mediating structures to express the internal, external and mental representations. Interaction aids superimposed on mediating structures that model thought and action will help to guide the “perpetual novice” (Borgman 1996) through the vast digital information spaces by orchestrating better cognitive fit between the task environment and the task solution. 
This dissertation presents an ontology-centric architecture for mediating interactions in a Semantic Web-based e-commerce environment. The Design Science approach is applied for this purpose. The potential of the framework is illustrated as a functional model by using it to model the hierarchy of tasks in a consumer decision-making process as it applies in an e-commerce setting. Ontologies are used to express the perceptual operations on the external task environment, the intuitive operations on the internal task representation, and the constraint satisfaction and situated actions conforming to reasoning from the cognitive fit. It is maintained that actions themselves cannot be enforced, but when the meaning from mental imagery and the task environment are brought into coordination, it leads to situated actions that change the present situation into one closer to what is desired. To test the usability of the ontologies, we use the Web Ontology Language (OWL) to express the semantics of the three representations. We also use OWL to validate the knowledge representations and to make rule-based logical inferences on the ontological semantics. An e-commerce application was also developed to show how effective guidance can be provided by constructing semantically rich target pages from the knowledge manifested in the ontologies.
11

Alt, Raphael, Peter Wintersohle, Hartmut Schweizer, Martin Wollschläger, and Katharina Schmitz. "Interoperable information model of a pneumatic handling system for plug-and-produce." Technische Universität Dresden, 2020. https://tud.qucosa.de/id/qucosa%3A71175.

Full text
Abstract:
Commissioning of a machine still represents a very challenging operation, and most steps are still executed manually by commissioning engineers. A future goal is to support the commissioning engineers and further automate the entire integration process of a newly installed system with a minimum of manual effort. This use case is known as plug-and-produce (PnP). In this contribution, a concept from the Industrial Internet of Things is presented to improve the commissioning task for a pneumatic handling system. The system is based on a service-oriented architecture. Within this context, information models are developed to meet the requirements of PnP by providing relevant information to the commissioning process via virtual representations of the components, e.g. the asset administration shell. Finally, a draft of the entire PnP process is shown, providing a general understanding of Industrial Internet of Things fluid power systems.
12

Lopez, Stephen Joseph. "Designing the Intelligent Data Network: An Architectural Model for The Enterprise Information System." NSUWorks, 1996. http://nsuworks.nova.edu/gscis_etd/685.

Full text
Abstract:
The National Board of Medical Examiners (NBME) is a non-profit, independent organization which, since 1915, has prepared and administered qualifying test examinations that have been accepted by the legal agencies governing the practice of medicine within each state as sufficient proof of a candidate's depth of medical knowledge. The Federation of State Medical Boards of the United States (FSMB), Inc. is a non-profit organization that was founded in 1912. Its membership is comprised of the medical licensing boards of all the states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands. The United States Medical Licensing Examination (USMLE), which began in 1992 and is administered by both the NBME and the FSMB, provides a common evaluation system for all candidates for medical licensure. The USMLE is a single examination with three steps that together assess a physician's ability to apply knowledge important for effective patient care. The results of the USMLE are reported to medical licensing authorities for use in granting physicians the initial license to practice medicine in the United States, Guam, Puerto Rico, and the Virgin Islands. The NBME and the FSMB are connected by a leased-line link; this link allows both organizations to exchange data and share databases of information relevant to the medical licensure process. In order to accommodate current and future information needs, a scalable, expandable, and secure network infrastructure was required. The network architecture that resulted supported document imaging, multimedia, client/server applications, and interconnections from other networks. Using the FDDI and Ethernet protocols coupled with switching technology, over a structured cable system, a network was created facilitating access to network resources of all interconnected computing platforms from any point within the network.
The dissertation, "Designing the Intelligent Data Network: An Architectural Model for the Enterprise Information System," chronicles the design and installation of a switched virtual network infrastructure. Using multiple media-access protocols concurrently, a network was created which supported 250 workstations, 15 PC-based file servers, 3 VAX systems, asynchronous connections, and a direct connection to the Internet, with less than 5% of the total network capacity utilized.
13

Faruqui, Saif Ahmed. "Utility computing: Certification model, costing model, and related architecture development." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2756.

Full text
Abstract:
The purpose of the thesis was to propose one set of solutions to some of the challenges that are delaying the adoption of utility computing on a wider scale. The proposed components (a certification model, a costing model, and a related architecture) enable effective deployment of utility computing, efficient look-up, and comparison of service offerings of different utility computing resource centers connected to the utility computing network.
14

Kerzhner, Aleksandr A. "Using logic-based approaches to explore system architectures for systems engineering." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44748.

Full text
Abstract:
This research is focused on helping engineers design better systems by supporting their decision making. When engineers design a system, they have an almost unlimited number of possible system alternatives to consider. Modern systems are difficult to design because of a need to satisfy many different stakeholder concerns from a number of domains, which requires a large amount of expert knowledge. Current systems engineering practices try to simplify the design process by providing practical approaches to managing the large amount of knowledge and information needed during the process. Although these methods make designing a system more practical, they do not support a structured decision making process, especially at early stages when designers are selecting the appropriate system architecture, and instead rely on designers using ad hoc frameworks that are often self-contradictory. In this dissertation, a framework for performing architecture exploration at early stages of the design process is presented. The goal is to support more rational and self-consistent decision making by allowing designers to explicitly represent their architecture exploration problem and then use computational tools to perform this exploration. To represent the architecture exploration problem, a modeling language is presented which explicitly models the problem as an architecture selection decision. This language is based on the principles of decision-based design and decision theory, where decisions are made by picking the alternative that results in the most preferred expected outcome. The language is designed to capture potential alternatives in a compact form, analysis knowledge used to predict the quality of a particular alternative, and evaluation criteria to differentiate and rank outcomes. This language is based on the Object Management Group's Systems Modeling Language (SysML).
Where possible, existing SysML constructs are used; when additional constructs are needed, SysML's profile mechanism is used to extend the language. Simply modeling the selection decision explicitly is not sufficient; computational tools are also needed to explore the space of possible solutions and inform designers about the selection of the appropriate alternative. In this investigation, computational tools from the mathematical programming domain are considered for this purpose. A framework for modeling an architecture selection decision as a mixed-integer linear program (MIP) is presented. MIP solvers can then solve the problem to identify promising candidate architectures at early stages of the design process. Mathematical programming is a common optimization domain, but it is rarely used in this context because of the difficulty of manually formulating an architecture selection or exploration problem as a mathematical programming optimization problem. The formulation is presented in a modular fashion; this enables the definition of a model transformation, which is also presented, that transforms the more compact SysML representation into the mathematical programming problem. A modular superstructure representation is used to model the design space; in a superstructure, a union of all potential architectures is represented as a set of discrete and continuous variables. Algebraic constraints are added to describe both acceptable variable combinations and system behavior, allowing the solver to eliminate clearly poor alternatives and identify promising ones. The overall framework is demonstrated on the selection of an actuation subsystem for a hydraulic excavator. This example is chosen because of the variety of potential architecture embodiments and the plethora of well-known configurations which can be used to verify the results.
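The superstructure idea described in this abstract can be illustrated with a toy sketch (component names and numbers are illustrative assumptions, not the dissertation's actual MIP formulation): candidate components form a union over subsystem slots, algebraic constraints encode feasibility and behavior, and a scalar objective ranks outcomes. A problem this small can be solved by exhaustive enumeration; a MIP solver is what makes the same encoding scale.

```python
from itertools import product

# Toy superstructure for an excavator-like actuation subsystem: a union of
# candidate components per slot; selecting one candidate per slot plays the
# role of the binary selection variables. Illustrative data only.
components = {
    "pump":  [("gear_pump", 40, 60), ("piston_pump", 70, 90)],    # (name, cost, flow)
    "valve": [("basic_valve", 10, 70), ("servo_valve", 30, 100)],  # (name, cost, capacity)
}

def feasible(pump, valve, budget=90):
    # Algebraic constraints: total cost within budget, and the valve's
    # capacity must cover the pump's flow (a stand-in behavior constraint).
    return pump[1] + valve[1] <= budget and valve[2] >= pump[2]

def score(pump, valve):
    # Linear-style objective: reward flow, penalize cost.
    return pump[2] - 0.5 * (pump[1] + valve[1])

candidates = [
    (p, v)
    for p, v in product(components["pump"], components["valve"])
    if feasible(p, v)
]
best = max(candidates, key=lambda pv: score(*pv))
print(best[0][0], best[1][0])  # -> gear_pump basic_valve
```

The infeasible pairs (over budget, or valve capacity below pump flow) are pruned before ranking, which mirrors how the solver eliminates clearly poor alternatives before comparing the remaining ones.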
APA, Harvard, Vancouver, ISO, and other styles
15

Nageba, Ebrahim. "Personalizable architecture model for optimizing the access to pervasive ressources and services : Application in telemedicine." Phd thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00694445.

Full text
Abstract:
The growing development and use of pervasive systems, equipped with increasingly sophisticated functionalities and communication means, offer fantastic potentialities of services, particularly in the eHealth and Telemedicine domains, for the benefit of each citizen, patient or healthcare professional. One of the current societal challenges is to enable a better exploitation of the available services for all actors involved in a given domain. Nevertheless, the multiplicity of the offered services, the functional variety of systems, and the heterogeneity of needs require the development of knowledge models of these services, system functions, and needs. In addition, the heterogeneity of distributed computing environments, the availability and potential capabilities of the various human and material resources (devices, services, data sources, etc.) required by the different tasks and processes, the variety of services providing users with data, and the interoperability conflicts between schemas and data sources are all issues that we have to consider in our research work. Our contribution aims to empower the intelligent exploitation of ubiquitous resources and to optimize the quality of service in ambient environments. For this, we propose a knowledge meta-model of the main concepts of a pervasive environment, such as Actor, Task, Resource, Object, Service, Location, Organization, etc. This knowledge meta-model is based on ontologies describing the different aforementioned entities from a given domain and their interrelationships. We have then formalized it using a standard language for knowledge description. After that, we designed an architectural framework called ONOF-PAS (ONtology Oriented Framework for Pervasive Applications and Services), mainly based on ontological models, a set of rules, an inference engine, and object-oriented components for task management and resource processing.
Being generic, extensible, and applicable in different domains, ONOF-PAS has the ability to perform rule-based reasoning to handle various contexts of use and to enable decision making in dynamic and heterogeneous environments, while taking into account the availability and capabilities of the human and material resources required by the multiple tasks and processes executed by pervasive systems. Finally, we have instantiated ONOF-PAS in the telemedicine domain to handle the scenario of transferring people suffering from health problems in hostile environments such as high-mountain resorts or geographically isolated areas. A prototype implementing this scenario, called T-TROIE (Telemedicine Tasks and Resources Ontologies for Inimical Environments), has been developed to validate our approach and the proposed ONOF-PAS framework.
APA, Harvard, Vancouver, ISO, and other styles
16

Li, Yujiang. "Architecting model driven system integration in production engineering." Doctoral thesis, KTH, Datorsystem för konstruktion och tillverkning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207156.

Full text
Abstract:
System integration is a key enabler to maximize information value in an engineering context. The valuable information is normally represented by information models which play a decisive role in the implementation of system integration. The information models are designed to efficiently and effectively capture, process and communicate information among different functional and business units. However, use of the information models in implementing system integration is challenged by insufficient support from current settings of modeling architectures. This situation calls for new strategies to ease the use of information models. To address this challenge, this study presents a new twofold solution: Model driven system integration. It includes 1) a modeling architecture to guide the development of information models and 2) an integrated implementation process to guide the use of information models. Thus, this work improves practical applicability of an information model in its entire modeling lifecycle. The results contribute not only to the performance of modeling practices but also to improved understanding of information modeling in system integration. Implementation contexts and implementation models are introduced to develop an implementation-oriented modeling architecture. Further, the potential of information models as a knowledge base to support implementation practices is identified. To concretely discuss behaviors and structures of information models, this study adopts ISO 10303 and the related standards as major references of existing information models. Case studies on model driven system integration validate this research in scenarios concerning kinematic modeling, kinematic error modeling, cutting tools classification and product catalogue modeling.
Model driven system integration exhibits high efficiency in implementation, enhanced interoperability and increased value of information models.
APA, Harvard, Vancouver, ISO, and other styles
17

Rajsiri, Vatcharaphun. "Knowledge-based system for collaborative process specification." Thesis, Toulouse, INPT, 2009. http://www.theses.fr/2009INPT014G/document.

Full text
Abstract:
Enterprises are now operating in an environment where the market is more open, globalized, and competitive. Changing market conditions are obliging enterprises to become involved in various kinds of industrial networks in order to maintain their business efficiency. The integration of business partners depends deeply on the ability to capture and share information seamlessly among the information systems (ISs) of different enterprises. The MISE (Mediation Information System Engineering) project evolved to tackle this problem by providing an information technology solution for supporting enterprise interoperability through ISs. It is developed on the basis of MDE (Model-Driven Engineering). This dissertation addresses the business level of interoperability and the CIM (Computer Independent Model) level of MDE. Its main objective is to develop a knowledge-based system for supporting the design of collaborative processes that conform to BPMN (Business Process Modeling Notation). We propose to work at the upper level of the CIM to capture knowledge that allows us to characterize collaboration based on the perspectives and experiences of business partners. We use this knowledge, together with existing knowledge (instances of business processes) from the MIT Process Handbook, to move down to the CIM level. A prototype of our knowledge-based system was also developed in order to validate and evaluate the approach.
APA, Harvard, Vancouver, ISO, and other styles
18

Шендрик, Віра Вікторівна, Вера Викторовна Шендрик, Vira Viktorivna Shendryk, A. Boiko, and Y. Mashyn. "Mechanisms of Data Integration in Information Systems." Thesis, Sumy State University, 2016. http://essuir.sumdu.edu.ua/handle/123456789/47066.

Full text
Abstract:
The effectiveness of enterprise management is supported by the common use of information systems, and it can be increased further through flexible data management. Creating mechanisms for data integration is one of the most pressing issues in the field of information systems. The nature and complexity of integration methods depend greatly on the level of integration, the features of the various data sources and of their set as a whole, and the identified ways of integration.
APA, Harvard, Vancouver, ISO, and other styles
19

Hampshire, Maria Cláudia Santiago. "O modelo do sistema viável na concepção da arquitetura de sistemas de informação: aplicação no contexto de incidentes em instalação de pesquisa na área nuclear." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3135/tde-09022009-183414/.

Full text
Abstract:
The present work presents a study aiming to verify the applicability of the Viable System Model (VSM) in designing a robust architecture for an Information System targeting the nuclear naval area. The emphasis of the study is on assessing an alternative modeling approach for the Information System (IS) architecture specification, incorporating the set of functionalities defined by the VSM, with the purpose of strengthening this architecture. The strategy of this research is based on a bibliographic review of the VSM, Information Systems and their architecture, and the influence of those elements on the survival of organizations in an ever-changing environment. A case study is presented in which the theoretical elements of the VSM and of IS architecture are applied to the development of an IS architecture. The system selected for this application is the information system for nuclear incidents (SIN) at the installations dedicated to research and development on nuclear technology applied to a submarine propulsion system.
APA, Harvard, Vancouver, ISO, and other styles
20

Abler, Daniel Jakob Silvester. "Software architecture for capturing clinical information in hadron therapy and the design of an ion beam for radiobiology." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:c2d9cf79-7b2d-4feb-bb17-53f003a8557c.

Full text
Abstract:
Hadron Therapy (HT) exploits properties of ion radiation to gain therapeutic advantages over existing photon-based forms of external radiation therapy. However, its relative superiority and cost-effectiveness have not been proven for all clinical situations. Establishing a robust evidence base for the development of best treatment practices is one of the major challenges for the field. This thesis investigates two research infrastructures for building this essential evidence. First, the thesis develops main components of a metadata-driven software architecture for the collection of clinical information and its analysis. This architecture acknowledges the diversity in the domain and supports data interoperability by sharing information models. Their compliance to common metamodels guarantees that primary data and analysis results can be interpreted outside of the immediate production context. This is a fundamental necessity for all aspects of the evidence creation process. A metamodel of data capture forms is developed with unique properties to support data collection and documentation in this architecture. The architecture's potential to support complex analysis processes is demonstrated with the help of a novel metamodel for Markov model based simulations, as used for the synthesis of evidence in health-economic assessments. The application of both metamodels is illustrated on the example of HT. Since the biological effect of particle radiation is a major source of uncertainty in HT, in its second part, this thesis undertakes first investigations towards a new research facility for bio-medical experiments with ion beams. It examines the feasibility of upgrading LEIR, an existing accelerator at the European Organisation for Nuclear Research (CERN), with a new slow extraction and investigates transport of the extracted beam to future experiments. 
Possible configurations for the slow-resonant extraction process are identified, and designs for horizontal and vertical beam transport lines developed. The results of these studies indicate future research directions towards a new ion beam facility for biomedical research.
APA, Harvard, Vancouver, ISO, and other styles
21

Deserranno, Allen Ronald. "Enhancing the Internet of Things Architecture with Flow Semantics." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/7016.

Full text
Abstract:
Internet of Things (‘IoT’) systems are complex, asynchronous solutions often comprised of various software and hardware components developed in isolation of each other. These components function with different degrees of reliability and performance over an inherently unreliable network, the Internet. Many IoT systems are developed within silos that do not provide the ability to communicate or be interoperable with other systems and platforms. Literature exists on how these systems should be designed, how they should interoperate, and how they could be improved, but practice does not always consult literature. The work brings together a proposed reference architecture for the IoT and engineering practices for flow semantics found in existing literature with a commercial implementation of an IoT platform. It demonstrates that the proposed IoT reference architecture and flow-service-quality engineering practices when integrated together can produce a more robust system with increased functionality and interoperability. It shows how such practices can be implemented into a commercial solution, and explores the value provided to the system when implemented. This work contributes to the current understanding of how complex IoT systems can be developed to be more reliable and interoperable using reference architectures and flow semantics. The work highlights the value of integration of academic solutions with commercial implementations of complex systems.
APA, Harvard, Vancouver, ISO, and other styles
22

Kabaso, Boniface. "Health information systems interoperability in Africa: service oriented architectural model for interoperability in African context." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1413.

Full text
Abstract:
Africa has been seeing a steady increase in the Information and Communication Technology (ICT) systems deployed in health care institutions. This is evidenced by the funding that has been going into health information systems from both governments and donor organisations. Large numbers of national and international agencies, research organisations, Non-Governmental Organisations (NGOs), etc. continue to carry out studies and develop systems and procedures to exploit the power of ICT in public and private health institutions. This uncoordinated mass migration to electronic medical record systems in Africa has created a heterogeneous and complex computing environment in health care institutions, where most of the deployed systems have technologies that are local, proprietary and insular. Furthermore, the electronic infrastructure in Africa meant to facilitate the electronic exchange of information has a number of constraints. The infrastructure connectivity on which ICT applications run is still segmented. Most parts of Africa lack a reliable connectivity infrastructure; in some cases, there is no connectivity at all. This work aims at using Service Oriented Architectures (SOA) to address the problems of interoperability of systems deployed in Africa and to suggest design architectures that are able to deal with the state of poor connectivity. SOA offers better interoperability of deployed systems and re-usability of existing IT assets, including those using different electronic health standards, in a resource-constrained environment like Africa.
APA, Harvard, Vancouver, ISO, and other styles
23

Nordström, Lars. "Use of the CIM framework for data management in maintenance of electricity distribution networks." Doctoral thesis, KTH, Industriella informations- och styrsystem, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3985.

Full text
Abstract:
Aging infrastructure and personnel, combined with stricter financial constraints, have put maintenance, or more popularly Asset Management, at the top of the agenda for most power utilities. At the same time, the industry reports that this area is not properly supported by information systems. Today's power utilities have very comprehensive and complex portfolios of information systems that serve many different purposes. A common problem in such heterogeneous system architectures is data management, e.g. data in the systems do not represent the true status of the equipment in the power grid, or several sources of data are contradictory. The research presented in this thesis concerns how this industrial problem can be better understood and approached through novel use of the ontology standardized in the Common Information Model defined in IEC standards 61970 & 61968. The theoretical framework for the research is that of data management using ontology-based frameworks. This notion is not new, but is receiving renewed attention due to emerging technologies, e.g. Service Oriented Architectures, that support implementation of such ontological frameworks. The work presented is empirical in nature and takes its origin in the ontology available in the Common Information Model. The scope of the research is the applicability of the CIM ontology, not as it was intended, i.e. in systems integration, but for the analysis of business processes, legacy systems and data. The work has involved significant interaction with power distribution utilities in Sweden in order to validate the framework developed around the CIM ontology. Results from the research have been published continuously; this thesis consists of an introduction and summary and papers describing the main contribution of the work. The main contribution of the work presented in this thesis is the validation of the proposition to use the CIM ontology as a basis for the analysis of existing legacy systems.
By using the data models defined in the standards and combining them with established modeling techniques, we propose a framework for information system management. The framework is appropriate for analyzing data quality problems related to power systems maintenance at power distribution utilities. As part of validating the results, the proposed framework has been applied in a case study involving medium voltage overhead line inspection. In addition to the main contribution, the work includes, first, a classification of state-of-the-practice system support for power system maintenance at utilities and, second, an analysis and classification of how high-performance wide-area communication technologies can be used to improve power system maintenance, including improving data quality.
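The idea of using a shared ontology to surface contradictory data across legacy systems can be sketched as follows (simplified, CIM-inspired class names loosely after IEC 61970; this is a hypothetical illustration, not the standard's actual model or the thesis's implementation): once both systems' records are aligned on a common identifier drawn from the shared model, disagreements become mechanically detectable.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Simplified, CIM-inspired classes used as a common vocabulary against which
# records from two legacy systems (asset register vs. SCADA) can be compared.

@dataclass
class Substation:
    mrid: str   # master resource identifier, the shared key across systems
    name: str

@dataclass
class Breaker:
    mrid: str
    name: str
    substation: Optional[Substation] = None
    in_service: bool = True

def check_consistency(asset_db: List[Breaker], scada_status: Dict[str, bool]) -> List[str]:
    """Flag breakers whose in-service status disagrees between the asset
    register and the SCADA snapshot, matched on the shared mRID."""
    return [
        b.mrid for b in asset_db
        if b.mrid in scada_status and scada_status[b.mrid] != b.in_service
    ]

sub = Substation("SUB-1", "North")
assets = [Breaker("BRK-1", "Feeder A", sub, in_service=True),
          Breaker("BRK-2", "Feeder B", sub, in_service=False)]
scada = {"BRK-1": True, "BRK-2": True}
print(check_consistency(assets, scada))  # -> ['BRK-2']
```

The point of the sketch is the role of the ontology: it supplies the classes and the identifier on which the two data sources are aligned, so a data-quality check reduces to a comparison over the common model rather than ad hoc mapping between system-specific schemas.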
APA, Harvard, Vancouver, ISO, and other styles
24

Blom, Rikard. "Advanced metering infrastructure reference model with automated cyber security analysis." Thesis, KTH, Elkraftteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204910.

Full text
Abstract:
The European Union has set a target to install nearly 200 million smart meters across Europe before 2020. This leads to a vast increase in the flow of sensitive information for Distribution System Operators (DSOs) and, simultaneously, to raised cyber security threats. The DSO's incoming and outgoing information needs to be processed and stored by different Information Technology (IT) and Operational Technology (OT) systems, depending on the information. High demands are therefore placed on enterprise cyber security to protect the enterprise IT and OT systems; sensitive customer information and a variety of services and functionality are examples of what could be fatal to a DSO if compromised. For instance, consider someone with bad intentions tampering with your electricity while you are away on holiday: if they succeed in shutting down the house electricity, the food stored in your fridge and freezer would most likely rot, and leaking defrost water could additionally cause severe damage to walls and floors. In this thesis, a detailed reference model of the advanced metering infrastructure (AMI) has been produced to support enterprises involved in the process of implementing smart meter architectures and adapting to new requirements regarding cyber security. This has been conducted using securiCAD, a tool from foreseeti, a proactive cyber security company working with architecture management. SecuriCAD is a modeling tool for cyber security analysis in which the user can see how long it would take a professional penetration tester to penetrate the systems in the model, depending on the setup and the defense attributes of the architecture. By varying the defense mechanisms of the systems, four scenarios have been defined and used to formulate recommendations based on calculations over the advanced metering architecture. Recommendations in brief: use small and distinct network zones with strict communication rules between them; make diligent security arrangements for the system administrator's PC; and use an Intrusion Protection System (IPS) appropriately, which can delay an attacker by 46% or more.
APA, Harvard, Vancouver, ISO, and other styles
25

Hellbe, Simon, and Peter Leung. "DIGITAL TRANSFORMATION : HOW APIS DRIVE BUSINESS MODEL CHANGE AND INNOVATION." Thesis, Linköpings universitet, Industriell ekonomi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119506.

Full text
Abstract:
Over the years, information technology has created opportunities to improve and extend businesses and to start conducting business in new ways. With the evolution of IT, all businesses and industries are becoming increasingly digitized. This process, or coevolution, of IT and business coming together is called digital transformation. One of the recent trends in this digital transformation is the use of application programming interfaces (APIs). APIs are standardized digital communication interfaces, used for communication and exchange of information between systems, services and devices (such as computers, smartphones and connected machines). API communication is one of the foundational building blocks in recent disruptive technology trends such as mobile and cloud computing. The purpose of this study is to gain an understanding of the business impact that is created in digital transformation related to the use of APIs. To investigate this novel area, an exploratory study is performed in which a frame of reference with an exploratory framework is created based on established academic literature. The exploratory framework consists of three main parts which cover the research questions: Business Drivers, Business Model Change & Innovation, and Challenges & Limitations related to API-enabled digital transformation. The framework is used to gather empirical data of two types: interviews (primary data) and contemporary reports (secondary data). Interviews are performed with API-utilizing companies, consulting firms and IT solution providers, and contemporary reports are published by consulting and technology research and advisory firms. Two main business drivers are identified in the study. The first is Understanding & Satisfying Customer Needs, which derives from companies experiencing stronger and changing demands for automated, personalized, value-adding services.
This requires a higher degree of integration across channels and organizations. The second driver is Business Agility, which derives from higher requirements on adapting to changing environments while maintaining operational efficiency. Cost Reduction is also mentioned as a third, secondary driver: a positive side effect in combination with the other drivers. The identified impact on business models is that business model innovation mostly happens in the front end of the business model, towards customers. Several examples also exist of purely API-enabled businesses that sell services or manage information exchanges over APIs. The challenges and limitations identified are mostly classic challenges of using IT in business rather than specific to the use of APIs; the general consensus is that IT and business need to become more integrated, and that strategy and governance for API initiatives need to be established.
APA, Harvard, Vancouver, ISO, and other styles
26

Šebelík, Ondřej. "Návrh integrovaného systému pro podporu marketingového řízení produktového portfolia." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-192703.

Full text
Abstract:
Information is the currency of our world. Making the right decisions requires having reliable, comprehensive and thorough information at hand at all times. The tools that can provide it? Information systems. However, the purpose of today's information systems is more than just to collect and store information: they need to integrate numerous data sources (both internal and external), identify essential pieces of information and set them apart from the rest, and provide the people relying on them with a clear picture of the situation. In short, they should offer a unified, user-friendly, well-structured and comprehensive view of business reality. This paper focuses on the first stage of an information system design. Concerned primarily with the business-analytical phases of an IT project, it presents an insight into the process of exploring and describing business reality through conceptual data modeling and the articulation of an architecture vision.
APA, Harvard, Vancouver, ISO, and other styles
27

Kaldis, Emmanuel. "Designing a knowledge management architecture to support self-organization in a hotel chain." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/designing-a-knowledge-management-architecture-to-support-selforganization-in-a-hotel-chain(d7e51581-b634-49e1-8d34-4bb29b958774).html.

Full text
Abstract:
Models are incredibly insidious; they slide undetected into discussions and then dominate the way people think. Since Information Systems (ISs), and particularly Knowledge Management Systems (KMSs), are socio-technical systems, they unconsciously embrace the characteristics of the dominant models of management thinking. Thus, their limitations can often be attributed to the deficiencies of the organizational models they aim to support. Through the case study of a hotel chain, this research suggests that contemporary KMSs in the hospitality sector are still grounded in the assumptions of the mechanistic organizational model, which conceives an organization as a rigid hierarchical entity governed from the top. Despite the recent technological advances in terms of supporting dialogue and participation between members, organizational knowledge is still transferred vertically: from the top to the bottom or from the bottom to the top. A number of limitations still exist in terms of effectively supporting the horizontal transfer of knowledge between the geographically distributed units of an organization. Inspired by the key concepts of the more recent complex systems model, frequently referred to as complexity theories, a Knowledge Management Architecture (KMA) is proposed that aims to re-conceptualize existing KMSs towards conceiving an organization as a set of self-organizing communities of practice (CoPs). In every such CoP, order is created from the dynamic exchange of knowledge between the structurally similar community members. Thus, the focus of the KMA is placed on systematically capturing for reuse the architectural knowledge created with every initiative for change, and on sharing such knowledge with the rest of the members of the CoP. A KMS was also developed to support the dynamic dimensions that the KMA proposes.
The KMS was then applied in the case of the hotel chain, where it brought significant benefits that constitute evidence of an improved self-organizing ability. The previously isolated hotel units residing in distant regions could now easily trace, and also reapply, changes undertaken by the other community members. Top management's intervention to promote change was reduced, while the pace of change increased. Moreover, organizational cohesion, the integration of new members and the level of management alertness were enhanced. The case of the hotel chain is indicative. It is believed that the proposed KMA can be applicable to geographically distributed organizations operating in other sectors too. At the same time, this research contributes to the recent discourse between the fields of IS and complexity by demonstrating how fundamental concepts from complexity such as self-organization, emergence and edge-of-chaos can be embraced by contemporary KMSs.
APA, Harvard, Vancouver, ISO, and other styles
28

Mukherjee, Somshree. "Ranking System for IoT Industry Platform." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204571.

Full text
Abstract:
The Internet of Things (IoT) has seen a huge growth spurt in the last few years, which has resulted in the need for more standardised IoT technology. Because of this, numerous IoT platforms have sprung up that offer a variety of features and use different technologies which may not necessarily be compliant with each other or with other technologies. Companies that wish to enter the IoT market are in constant need of finding the most suitable IoT platform for their business, and have a certain set of requirements that need to be fulfilled by the IoT platform in order for their application to be fully functional. The problem that this thesis project addresses is the lack of a standardised procedure for selecting IoT platforms. The project aims to suggest a list of requirements, derived from the available IoT architecture models, that must be followed by IoT applications in general; a subset of these requirements may be specified by the companies as essential for their application. This thesis project also aims at the development of a Web platform to automate this process by listing the requirements on a website, allowing companies to input their choices, and accordingly showing them the list of IoT platforms that comply with their input requirements. A simple Weighted Sum Model is used to rank the search results, prioritising the IoT platforms by the features they provide. This thesis project also infers the best available IoT architectural model, based on a comparative study of three major IoT architectures with respect to the proposed requirements. Hence the project concludes that this Web platform will ease the process of searching for the right IoT platform, and the companies can therefore make an informed decision about the kind of IoT platform that they should use, thereby reducing their time spent on market research and hence their time-to-market.
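The Weighted Sum Model mentioned in this abstract can be sketched in a few lines. The platform names, feature scores and weights below are hypothetical placeholders for illustration, not data from the thesis.

```python
# Minimal Weighted Sum Model (WSM) ranking sketch: each platform gets a score
# per requirement, each requirement has a weight reflecting its importance to
# the company, and platforms are ranked by the weighted sum of their scores.
# All names and values are invented for illustration.

def wsm_rank(platforms, weights):
    """Rank platforms by the weighted sum of their feature scores."""
    scored = []
    for name, scores in platforms.items():
        total = sum(weights[feature] * value for feature, value in scores.items())
        scored.append((name, total))
    # Highest weighted sum first
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical company priorities (weights sum to 1) and platform scores in [0, 1].
weights = {"security": 0.5, "scalability": 0.3, "cost": 0.2}
platforms = {
    "PlatformA": {"security": 0.9, "scalability": 0.6, "cost": 0.4},
    "PlatformB": {"security": 0.5, "scalability": 0.9, "cost": 0.8},
}

ranking = wsm_rank(platforms, weights)
```

With these made-up numbers, PlatformA ranks first (0.5·0.9 + 0.3·0.6 + 0.2·0.4 = 0.71) ahead of PlatformB (0.68), showing how the weights let a security-focused company override a feature-richer but less secure alternative.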
APA, Harvard, Vancouver, ISO, and other styles
29

Burns, Jessie. "An Investigation of Multiple Integration Techniques for Information Systems: A Model for Integrating Data Warehousing, ERP, and SOA in Practice." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/110.

Full text
Abstract:
This study addressed issues associated with implementing multiple data integrators, whether implementing the same data integrator multiple times or implementing multiple different data integrators. It was shown that single data integrator implementations were failing at high rates. Environments were complicated because there were more acquisitions and mergers than in the past, companies were required to remain competitive while implementations were taking place, and implementation time frames were possibly short. The main goal of this study was to develop an agile, sturdy, and rigorous model that addressed the needs of companies that found themselves needing to implement two or more data integrators and possibly business initiatives. This researcher developed a model and a questionnaire. The questionnaire was used to gather data from managers and practitioners in the information technology industry. The collected information was expected to show a consensus about the proposed JBurns Systems Implementation Model (JBSIM). Analysis showed that industry participants used various combinations of the proposed data integrators and that they supported the proposed JBSIM. This model allowed for the successful implementation of the multiple data integrators used in this research. As the model matures, it has the potential to accommodate other data integrators not used in this research, as well as corporate initiatives.
APA, Harvard, Vancouver, ISO, and other styles
30

Hromádková, Pavla. "Multiplatformní přístup k databázovým aplikacím." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235915.

Full text
Abstract:
This thesis deals with the possibilities of accessing database systems from different devices. It contains the analysis, design and description of the implementation of an information system following the Model-View-Controller (MVC) design pattern. The system design is expressed in the UML language. The PHP programming language and the MySQL relational database server were used during development of the system. The user interface was implemented with the HTML and WML technologies, and the final look of the application was achieved using the CSS web technology. The technical report describes multilayer architecture and its advantages in the creation of information systems. It deals with the MVC design pattern, which is useful where the application's presentation, its communication with the database and its application logic need to be separated. The work further describes the analysis, design and implementation of a multiplatform application for the Drak travel agency. The system can currently be accessed from a mobile phone or a web browser, and the application is easily extensible to other platforms.
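The separation this abstract describes (presentation, application logic, data access) can be illustrated with a minimal MVC sketch. The class and field names below are illustrative assumptions, not taken from the thesis, and an in-memory list stands in for the database layer.

```python
# Minimal Model-View-Controller sketch: the view knows nothing about storage,
# the model knows nothing about presentation, and the controller wires them
# together. All names are illustrative.

class TripModel:
    """Model: owns the data and the (here in-memory) storage."""
    def __init__(self):
        self._trips = []

    def add_trip(self, destination, price):
        self._trips.append({"destination": destination, "price": price})

    def all_trips(self):
        return list(self._trips)

class TripView:
    """View: renders model data; separate views could target HTML, WML, etc."""
    def render(self, trips):
        return "\n".join(f"{t['destination']}: {t['price']} CZK" for t in trips)

class TripController:
    """Controller: translates user actions into model updates and view calls."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def create_trip(self, destination, price):
        self.model.add_trip(destination, price)

    def list_trips(self):
        return self.view.render(self.model.all_trips())

controller = TripController(TripModel(), TripView())
controller.create_trip("Prague", 1500)
output = controller.list_trips()
```

Because only the view touches presentation, supporting a new device (say, a WML browser alongside an HTML one) means adding another view class, which is exactly the kind of extensibility the abstract claims for the MVC approach.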
APA, Harvard, Vancouver, ISO, and other styles
31

Macagnano, Marco. "An integrated systems-design methodology and revised model of sustainable development for the built environment in the Information Age." Thesis, University of Pretoria, 2018. http://hdl.handle.net/2263/66045.

Full text
Abstract:
This thesis was developed to investigate the current models of sustainable development and architectural working and design practice and process, in order to respond to the challenges of the current era, defined as the Information Age. This thesis proposes a new model of sustainable development aligned to architecture and the Information Age, and a new integrated systems-design methodology to support it. Buildings were defined by Le Corbusier in 1927 as ‘machines for living in’, on the premise that these buildings facilitated our day-to-day user experience. The role of architecture as a facilitator of a sustainable existence is therefore subject to continued investigation. While there has been increasing interest in environmental issues and ‘green building’, built environments have consequently failed to holistically integrate core sustainable development principles in architecture. When compared to the definition of sustainable development in the UN Brundtland Report of 1987, further research into an architectural design methodology is required to enable and plan for the long-term success of our built environments for current and, importantly, future generations. The practices and production of architecture risk being limited to reactively monitoring the design and construction processes at fixed moments in time, usually after the problem has presented itself. This is representative of localised, yet much publicised, trends involving quantifiable rating systems for building performance. This does not contribute to the long-term sustainability of the architectural product, nor to the core principle of sustainable development to adequately meet the needs of current and future generations. The gravitation towards these easily followed, yet limited-in-scope, checklist processes is symptomatic of concepts of sustainable development remaining too broad and fragmented to facilitate focused, industry-appropriate implementation and design.
The digital and information-based revolution has arrived, and humankind has now progressed to the point where constant and pervasive access to information and communication in a world of connected systems has changed the way we live and work. This is occurring at an exponential rate within what have been termed ‘knowledge-based societies’. Furthermore, the influence of the Information Age continues to manifest itself in the built environment through advancement of concepts and initiatives such as Smart Cities, intelligent buildings, and the Internet of Things. However, architectural approach and its emphasis on the building as a finite product comes at the expense of a holistic and integrated systems approach, and therefore requires investigation towards a revised design methodology. This thesis will begin by investigating the concept of sustainable development from its original inception to existing interpretations, and will interrogate its continued significance as a decades-old concept to the Information Age. This will be undertaken on the basis that sustainable development primarily aligns itself to the needs of humankind (current and future generations) and as such remains timeless as a core concept. However, the criteria that define sustainable development require investigation based on: a) their suitability towards human need in the context of knowledge-based societies and the Information Age, as well as b) their appropriateness for focused implementation in the scope of the built environment. In this aim, newly proposed criteria will be assimilated into a revised model for sustainable development, from which a methodology for design is developed. This will address the nature of the architectural process towards the creation of sustainable building solutions as a function of a systems approach, rather than a product approach. 
An integrated systems-design methodology is proposed, promoting the evolution of sustainable development theory in architecture for greater applicability to the Information Age. This systems-design methodology proactively identifies criteria for solving a given problem and the development of alternative solutions, while the proposed revised model for sustainable development is integrated to achieve a holistic building solution based on a systems process. This is inclusive of product (systems solution) delivery into the operation phase. The designer and project information model therefore transition into ‘information custodian’ and repository for knowledge gathering and exchange respectively, to the benefit of current and future stakeholders. This is addressed through various stages in design development and implementation, which apply contextually-based requirements of proposed sustainable development criteria, while catering for aspects of future flexibility, user experience, and knowledge-based development. This methodology expects the design practitioner to apply multi-dimensional evaluation and assessment tools at their discretion, and accommodate changing project dynamics over its life cycle. This implementation will benefit from future research and the introduction of new technologies to aid the process. This may furthermore be affected by new regulatory policy and guidelines affecting architects and the built environment. Thesis (PhD)--University of Pretoria, 2018.
APA, Harvard, Vancouver, ISO, and other styles
32

Andréasson, Magnus. "Towards a Digital Analytics Maturity Model : A Design Science Research Approach." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-13884.

Full text
Abstract:
Digital analytics is the name given to the collection of technologies that use various techniques to analyse digital channels (web pages, email and even offline data) in order to understand customer behaviour and intentions. Digital analytics has become a very important component of largely web-based system environments, where it supports and facilitates business and decision-making for organizations. But how well are these technologies applied, what does the digital transformation unfolding within organizations look like, and how can this digital maturation process be measured? This study applies a Design Science Research approach to meet the goal of developing a Digital Analytics Maturity Model (DAMM) suitable for small to medium-sized enterprises, with an expert panel of six leading researchers in maturity research and digital analytics engaged in the form of a Delphi study. The results of the study show, among other things, that organizational aspects play an important role in digital analytics, and that the development of a functional DAMM that is ready to be put into use is feasible.
APA, Harvard, Vancouver, ISO, and other styles
33

Jonsson, Kerstin. "Systemmetaforik : Språk och metafor som verktyg i systemarkitektens praktik." Thesis, Södertörns högskola, Institutionen för kultur och lärande, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-24251.

Full text
Abstract:
The system architect's practice is mainly about interpreting, describing and structuring the processes and information of an enterprise in order to create a foundation for change and development, often supported by IT systems. The profession is traditionally regarded as an art of technical engineering. But the problems I face as an architect are not exclusively about designing technical systems and communication between machines, but just as much about handling challenges related to inter-subjective communication between human beings in situations of complex interaction. What happens if we focus on this second aspect of the practical knowledge of the architect? This essay is about the role of language and communication in the context of a system development project. The author uses metaphors in a fictional context as a creative method to visualize and mediate different aspects of the architect's professional role and practice. In that sense the text utilizes the more experimental form offered by the essay in order to explore its own expressive possibilities. The theoretical material of this essay is based on the language-philosophical tradition developed by Ludwig Wittgenstein and Gilbert Ryle. Starting out from these two thinkers, the author reasons about the importance that language and contextual understanding have for the practical knowledge of the system architect. Further on, the essay weaves in thoughts from Thomas Kuhn, Peter Naur and Donald Schön with the purpose of exploring the role of the metaphor, improvisation and creative communication as tools in the practice of the system architect.
APA, Harvard, Vancouver, ISO, and other styles
34

Lyu, Minhu. "Towards Control as a Service models and architecture for the Industry 4.0." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI048.

Full text
Abstract:
To fit the renewed globalized economic environment, enterprises, and mostly SMEs, have to develop new networked and collaborative strategies, focusing on networked value creation (instead of the classical value-chain vision) and fitting the "Blue Ocean" context of innovative product and service development. Even if collaborative organizations have been studied for decades, the closer connection of information systems involved in the so-called "Industry 4.0", developed by leading industries in Europe, the US and Asia, requires new IT models to support agile and evolving collaborative Business Process (BP) enactment, integrating both traditional Information Systems (IS) and production control processes. By now, these product/service ecosystems are mostly supported by software services, which span multiple organizations and providers as well as multiple cloud-based execution environments, increasing the call for openness, agility, interoperability and trust for both production and Information System organization. These requirements are well supported by SOA, Web 2.0 and XaaS technologies for Information Systems. Taking advantage of IoT, service and Cloud technologies, the development of the Cloud of Things (CoT) changes the way control applications are engineered and developed, moving from the dedicated design and development of control applications to a Control as a Service vision. This vision requires a new architecture to connect physical and logical objects, as well as the integration of basic control patterns to compose a consistent control service orchestration. To meet this challenge, we propose a multi-layer Control as a Service architecture that describes control systems in a holistic way.
Our control service model is built according to an event-driven orchestration strategy. Thanks to the integration of a context manager, which continuously analyses the system environment as well as the control system's behaviour, these context-aware control services can be deployed.
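The event-driven orchestration idea described in this abstract can be illustrated with a minimal publish/subscribe sketch. The topic names, the handler and the threshold below are invented for illustration and are not taken from the thesis.

```python
# Minimal event-driven sketch: control services subscribe to event topics and
# are invoked when matching events are published, instead of polling devices
# directly. All names and values are illustrative.

class EventBus:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, topic, handler):
        """Register a control service as a handler for a topic."""
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        """Deliver an event to every service subscribed to its topic."""
        return [handler(payload) for handler in self._handlers.get(topic, [])]

bus = EventBus()

# A hypothetical "control service" reacting to temperature sensor events.
def overheat_guard(event):
    return "shutdown" if event["temperature"] > 80 else "ok"

bus.subscribe("sensor/temperature", overheat_guard)

normal = bus.publish("sensor/temperature", {"temperature": 25})
alarm = bus.publish("sensor/temperature", {"temperature": 95})
```

Because services only meet through topics, a context manager could add or remove subscriptions at runtime without touching the services themselves, which is the kind of decoupling an event-driven orchestration strategy aims for.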
APA, Harvard, Vancouver, ISO, and other styles
35

Bengtsson, David, and Johan Lindberg. "Enterprise Architecture - En modell för dess innehåll och uppbyggnad." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-5852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Štacha, Jan. "Analýza a návrh informačního systému elektronického vzdělávání." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-236840.

Full text
Abstract:
This thesis is focused on the IBM Rational Unified Process methodology, which represents a complex and robust approach to software development and the software lifecycle. The methodology is well described and every step of the software lifecycle is predictable, which is why it is becoming used in many software development organizations. The main goal of this thesis is a thorough description of this methodology and the creation of the common outputs of the inception and elaboration phases for an e-learning information system. IBM Rational Unified Process is called a use-case-driven approach, which is why the description of all use cases is emphasized.
APA, Harvard, Vancouver, ISO, and other styles
37

España, Cubillo Sergio. "METHODOLOGICAL INTEGRATION OF COMMUNICATION ANALYSIS INTO A MODEL-DRIVEN SOFTWARE DEVELOPMENT FRAMEWORK." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/14572.

Full text
Abstract:
It is widely recognised that information and communication technologies development is a risky activity. Despite the advances in software engineering, many software development projects fail to satisfy the clients' needs, to deliver on time or to stay within budget. Among the various factors considered to cause failure, an inadequate requirements practice stands out. Model-driven development is a relatively recent paradigm with the potential to solve some of the long-standing problems of software development. Models play a paramount role in model-driven development: several modelling layers allow defining views of the system under construction at different abstraction levels, and model transformations facilitate the transition from one layer to the other. However, how to effectively integrate requirements engineering within model-driven development is still an open research challenge. This thesis integrates Communication Analysis, a communication-oriented business process modelling and requirements engineering method for information systems development, and the OO-Method, an object-oriented model-driven software development method that provides automatic software generation from conceptual models. We first provide a detailed specification of Communication Analysis intended to facilitate the integration; among other improvements to the method, we build an ontology-based set of concept definitions in which to ground the method, we provide precise methodological guidelines, we create a metamodel for the modelling languages included in the method, and we provide tools to support the creation of Communication Analysis requirements models. Then we perform the integration by providing a technique to systematically derive OO-Method conceptual models from Communication Analysis requirements models.
The derivation technique is offered in two flavours: a set of rules to be manually applied by a human analyst, and an ATL model transformation that automates this task. España Cubillo, S. (2011). METHODOLOGICAL INTEGRATION OF COMMUNICATION ANALYSIS INTO A MODEL-DRIVEN SOFTWARE DEVELOPMENT FRAMEWORK [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/14572
APA, Harvard, Vancouver, ISO, and other styles
38

Kosa, Nenadić. "Razvoj modularnih arhitektura veb aplikacija u pametnim mrežama." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=107530&source=NDLTD&language=en.

Full text
Abstract:
This paper explores semantic web technologies and their application in power systems. It analyzes the CIM standards adopted by the IEC (61970-501 and 61970-552) and their limitations, assesses the existing CIM methodology based on these standards, and provides suggestions for improving this methodology in accordance with current achievements in the field of semantic web technologies. At the same time, the modularity of web applications based on the CIM data model is considered from the semantic web standpoint, and a modular web application architecture based on CIM semantics is proposed.
APA, Harvard, Vancouver, ISO, and other styles
39

Chitic, Stefan-Gabriel. "Middleware and programming models for multi-robot systems." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI018/document.

Full text
Abstract:
Despite many years of work in robotics, there is still a lack of established software architecture and middleware for multi-robot systems. A robotic middleware should be designed to abstract the low-level hardware architecture and to facilitate communication and the integration of new software. This PhD thesis focuses on middleware for multi-robot systems and on how existing frameworks can be improved for fleet purposes by adding multi-robot coordination services together with development and massive-deployment tools. We expect robots to become increasingly useful as they can take advantage of data pushed from other external devices in their decision making instead of merely reacting to their local environment (sensors, cooperating robots in a fleet, etc.). This thesis first evaluates one of the most recent middlewares for mobile robots, the Robot Operating System (ROS), and continues with a state of the art on the middlewares commonly used in robotics.
Based on these conclusions, we propose an original contribution in the multi-robot context, called SDfR (Service Discovery for Robots), a service discovery mechanism for robots. The main goal is to propose a mechanism that allows highly mobile robots to keep track of the reachable peers inside a fleet while using an ad-hoc infrastructure. Another objective is to propose a network configuration negotiation protocol. Because of the mobility of robots, classical peer-to-peer network configuration techniques are not suitable. SDfR is a highly dynamic, adaptive, and scalable protocol adapted from the Simple Service Discovery Protocol (SSDP). We conducted a set of experiments, using a fleet of Turtlebot robots, to measure and show that the overhead of SDfR is limited. The last part of the thesis focuses on a programming model based on timed automata. This type of programming has the benefit of a model that can be verified and simulated before the application is deployed on real robots. In order to enrich and facilitate the development of robotic applications, a new programming model based on timed-automata state machines is proposed, called ROSMDB (Robot Operating system Model Driven Behaviour). It provides model checking at development time and at runtime. This contribution is composed of several components: a graphical interface to create models based on timed automata, an integrated model checker based on UPPAAL, and a code skeleton generator. Moreover, a ROS-specific framework is proposed to verify the correctness of the execution of the models and to trigger alerts. Finally, we conduct two experiments, one with a fleet of Parrot drones and one with Turtlebots, to illustrate the proposed model and its ability to check properties.
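The peer-tracking idea behind SDfR, adapted from SSDP, can be illustrated with a small sketch: each robot periodically announces itself, and every robot keeps a registry that expires peers whose announcements stop arriving. The class name, the TTL value, and the simplified time handling are all assumptions for illustration; the actual SDfR protocol is richer than this.

```python
# Sketch of SSDP-style peer tracking in a robot fleet (illustrative only).
# Announcements refresh a last-seen timestamp; peers not refreshed within
# the TTL are considered unreachable and dropped.

class PeerRegistry:
    def __init__(self, ttl: float = 5.0):
        self.ttl = ttl          # seconds an announcement stays valid
        self._last_seen = {}    # peer id -> timestamp of last announcement

    def announce(self, peer_id: str, now: float) -> None:
        # Called whenever an announcement (e.g. a multicast NOTIFY) arrives.
        self._last_seen[peer_id] = now

    def reachable(self, now: float) -> set:
        # Expire peers whose last announcement is older than the TTL,
        # then return the ids still considered reachable.
        self._last_seen = {p: t for p, t in self._last_seen.items()
                           if now - t <= self.ttl}
        return set(self._last_seen)
```

A robot that drives out of radio range simply stops refreshing its entry and disappears from the registry after one TTL, which is how such protocols tolerate the mobility the abstract describes.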
APA, Harvard, Vancouver, ISO, and other styles
40

Torzynski, Marc. "Réseaux de neurones formels : propriétés du modèle de Hopfield, réalisations électroniques et optiques." Université Louis Pasteur (Strasbourg) (1971-2008), 1988. http://www.theses.fr/1988STR13230.

Full text
Abstract:
We study a neuro-mimetic information-processing model that can be used as an associative memory. In the first part, we study the properties of the model and quantify its memory capacity. By numerical simulation, we assess the network's tolerance to its own deficiencies and its associative abilities. We establish several theoretical results on information content using a probabilistic approach to the model. In the second part, we study possible implementations of the model.
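The associative-memory behaviour of the Hopfield model studied in this thesis can be sketched in a few lines: patterns of ±1 values are stored with the Hebbian rule, and a corrupted pattern is recalled by repeated threshold updates. This is a generic textbook sketch, not the thesis's electronic or optical realisation; all names are illustrative.

```python
# Minimal Hopfield associative memory: Hebbian storage and recall.

def train(patterns):
    # Build the weight matrix with the Hebbian rule (zero diagonal).
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, sweeps=10):
    # Asynchronous updates: each unit takes the sign of its local field.
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s
```

Flipping one bit of a stored pattern and running `recall` drives the network back to the stored pattern, which is the tolerance-to-deficiencies property the abstract quantifies.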
APA, Harvard, Vancouver, ISO, and other styles
41

Macedo, Bruno Armindo Rodrigues de Sousa. "Um modelo de arquitectura de Sistemas de Informação." Master's thesis, Instituto Superior de Economia e Gestão, 2009. http://hdl.handle.net/10400.5/1855.

Full text
Abstract:
Masters in Information Systems Management
Public universities, due to their functional characteristics and their position in society, are subject to internal changes at the organizational level and to external changes in the social-economic environment they operate in.
Market globalization and the transformation of the academic community, together with the changes brought about by the Bologna Process, pressure universities to continuously reorganize and redesign their processes and business strategies. Traditional management information systems risk failing to give the necessary answers to this challenge, since they show multiple insufficiencies and weaknesses that have been studied academically. In this study, after a literature review, we analyze the relevant part played by the use of new methods for the development of an Information Systems Architecture. Based on a case study of ISEG's Divisão de Sistemas de Informação (DSI), we intend to show that the development of an Information Systems Architecture, as well as the use of computer tools in this process, led to a better representation of the existing reality and made a sustained improvement of the respective organizational processes possible. The method used also presents some innovative aspects relative to the existing literature, namely the integration of the competences dimension into the Enterprise Architecture model.
APA, Harvard, Vancouver, ISO, and other styles
42

Conte, Simone Ivan. "The Sea of Stuff : a model to manage shared mutable data in a distributed environment." Thesis, University of St Andrews, 2019. http://hdl.handle.net/10023/16827.

Full text
Abstract:
Managing data is one of the main challenges in distributed systems and computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, or synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours. This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable self-describing location-independent entities that allow the construction of a distributed system where data is accessible and organised irrespective of its location, easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and using user-defined rules to automatically manage data across multiple nodes.
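The core SOS idea of immutable, self-describing, location-independent entities can be illustrated with a tiny content-addressed store: each entity is named by a hash of its content, and a "mutation" creates a new version that records its predecessor. This is a hedged sketch of the general technique, not the SOS implementation; the class and field names are invented.

```python
# Sketch of a content-addressed store of immutable, versionable entities.
import hashlib
import json

class Store:
    def __init__(self):
        self._blobs = {}

    def put(self, content: bytes, previous=None) -> str:
        # The entity's id is the hash of its content plus its lineage,
        # so identical entities get identical, location-independent names.
        record = {"content": content.hex(), "previous": previous}
        data = json.dumps(record, sort_keys=True).encode()
        guid = hashlib.sha256(data).hexdigest()
        self._blobs[guid] = record
        return guid

    def get(self, guid: str) -> dict:
        return self._blobs[guid]
```

Because names derive from content rather than location, any node holding a copy can serve it, and following the `previous` links recovers how the data evolved over time.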
APA, Harvard, Vancouver, ISO, and other styles
43

Vianden, Matthias [Verfasser]. "Systematic Metric Systems Engineering : Reference Architecture and Process Model / Matthias Vianden." Aachen : Shaker, 2017. http://d-nb.info/113817825X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Tabatabaie, Malihe. "Towards process models for goal-based development of enterprise information systems architectures." Thesis, University of York, 2011. http://etheses.whiterose.ac.uk/2159/.

Full text
Abstract:
Enterprises are organisations with multiple business processes; they often use Enterprise Information Systems (EIS) to support these business processes. The concept of an EIS has arisen from the need to deal with the increasingly volatile requirements of modern large-scale organisations. EIS are growing in use and are now being used to support government, health care, and non-profit / non-governmental organisations. The development of EIS has been affected significantly by the complexity and size of enterprises and their business processes, in addition to the influences of economic, social, and governmental factors. There are many challenges associated with building EIS. Three critical ones identified in the literature are: adequately satisfying organisational requirements; building valid and stakeholder-acceptable business processes; and providing repeatable and rigorous approaches to establish a shared understanding of EIS goals. These challenges are difficult to cope with because of the need to deal with different goals, changes in goals, and the problem of how to transform these goals into system requirements and, ultimately, into an EIS architecture. This thesis contributes a rigorous approach for identifying and describing the enterprise-level requirements of IT developers, managers, and other stakeholders of an enterprise. The approach provides two modelling and tool-supported processes to help establish a rigorous model of EIS goals. It also provides support for transforming goals into a strategic EIS architecture. The approach presented in the thesis is based on the concepts of goal-oriented software engineering. The thesis presents a novel process model, KAOS-B, that extends goal-oriented software engineering approaches with new concepts and techniques for EIS. Further, to support the transition from requirements to an EIS architecture, an EIS Architecture Process Model (EAPM) is designed and evaluated.
Using KAOS-B and EAPM in concert provides a rigorous, repeatable and tool-supported approach for analysing, and designing a strategic EIS architecture. The thesis illustrates the approach with two substantial examples from the health informatics and critical systems domain.
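The goal-oriented refinement that underlies KAOS-style approaches can be sketched as a tree: a high-level enterprise goal is refined into subgoals, and the leaves become candidate system requirements. The goal names below are invented for illustration and do not come from the thesis's case studies.

```python
# Sketch of goal refinement: leaves of the tree are candidate requirements.

class Goal:
    def __init__(self, name, subgoals=()):
        self.name = name
        self.subgoals = list(subgoals)

def leaf_requirements(goal):
    # Depth-first walk: goals with no further refinement are the ones
    # assignable to the system as requirements.
    if not goal.subgoals:
        return [goal.name]
    reqs = []
    for sub in goal.subgoals:
        reqs.extend(leaf_requirements(sub))
    return reqs

root = Goal("Maintain[PatientRecordsConsistent]", [
    Goal("Achieve[RecordUpdatedAfterVisit]"),
    Goal("Avoid[UnauthorisedRecordAccess]", [
        Goal("Achieve[UserAuthenticated]"),
        Goal("Achieve[AccessLogged]"),
    ]),
])
```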
APA, Harvard, Vancouver, ISO, and other styles
45

Molinari, Wilian. "Mental models for decision-making in remote healthcare services : A case study." Thesis, Internationella Handelshögskolan, Jönköping University, IHH, Informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50216.

Full text
Abstract:
Mental models are an important theme within information systems. They show how people understand reality and make decisions, and how information flows in order for them to do so. Making such decisions is particularly challenging when it comes to determining someone's health, making it a particularly delicate matter. In that context, this thesis takes BetterDoc as its object of study: an organization situated in Cologne, Germany, with an increasingly positive record of providing remote healthcare services and allowing patients to have the adequate treatment for their condition. The study was based on the theory of mental models to bring to light the implicit patterns present in decision making in that context. This was done by conducting qualitative interviews with the staff of the organization, across different teams, and synthesizing the findings in a common model that shows points of decision and the supporting information. Those findings are useful for identifying points that need to be structured to provide clarity and understanding, increasing the synergy and transparency of a socio-technical system that can influence the outcomes of healthcare for many people.
APA, Harvard, Vancouver, ISO, and other styles
46

Hruška, David. "Návrh změn identity managementu v podniku." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2018. http://www.nusl.cz/ntk/nusl-378329.

Full text
Abstract:
This diploma thesis focuses on a proposal to implement identity management changes in a particular company. The theoretical part presents the basic concepts and a detailed description of identity management. The thesis also describes an analysis of the current state of information security in the company, a risk analysis, and the selection of measures to minimize the risks found. At the end of the thesis, the proposed changes are presented, together with a procedure and timetable for implementing the selected measures.
APA, Harvard, Vancouver, ISO, and other styles
47

Müller, Robert. "Event-Oriented Dynamic Adaptation of Workflows: Model, Architecture and Implementation." Doctoral thesis, Universitätsbibliothek Leipzig, 2004. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-37372.

Full text
Abstract:
Workflow management is widely accepted as a core technology to support long-term business processes in heterogeneous and distributed environments. However, conventional workflow management systems do not provide sufficient flexibility support to cope with the broad range of failure situations that may occur during workflow execution. In particular, most systems do not allow to dynamically adapt a workflow due to a failure situation, e.g., to dynamically drop or insert execution steps. As a contribution to overcome these limitations, this dissertation introduces the agent-based workflow management system AgentWork. AgentWork supports the definition, the execution and, as its main contribution, the event-oriented and semi-automated dynamic adaptation of workflows. Two strategies for automatic workflow adaptation are provided. Predictive adaptation adapts workflow parts affected by a failure in advance (predictively), typically as soon as the failure is detected. This is advantageous in many situations and gives enough time to meet organizational constraints for adapted workflow parts. Reactive adaptation is typically performed when predictive adaptation is not possible. In this case, adaptation is performed when the affected workflow part is to be executed, e.g., before an activity is executed it is checked whether it is subject to a workflow adaptation such as dropping, postponement or replacement. In particular, the following contributions are provided by AgentWork: A Formal Model for Workflow Definition, Execution, and Estimation: In this context, AgentWork first provides an object-oriented workflow definition language. This language allows for the definition of a workflow’s control and data flow. Furthermore, a workflow’s cooperation with other workflows or workflow systems can be specified. Second, AgentWork provides a precise workflow execution model. 
This is necessary, as a running workflow usually is a complex collection of concurrent activities and data flow processes, and as failure situations and dynamic adaptations affect running workflows. Furthermore, mechanisms for the estimation of a workflow’s future execution behavior are provided. These mechanisms are of particular importance for predictive adaptation. Mechanisms for Determining and Processing Failure Events and Failure Actions: AgentWork provides mechanisms to decide whether an event constitutes a failure situation and what has to be done to cope with this failure. This is formally achieved by evaluating event-condition-action rules where the event-condition part describes under which condition an event has to be viewed as a failure event. The action part represents the necessary actions needed to cope with the failure. To support the temporal dimension of events and actions, this dissertation provides a novel event-condition-action model based on a temporal object-oriented logic. Mechanisms for the Adaptation of Affected Workflows: In case of failure situations it has to be decided how an affected workflow has to be dynamically adapted on the node and edge level. AgentWork provides a novel approach that combines the two principal strategies reactive adaptation and predictive adaptation. Depending on the context of the failure, the appropriate strategy is selected. Furthermore, control flow adaptation operators are provided which translate failure actions into structural control flow adaptations. Data flow operators adapt the data flow after a control flow adaptation, if necessary. Mechanisms for the Handling of Inter-Workflow Implications of Failure Situations: AgentWork provides novel mechanisms to decide whether a failure situation occurring to a workflow affects other workflows that communicate and cooperate with this workflow. 
In particular, AgentWork derives the temporal implications of a dynamic adaptation by estimating the duration that will be needed to process the changed workflow definition (in comparison with the original definition). Furthermore, qualitative implications of the dynamic change are determined. For this purpose, so-called quality measuring objects are introduced. All mechanisms provided by AgentWork allow users to interact during the failure-handling process. In particular, the user has the possibility to reject or modify suggested workflow adaptations. A Prototypical Implementation: Finally, a prototypical CORBA-based implementation of AgentWork is described. This implementation supports the integration of AgentWork into the distributed and heterogeneous environments of real-world organizations such as hospitals or insurance business enterprises.
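The event-condition-action evaluation described above can be sketched generically: a rule fires when its condition classifies an incoming event as a failure, and its action yields the workflow adaptation to apply. The rules below are invented examples; AgentWork's actual rules are expressed in a temporal object-oriented logic, which this sketch does not attempt to capture.

```python
# Sketch of event-condition-action (ECA) rule evaluation for workflow
# adaptation. Each rule is a (condition, action) pair of callables.

def evaluate(rules, event):
    # Collect the adaptation produced by every rule whose condition
    # classifies this event as a failure situation.
    return [action(event) for condition, action in rules if condition(event)]

# Hypothetical rules: drop a treatment step on a critical lab value,
# postpone an activity when its resource is unavailable.
rules = [
    (lambda e: e["type"] == "lab_result" and e["value"] > 150,
     lambda e: ("drop_activity", "chemotherapy")),
    (lambda e: e["type"] == "resource_unavailable",
     lambda e: ("postpone_activity", e["resource"])),
]
```

A returned action such as `("drop_activity", ...)` would then be translated by control-flow adaptation operators into a structural change of the running workflow, as the abstract describes.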
APA, Harvard, Vancouver, ISO, and other styles
48

Ahmed, Adnan, and Syed Shahram Hussain. "Meta-Model of Resilient information System." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5130.

Full text
Abstract:
The role of information systems has become very important in today's world. It is not only business organizations that use information systems; governments also possess very critical information systems. The need is to make information systems available at all times under any situation. Information systems must have the capability to resist dangers to their services, performance, and existence, and to recover to their normal working state with the available resources in catastrophic situations. Information systems with such a capability can be called resilient information systems. This thesis is written to define resilient information systems, suggest a meta-model for them, and explain how existing technologies can be utilized for the development of resilient information systems.
APA, Harvard, Vancouver, ISO, and other styles
49

Xekalakis, Polychronis. "Mixed speculative multithreaded execution models." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3282.

Full text
Abstract:
The current trend toward chip multiprocessor architectures has placed great pressure on programmers and compilers to generate thread-parallel programs. Improved execution performance can no longer be obtained via traditional single-thread instruction level parallelism (ILP), but, instead, via multithreaded execution. One notable technique that facilitates the extraction of parallel threads from sequential applications is thread-level speculation (TLS). This technique allows programmers/compilers to generate threads without checking for inter-thread data and control dependences, which are then transparently enforced by the hardware. Most prior work on TLS has concentrated on thread selection and mechanisms to efficiently support the main TLS operations, such as squashes, data versioning, and commits. This thesis seeks to enhance TLS functionality by combining it with other speculative multithreaded execution models. The main idea is that TLS already requires extensive hardware support, which when slightly augmented can accommodate other speculative multithreaded techniques. Recognizing that for different applications, or even program phases, the application bottlenecks may be different, it is reasonable to assume that the more versatile a system is, the more efficiently it will be able to execute the given program. As mentioned above, generating thread-parallel programs is hard and TLS has been suggested as an execution model that can speculatively exploit thread-level parallelism (TLP) even when thread independence cannot be guaranteed by the programmer/compiler. Alternatively, the helper threads (HT) execution model has been proposed where subordinate threads are executed in parallel with a main thread in order to improve the execution efficiency (i.e., ILP) of the latter.
Yet another execution model, runahead execution (RA), has also been proposed where subordinate versions of the main thread are dynamically created especially to cope with long-latency operations, again with the aim of improving the execution efficiency of the main thread (ILP). Each one of these multithreaded execution models works best for different applications and application phases. We combine these three models into a single execution model and single hardware infrastructure such that the system can dynamically adapt to find the most appropriate multithreaded execution model. More specifically, TLS is favored whenever successful parallel execution of instructions in multiple threads (i.e., TLP) is possible, and the system can seamlessly transition at run-time to the other models otherwise. In order to understand the tradeoffs involved, we also develop a performance model that allows one to quantitatively attribute overall performance gains to either TLP or ILP in such a combined multithreaded execution model. Experimental results show that our combined execution model achieves speedups of up to 41.2%, with an average of 10.2%, over an existing state-of-the-art TLS system and speedups of up to 35.2%, with an average of 18.3%, over a flavor of runahead execution for a subset of the SPEC2000 Integer benchmark suite. We then investigate how a common ILP-enhancing microarchitectural feature, namely branch prediction, interacts with TLS. We show that branch prediction for TLS is even more important than it is for single core machines. Unfortunately, branch prediction for TLS systems is also inherently harder. Code partitioning and re-executions of squashed threads pollute the branch history, making it harder for predictors to be accurate. We thus propose to augment the hardware so as to accommodate Multi-Path (MP) execution within the existing TLS protocol. Under the MP execution model, all paths following a number of hard-to-predict conditional branches are followed.
MP execution thus removes branches that would otherwise have been mispredicted, in this way helping the processor to exploit more ILP. We show that with only minimal hardware support, one can combine these two execution models into a unified one, which can achieve far better performance than both TLS and MP execution. Experimental results show that our combined execution model achieves speedups of up to 20.1%, with an average of 8.8%, over an existing state-of-the-art TLS system and speedups of up to 125%, with an average of 29.0%, when compared with multi-path execution for a subset of the SPEC2000 Integer benchmark suite. Finally, since systems that support speculative multithreading usually treat all threads equally, they are energy-inefficient. This inefficiency stems from the fact that speculation occasionally fails and, thus, power is spent on threads that will have to be discarded. We propose a profitability-based power allocation scheme, where we “steal” power from non-profitable threads and use it to speed up more useful ones. We evaluate our techniques for a state-of-the-art TLS system and show that, with minimal hardware support, we achieve improvements in ED of up to 25.5%, with an average of 18.9%, for a subset of the SPEC 2000 Integer benchmark suite.
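The kind of performance model mentioned in the abstract, attributing overall gains to TLP versus ILP, can be illustrated with a toy multiplicative decomposition. This is a simplification for illustration only, not the model developed in the thesis; the function and its inputs are assumptions.

```python
# Toy attribution of speedup: total speedup over sequential execution is
# factored into a TLP component (what TLS alone achieves) and an ILP
# component (the extra factor from the combined execution model).

def overall_speedup(t_seq, t_combined, t_tls_only):
    total = t_seq / t_combined     # overall speedup of the combined model
    tlp = t_seq / t_tls_only       # speedup attributable to parallel threads
    ilp = total / tlp              # residual factor, attributed to ILP
    return total, tlp, ilp         # total == tlp * ilp by construction
```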
APA, Harvard, Vancouver, ISO, and other styles
50

Harvey, Connie Winfield. "IISMA, an interactive information system /." Online version of thesis, 1993. http://hdl.handle.net/1850/11221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
