
Dissertations / Theses on the topic 'System architecture modeling'

Consult the top 50 dissertations / theses for your research on the topic 'System architecture modeling.'


1

Iacobucci, Joseph Vincent. "Rapid Architecture Alternative Modeling (RAAM): a framework for capability-based analysis of system of systems architectures." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43697.

Full text
Abstract:
The current national security environment and fiscal tightening make it necessary for the Department of Defense to transition away from a threat-based acquisition mindset towards a capability-based approach to acquire portfolios of systems. This requires that groups of interdependent systems must regularly interact and work together as systems of systems to deliver desired capabilities. Technological advances, especially in the areas of electronics, computing, and communications, also mean that these systems of systems are tightly integrated and more complex to acquire, operate, and manage. In response to this, the Department of Defense has turned to system architecting principles along with capability-based analysis. However, because of the diversity of the systems, technologies, and organizations involved in creating a system of systems, the design space of architecture alternatives is discrete and highly non-linear. The design space is also very large due to the hundreds of systems that can be used, the numerous variations in the way systems can be employed and operated, and also the thousands of tasks that are often required to fulfill a capability. This makes it very difficult to fully explore the design space. As a result, capability-based analysis of system of systems architectures often considers only a small number of alternatives. This places a severe limitation on the development of capabilities that are necessary to address the needs of the warfighter. The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity.
RAAM draws upon principles from computer science, system architecting, and domain-specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission-dependent and mission-independent metrics are considered. Mission-dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission-independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used to reduce computer memory requirements. Lastly, a domain-specific language is created to reduce the computational time of executing the system of systems models. A domain-specific language is a small, usually declarative language that offers expressive power focused on a particular problem domain by establishing an effective means to communicate the semantics from the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems.
Using RAAM with the canonical example, it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of systems architecture studies of different sizes. This was necessary since a system of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability-based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. The SEAD capability model outputs prompted the discovery of a new issue: the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes: the use of portfolio views and top `n' analysis. This proved the usefulness of the RAAM framework and methodology during Pre-Milestone A capability-based analysis.
APA, Harvard, Vancouver, ISO, and other styles
2

Rivera, Joey. "Software system architecture modeling methodology for naval gun weapon systems." Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/10504.

Full text
Abstract:
This dissertation describes the development of an architectural modeling methodology that supports the Navy's requirement to evaluate potential changes to gun weapon systems in order to identify potential software safety risks. The modeling methodology includes a tool (Eagle6) that is based on the Monterey Phoenix (MP) modeling methodology and has the capability to create and verify MP models and execute formal assertions via pre-defined macro commands; it also includes a visualization tool that generates graphical representations of model scenarios. The Eagle6 toolset has two scenario generation modes: exhaustive search, for model verification within scope, and random trace generation, for statistical estimates of nonfunctional properties such as performance. The dissertation demonstrates how the Eagle6 tool may improve the SSSTRP evaluation process by including a methodology to use formal assertions to test for software states that are considered unsafe.
3

Li, Lu. "New Method for Robotic Systems Architecture Analysis, Modeling, and Design." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1562595008913311.

Full text
4

Melouk, Sharif. "Transportation system modeling using the High Level Architecture." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/440.

Full text
Abstract:
This dissertation investigates the High Level Architecture (HLA) as a possible distributed simulation framework for transportation systems. The HLA is an object-oriented approach to distributed simulations developed by the Department of Defense (DoD) to handle the issues of reuse and interoperability of simulations. The research objectives are as follows: (1) determine the feasibility of making existing traffic management simulation environments HLA compliant; (2) evaluate the usability of existing HLA support software in the transportation arena; (3) determine the usability of methods developed by the military to test for HLA compliance on traffic simulation models; and (4) examine the possibility of using the HLA to create Internet-based virtual environments for transportation research. These objectives were achieved in part via the development of a distributed simulation environment using the HLA. Two independent traffic simulation models (federates) comprised the environment (federation). A CORSIM federate models a freeway feeder road with an on-ramp while an Arena federate models a tollbooth exchange.
5

KUO, FENG-YANG. "AN ARCHITECTURE FOR DIALOGUE MANAGEMENT SUPPORT IN INFORMATION SYSTEMS (FRAMEWORK, MODELING DYNAMIC, METHODOLOGY)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187932.

Full text
Abstract:
The management of man-computer dialogues involves policies, procedures, and methodologies that enable users and designers to control, monitor, and enhance the user-computer interface. Effective dialogue management can be facilitated by a computer-aided workbench of dialogue management tools that integrate pertinent environmental attributes into executable dialogue forms. Consequently, a methodology for generating dialogue designs is required. This research presents a framework for modeling user-computer interactions, or dialogues. The approach taken herein focuses on analysis of task, user, and information technology attributes. This analytical framework isolates dialogue entities and entity groupings. Together, these entities and their groupings suggest a language for information presentation and elicitation in the user-computer dialogue process. As a result, alternative dialogue models can be specified independent of hardware and software technologies. Furthermore, these models can be evaluated to ensure completeness, consistency, and integrity. Under this framework, various dialogue management functions can be integrated into a generalized dialogue management environment. Such an environment facilitates the transformation of task, user, and information technology attributes into executable dialogue definitions. The architecture of this environment is characterized by functionally layered and modularized software tools for dialogue management. The implementation of the proposed methodologies and the dialogue management architecture results in a set of dialogue management design facilities. These facilities foster effective management of dialogues within organizations and lead to a better understanding of the dialogue process.
6

Welling, Karen Noiva. "Modeling the water consumption of Singapore using system dynamics." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65749.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 217-226).
Water resources are essential to life, and in urban areas, the high demand density and finite local resources often engender conditions of relative water scarcity. To overcome this scarcity, governments intensify infrastructure and project demand into the future. Growth in the economy, population, and affluence of cities increases water demand, and water demand for many cities will increase into the future, requiring additional investments in water infrastructure. More sustainable policies for water will require capping socioeconomic water demand and reducing the associated demand for non-renewable energy and material resources. The thesis consists of the formulation of a System Dynamics model to replicate historic trends in water consumption for the growing city of Singapore. The goal of the model is to provide a platform for assessing socioeconomic demand trends relative to current water resources and water management policies and for examining how changes in climate and infrastructure costs might impact water availability over time. The model was calibrated to historical behavior, and scenarios examined the vulnerability of supply to changing demand, climate, and cost. The outcome is a qualitative dynamic assessment of the circumstances under which Singapore's current policies allow it to meet its goals. Singapore was chosen as the case study to demonstrate the methodology, but in the future, the model will be applied to other cities to develop a typology of cities relative to water resources.
by Karen Noiva Welling.
S.M.
7

Oh, Byong Mok 1969. "A system for image-based modeling and photo editing." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8511.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2002.
Includes bibliographical references (p. 169-178).
Traditionally in computer graphics, a scene is represented by geometric primitives composed of various materials and a collection of lights. Recently, techniques for modeling and rendering scenes from a set of pre-acquired images have emerged as an alternative approach, known as image-based modeling and rendering. Much of the research in this field has focused on reconstructing and rerendering from a set of photographs, while little work has been done to address the problem of editing and modifying these scenes. On the other hand, photo-editing systems, such as Adobe Photoshop, provide a powerful, intuitive, and practical means to edit images. However, these systems are limited by their two-dimensional nature. In this thesis, we present a system that extends photo editing to 3D. Starting from a single input image, the system enables the user to reconstruct a 3D representation of the captured scene, and edit it with the ease and versatility of 2D photo editing. The scene is represented as layers of images with depth, where each layer is an image that encodes both color and depth. A suite of user-assisted tools, based on a painting metaphor, is employed to extract layers and assign depths. The system enables editing from different viewpoints, extracting and grouping of image-based objects, and modifying the shape, color, and illumination of these objects. As part of the system, we introduce three powerful new editing tools. These include two new clone brushing tools: the non-distorted clone brush and the structure-preserving clone brush. They permit copying of parts of an image to another via a brush interface, but alleviate distortions due to perspective foreshortening and object geometry.
The non-distorted clone brush works on arbitrary 3D geometry, while the structure-preserving clone brush, a 2D version, assumes a planar surface, but has the added advantage of working directly in 2D photo-editing systems that lack depth information. The third tool, a texture-illuminance decoupling filter, discounts the effect of illumination on uniformly textured areas by decoupling large- and small-scale features via bilateral filtering. This tool is crucial for relighting and changing the materials of the scene. There are many applications for such a system, for example architectural, lighting and landscape design, entertainment and special effects, games, and virtual TV sets. The system allows the user to superimpose scaled architectural models into real environments, or to quickly paint a desired lighting scheme of an interior, while being able to navigate within the scene for a fully immersive 3D experience. We present examples and results of complex architectural scenes, 360-degree panoramas, and even paintings, where the user can change viewpoints, edit the geometry and materials, and relight the environment.
by Byong Mok Oh.
Ph.D.
8

Zuckerman, Oren 1970. "System blocks : learning about systems concepts through hands-on modeling and simulation." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/26922.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.
Includes bibliographical references (leaves 98-101).
The world is complex and dynamic. Our lives and environment are constantly changing. We are surrounded by all types of interconnected, dynamic systems: ecosystems, financial markets, business processes, and social systems. Nevertheless, research has shown that people's understanding of dynamic behavior is extremely poor. In this thesis I present System Blocks, a new learning technology that facilitates hands-on modeling and simulation of dynamic behavior. System Blocks, by making processes visible and manipulable, can help people learn about the core concepts of systems. System Blocks provide multiple representations of system behavior (using lights, sounds, and graphs), in order to support multiple learning styles and more playful explorations of dynamic processes. I report on an exploratory study I conducted with ten 5th grade students and five preschool students. The students used System Blocks to model and simulate systems, and interacted with concepts that are traditionally considered "too hard" for pre-college students, such as net-flow dynamics and positive feedback. My findings suggest that using System Blocks as a modeling and simulation platform can provide students an opportunity to confront their misconceptions about dynamic behavior, and help students revise their mental models towards a deeper understanding of systems concepts.
by Oren Zuckerman.
S.M.
9

Quinn, David James Ph D. Massachusetts Institute of Technology. "Modeling the resource consumption of Housing in New Orleans using System Dynamics." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43745.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 130-136).
This work uses System Dynamics as a methodology to analyze the resource requirements of New Orleans as it recovers from Hurricane Katrina. It examines the behavior of the city as a system of stocks, flows, and time delays at a macro level. The models used to simulate this behavior are compared to historic data. The construction materials, energy, and labor required to construct several different types of housing systems are examined, and these data are combined with the macro-scale analysis of the city. Several alternative scenarios are proposed based on the interactions of the feedback loops identified.
by David James Quinn.
S.M.
10

Armstrong, Michael James. "A process for function based architecture definition and modeling." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26631.

Full text
Abstract:
Thesis (M. S.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Mavris, Dimitri; Committee Member: Garcia, Elena; Committee Member: Soban, Danielle. Part of the SMARTech Electronic Thesis and Dissertation Collection.
11

Ismail, Ayman (Ayman Adel) 1973. "A distributed system architecture for spatial data management to support engineering modeling." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/67524.

Full text
Abstract:
Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 1999.
Includes bibliographical references (leaves 48-50).
This research seeks ways to manage the process of analysis and synthesis of geographic data to support collaboration among researchers, planners, and engineers working on a spatial problem. This question is addressed on two levels. The first level examines the abstraction and representation of the analysis process, using the Unified Modeling Language. The second level examines the distributed environment that enables such collaboration, and proposes a three-tier distributed system architecture. The interdisciplinary Urban Respiration project provides a context and examples illustrating the need for such design. A prototype application is developed to test and understand the applicability of the proposed designs.
by Ayman Ismail.
M.C.P.
12

Shimmin, Kyle. "An Architecture for Rapid Modeling and Simulation of an Air-Vehicle System." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1469453436.

Full text
13

Wang, Renzhong. "Executable system architecting using systems modeling language in conjunction with Colored Petri Nets - a demonstration using the GEOSS network centric system." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.umr.edu/thesis/pdf/Wang_09007dcc803a6d5e.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed November 30, 2007). Includes bibliographical references (p. 199-209).
14

Sachs, Gregory (Gregory Dennis). "A principle based system architecture framework applied for defining, modeling & designing next generation smart grid systems." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62773.

Full text
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 81).
A strong and growing desire exists, throughout society, to consume electricity from clean and renewable energy sources, such as solar, wind, biomass, geothermal, and others. Due to the intermittent and variable nature of electricity from these sources, our current electricity grid is incapable of collecting, transmitting, and distributing this energy effectively. The "Smart Grid" is a term which has come to represent this 'next generation' grid, capable of delivering not only environmental benefits but also key economic, reliability, and energy security benefits. Due to the high complexity of the electricity grid, a principle-based System Architecture framework is presented as a tool for analyzing, defining, and outlining potential pathways for infrastructure transformation. Through applying this framework to the Smart Grid, beneficiaries and stakeholders are identified, upstream and downstream influences on design are analyzed, and a succinct outline of benefits and functions is produced. The first phase of grid transformation is establishing a robust communications and measurement network. This network will enable customer participation and increase energy efficiency through smart metering, real-time pricing, and demand response programs. As penetration of renewables increases, the high variability and uncontrollability of additional energy sources will cause significant operation and control challenges. To mitigate this variability, reserve margins will be adjusted and grid-scale energy storage (such as compressed air, flow batteries, and plug-in hybrid electric vehicles, or PHEVs) will begin to be introduced. Achieving over 15% renewable energy penetration marks the second phase of transformation. The third phase is enabling mass adoption, whereby over 40% of our energy will come from renewable sources. This level of penetration will only be achieved through fast supply and demand balancing controls and large-scale storage.
Robust modeling must be developed to test various portfolio configurations.
by Gregory Sachs.
S.M. in Engineering and Management
15

Valdes, Francisco Javier. "Manufacturing compliance analysis for architectural design: a knowledge-aided feature-based modeling framework." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54973.

Full text
Abstract:
Given that achieving nominal (all dimensions are theoretically perfect) geometry is challenging during building construction, understanding and anticipating sources of geometric variation through tolerances modeling and allocation is critical. However, existing building modeling environments lack the ability to support coordinated, incremental and systematic specification of manufacturing and construction requirements. This issue becomes evident when adding multi-material systems produced off site by different vendors during building erection. Current practices to improve this situation include costly and time-consuming operations that challenge the relationship among the stakeholders of a project. As one means to overcome this issue, this research proposes the development of a knowledge-aided modeling framework that integrates a parametric CAD tool with a system modeling application to assess variability in building construction. The CAD tool provides robust geometric modeling capabilities, while System Modeling allows for the specification of feature-based manufacturing requirements aligned with construction standards and construction processes know-how. The system facilitates the identification of conflicting interactions between tolerances and manufacturing specifications of building material systems. The expected contributions of this project are the representation of manufacturing knowledge and tolerances interaction across off-site building subsystems to identify conflicting manufacturing requirements and minimize costly construction errors. The proposed approach will store and allocate manufacturing knowledge as Model-Based Systems Engineering (MBSE) design specifications for both single and multiple material systems. Also, as new techniques in building design and construction are beginning to overlap with engineering methods and standards (e.g. in-factory prefabrication), this project seeks to create collaborative scenarios between MBSE and Building Information Modeling (BIM) based on parametric, simultaneous, software integration to reduce human-to-data translation errors, improving model consistency among domains. Important sub-stages of this project include the comprehensive review of modeling and allocation of tolerances and geometric deviations in design, construction and engineering; an approach for model integration among System Engineering models, mathematical engines and BIM (CAD) models; and finally, a demonstration computational implementation of a System-level tolerances modeling and allocation approach.
16

Rosell, Peter. "Enterprise Architecture Modeling of Core Administrative Systems at KTH : A Modifiability Analysis." Thesis, KTH, Industriella informations- och styrsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-99166.

Full text
Abstract:
This project presents a case study of modifiability analysis of the Information Systems that are central to the core business processes of the Royal Institute of Technology (KTH) in Stockholm, Sweden, carried out by creating, updating, and using models. The case study was limited to modifiability regarding only the specified Information Systems. The method selected was Enterprise Architecture, together with Enterprise Architecture Analysis research results and tools from the Industrial Information and Control Systems department of the same university, used jointly with the ArchiMate modelling language to create the models and perform the analysis. The results varied considerably across the system models. The Alumni Community system seemed to have very high modifiability, whereas the Ladok på Webben system seemed to have the lowest modifiability, with the other systems ranging in between. The case study identified three somewhat more critical systems among all the systems analysed: Ladok på Webben, Nya Antagningen, and Ladok Nouveau. The first two showed either very low or low modifiability while being highly coupled to the other systems; therefore any modification to these two systems would most likely cause effects that would require change in interconnected systems. Ladok Nouveau, while having average modifiability, holds a critical position in process activities and is nearly isolated from all other systems, making them indirectly dependent on it through the interconnected LADOK database. The study showed that the systems developed at KTH are comparable with systems developed by commercial enterprises in terms of modifiability. The study also provided insight into an Enterprise Architecture where the systems have different development origins and how this could affect modifiability and analysis.
17

Sandberg, Andreas. "Understanding Multicore Performance : Efficient Memory System Modeling and Simulation." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-220652.

Full text
Abstract:
To increase performance, modern processors employ complex techniques such as out-of-order pipelines and deep cache hierarchies. While the increasing complexity has paid off in performance, it has become harder to accurately predict the effects of hardware/software optimizations in such systems. Traditional microarchitectural simulators typically execute code 10 000×–100 000× slower than native execution, which leads to three problems: First, high simulation overhead makes it hard to use microarchitectural simulators for tasks such as software optimizations where rapid turn-around is required. Second, when multiple cores share the memory system, the resulting performance is sensitive to how memory accesses from the different cores interleave. This requires that applications are simulated multiple times with different interleaving to estimate their performance distribution, which is rarely feasible with today's simulators. Third, the high overhead limits the size of the applications that can be studied. This is usually solved by only simulating a relatively small number of instructions near the start of an application, with the risk of reporting unrepresentative results. In this thesis we demonstrate three strategies to accurately model multicore processors without the overhead of traditional simulation. First, we show how microarchitecture-independent memory access profiles can be used to drive automatic cache optimizations and to qualitatively classify an application's last-level cache behavior. Second, we demonstrate how high-level performance profiles, that can be measured on existing hardware, can be used to model the behavior of a shared cache. Unlike previous models, we predict the effective amount of cache available to each application and the resulting performance distribution due to different interleaving without requiring a processor model. Third, in order to model future systems, we build an efficient sampling simulator. 
By using native execution to fast-forward between samples, we reach new samples much faster than a single sample can be simulated. This enables us to simulate multiple samples in parallel, resulting in almost linear scalability and a maximum simulation rate close to native execution.
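The sampling strategy this abstract describes can be illustrated with a toy sketch: instead of detail-simulating every interval of a program, only a few randomly chosen samples are simulated and whole-program behavior is estimated from them. The synthetic per-interval CPI numbers below are made up for illustration; nothing here models the thesis' actual simulator:

```python
import random

# Synthetic workload: per-interval CPI, with every 10th interval more expensive.
random.seed(42)
workload = [1.0 + 0.5 * (i % 10 == 0) for i in range(100_000)]

# "Full simulation": average CPI over every interval (slow in reality).
full = sum(workload) / len(workload)

# Sampled simulation: detail-simulate only 500 randomly chosen intervals,
# fast-forwarding past the rest, and estimate the mean from the samples.
samples = random.sample(range(len(workload)), 500)
estimate = sum(workload[i] for i in samples) / len(samples)

print(full, estimate)
```

Because the samples are independent, they can also be simulated in parallel, which is the source of the near-linear scalability mentioned above.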
CoDeR-MP
UPMARC
APA, Harvard, Vancouver, ISO, and other styles
18

Lagerström, Robert. "Enterprise Systems Modifiability Analysis : An Enterprise Architecture Modeling Approach for Decision Making." Doctoral thesis, KTH, Industriella informations- och styrsystem, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12341.

Full text
Abstract:
Contemporary enterprises depend to a great extent on software systems. During the past decades the number of systems has been constantly increasing and these systems have become more integrated with one another. This has led to a growing complexity in managing software systems and their environment. At the same time, business environments today need to progress and change rapidly to keep up with evolving markets. As the business processes change, the systems need to be modified in order to continue supporting the processes. The increasing complexity and the growing demand for rapid change make the management of enterprise systems a very important issue. In order to achieve effective and efficient management, it is essential to be able to analyze system modifiability (i.e. to estimate the future change cost). This is addressed in the thesis by employing architectural models. The contribution of this thesis is a method for software system modifiability analysis using enterprise architecture models. The contribution includes an enterprise architecture analysis formalism, a modifiability metamodel (i.e. a modeling language), and a method for creating metamodels. The proposed approach allows IT decision makers to model and analyze change projects. By doing so, high-quality decision support regarding change project costs is obtained. This thesis is a composite thesis consisting of five papers and an introduction. Paper A evaluates a number of analysis formalisms and proposes extended influence diagrams to be employed for enterprise architecture analysis. Paper B presents the first version of the modifiability metamodel. In Paper C, a method for creating enterprise architecture metamodels is proposed. This method aims to be general, i.e. it can be employed for other IT-related quality analyses such as interoperability, security, and availability. The paper does however use modifiability as a running case.
The second version of the modifiability metamodel for change project cost estimation is fully described in Paper D. Finally, Paper E validates the proposed method and metamodel by surveying 110 experts and studying 21 change projects at four large Nordic companies. The validation indicates that the method and metamodel are useful, contain the right set of elements and provide good estimation capabilities.
QC20100716
APA, Harvard, Vancouver, ISO, and other styles
19

Vogt, Aline. "Applying Grid-Partitioning To The Architecture of the Disaster Response Mitigation (DISarm) System." ScholarWorks@UNO, 2007. http://scholarworks.uno.edu/td/593.

Full text
Abstract:
The need for a robust system architecture to support software development is well known. In enterprise software development, this must be realized in a multi-tier environment for deployment to a software framework. Many popular integrated development environment (IDE) tools for component-based frameworks push multi-tier partitioning by assisting developers with convenient code generation tools and software deployment tools which package the code. However, if components are not packaged wisely, modifying and adding components becomes difficult and expensive. To help manage change, vertical partitioning can be applied to compartmentalize components according to function and role, resulting in a grid partitioning. This thesis advocates a design methodology that enforces vertical partitioning on top of the horizontal multi-tier partitioning, and provides guidelines that document the grid-partitioning realization in enterprise software development processes as applied in the J2EE framework.
APA, Harvard, Vancouver, ISO, and other styles
20

Närman, Per. "Enterprise Architecture for Information System Analysis : Modeling and assessing data accuracy, availability, performance and application usage." Doctoral thesis, KTH, Industriella informations- och styrsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101494.

Full text
Abstract:
Decisions concerning IT systems are often made without adequate decision support. This has led to unnecessary IT costs and failures to realize business benefits. This thesis presents a framework for the analysis of four information system properties relevant to IT decision making. The work is founded on enterprise architecture, a model-based IT and business management discipline. Based on the existing ArchiMate framework, a new enterprise architecture framework has been developed and implemented in a software tool. The framework supports modeling and analysis of data accuracy, service performance, service availability and application usage. To analyze data accuracy, data flows are modeled; the service availability analysis uses fault tree analysis; the performance analysis employs queuing networks; and the application usage analysis combines the Technology Acceptance Model and the Task-Technology Fit model. The accuracy of the framework's estimates was empirically tested. Data accuracy and service performance were evaluated in studies at the same power utility. Service availability was tested in multiple studies at banks and power utilities. Data was collected through interviews with system development or maintenance staff. The application usage model was tested in the maintenance management domain. Here, data was collected by means of a survey answered by 55 respondents from three power utilities, one manufacturing company and one nuclear power plant. The service availability studies provided estimates that were accurate within a few hours of logged yearly downtime. The data accuracy estimate was correct within a percentage point when compared to a sample of data objects. Deviations for four out of five service performance estimates were within 15 % of measured values. The application usage analysis explained a high degree of variation in application usage when applied to the maintenance management domain.
During the studies of data accuracy, service performance and service availability, records were kept concerning the required modeling and analysis effort. The estimates were obtained with a total effort of about 20 man-hours per estimate. In summary the framework should be useful for IT decision-makers requiring fairly accurate, but not too expensive, estimates of the four properties.
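The fault-tree style availability analysis mentioned above can be illustrated with a minimal sketch. The component unavailabilities and the AND/OR structure below are hypothetical, not figures from the thesis:

```python
# Fault-tree availability sketch: the service fails if its application
# server fails (OR gate) or if BOTH redundant database nodes fail (AND gate).
def or_gate(*unavail):
    # OR gate: the gate fails if any input fails.
    p = 1.0
    for u in unavail:
        p *= (1.0 - u)
    return 1.0 - p

def and_gate(*unavail):
    # AND gate: the gate fails only if all inputs fail (assumed independent).
    p = 1.0
    for u in unavail:
        p *= u
    return p

app_server = 0.001   # hypothetical unavailability (0.1 %)
db_node = 0.01       # hypothetical per-node unavailability (1 %)

service_unavail = or_gate(app_server, and_gate(db_node, db_node))
availability = 1.0 - service_unavail
yearly_downtime_h = service_unavail * 8760  # hours of downtime per year

print(f"availability = {availability:.6f}, downtime ~ {yearly_downtime_h:.1f} h/year")
```

Note how redundancy shrinks the database branch's contribution from 1 % to 0.01 %, which is exactly the kind of effect an architecture model lets a decision maker compare across design alternatives.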

QC 20120912

APA, Harvard, Vancouver, ISO, and other styles
21

Parsons, Mark Allen. "Network-Based Naval Ship Distributed System Design and Mission Effectiveness using Dynamic Architecture Flow Optimization." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104198.

Full text
Abstract:
This dissertation describes the development and application of a naval ship distributed system architectural framework, Architecture Flow Optimization (AFO), and Dynamic Architecture Flow Optimization (DAFO) to naval ship Concept and Requirements Exploration (CandRE). The architectural framework decomposes naval ship distributed systems into physical, logical, and operational architectures representing the spatial, functional, and temporal relationships of distributed systems, respectively. This decomposition greatly simplifies the Mission, Power, and Energy System (MPES) design process for use in CandRE. AFO and DAFO are network-based linear programming optimization methods used to design and analyze MPES at a sufficient level of detail to understand system energy flow, define MPES architecture and sizing, model operations, reduce system vulnerability and improve system reliability. AFO incorporates system topologies, energy coefficient component models, preliminary arrangements, and (nominal and damaged) steady-state scenarios to minimize the energy flow cost required to satisfy all operational scenario demands and constraints. DAFO applies the same principles as AFO and adds a second commodity, data flow. DAFO also integrates with a warfighting model, operational model, and capabilities model that quantify tasks and capabilities through system measures of performance at specific capability nodes. This enables the simulation of operational situations including MPES configuration and operation during CandRE. This dissertation provides an overview of design tools developed to implement this process and methods, including objective attribute metrics for cost, effectiveness and risk, a ship synthesis model, hullform exploration and MPES explorations using design of experiments (DOEs) and response surface models.
Doctor of Philosophy
This dissertation describes the development and application of a warship system architectural framework, Architecture Flow Optimization (AFO), and Dynamic Architecture Flow Optimization (DAFO) to warship Concept and Requirements Exploration (CandRE). The architectural framework decomposes warship systems into physical, logical, and operational architectures representing the spatial, functional, and time-based relationships of systems, respectively. This decomposition greatly simplifies the Mission, Power, and Energy System (MPES) design process for use in CandRE. AFO and DAFO are network-based linear programming optimization methods used to design and analyze MPES at a sufficient level of detail to understand system energy usage, define MPES connections and sizing, model operations, reduce system vulnerability and improve system reliability. AFO incorporates system templates, simple physics and energy-based component models, preliminary arrangements, and simple undamaged/damaged scenarios to minimize the energy flow usage required to satisfy all operational scenario demands and constraints. DAFO applies the same principles and adds a second commodity, data flow, representing system operation. DAFO also integrates with a warfighting model, operational model, and capabilities model that quantify tasks and capabilities through system measures of performance. This enables the simulation of operational situations including MPES configuration and operation during CandRE. This dissertation provides an overview of design tools developed to implement this process and methods, including optimization objective attribute metrics for cost, effectiveness and risk.
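The network-flow view behind AFO can be illustrated with a self-contained min-cost flow sketch (successive shortest paths with Bellman-Ford): edges carry energy with a capacity and a per-unit cost, and the demanded flow is routed at minimum total cost. The tiny generator/bus/load network and its costs below are hypothetical, not taken from the dissertation:

```python
# Minimal min-cost flow solver (successive shortest paths).
def min_cost_flow(n, edges, source, sink, demand):
    # edges: list of [u, v, capacity, cost]; build a residual graph.
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])       # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])    # residual edge
    total_cost = 0
    while demand > 0:
        # Bellman-Ford shortest path over residual edges with capacity left.
        dist = [float("inf")] * n
        dist[source] = 0
        parent = [None] * n
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[sink] == float("inf"):
            raise ValueError("demand cannot be satisfied")
        # Bottleneck capacity along the path, then push flow along it.
        push, v = demand, sink
        while v != source:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = sink
        while v != source:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        total_cost += push * dist[sink]
        demand -= push
    return total_cost

# Hypothetical network: 0=source, 1=main generator, 2=backup generator,
# 3=distribution bus, 4=load (sink). Costs are per-unit energy-flow costs.
edges = [
    [0, 1, 100, 0], [0, 2, 100, 0],  # generator availability
    [1, 3, 60, 1],                   # cheap but capacity-limited feed
    [2, 3, 100, 3],                  # expensive backup feed
    [3, 4, 200, 0],                  # bus to load
]
print(min_cost_flow(5, edges, source=0, sink=4, demand=80))  # 60*1 + 20*3 = 120
```

The damaged-scenario analysis the abstract mentions amounts to re-solving the same flow problem with capacities of damaged edges set to zero.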
APA, Harvard, Vancouver, ISO, and other styles
22

Dobslaw, Felix. "An Adaptive, Searchable and Extendable Context Model, enabling cross-domain Context Storage, Retrieval and Reasoning : Architecture, Design, Implementation and Discussion." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12179.

Full text
Abstract:
The specification of communication standards and the increased availability of sensors for mobile phones and mobile systems are responsible for a significantly increasing sensor availability in populated environments. These devices are able to measure physical parameters and make this data available via communication in sensor networks. To take advantage of the information acquired in this way for public services, other parties have to be able to receive and interpret it. Locally measured data could be seen as a means of describing user context. For a generic processing of arbitrary context data, a model for the specification of environments, users, information sources and information semantics has to be defined. Such a model would, in the optimal case, enable global, domain-crossing context usage and hence a broader foundation for context interpretation and integration. This thesis proposes the CII (Context Information Integration) model for the persistence and retrieval of context information in mobile, dynamically changing environments. It discusses the terms context and context modeling based on an analysis of former publications in the field. Furthermore, an architecture and a prototype are presented. Live and historical data are stored and accessed by the same platform and querying processor, but are treated in an optimized fashion. Optimized retrieval for closeness in n-dimensional context spaces is supported by a dedicated method. The implementation enables self-aware, shareable agents that are able to reason or act based upon the global context, including their own. These agents can be considered as being a part of the whole context, being movable and executable for all context-aware applications. By applying open source technology, a satisfying implementation of CII is feasible. The document contains a thorough discussion concerning the software design and further prototype development.
The use cases at the end of the document show the flexibility and extendability of the model and its implementation as a context-base for three entirely different applications.
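The "closeness in n-dimensional context spaces" retrieval mentioned above can be sketched as a nearest-neighbor query over stored context vectors. The entry names and the three dimensions (latitude, longitude, temperature) are hypothetical:

```python
import math

# Retrieve the k stored context entries nearest to a query point
# in an n-dimensional context space (Euclidean distance).
def nearest_contexts(entries, query, k=2):
    def dist(point):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(point, query)))
    return sorted(entries, key=lambda e: dist(e[1]))[:k]

# Hypothetical stored contexts: (entry id, (lat, lon, temperature)).
entries = [
    ("sensor-a", (59.33, 18.06, 21.0)),
    ("sensor-b", (59.40, 17.95, 19.5)),
    ("sensor-c", (62.39, 17.31, 4.0)),
]
print(nearest_contexts(entries, (59.35, 18.00, 20.0), k=2))
```

A real context store would normalize the dimensions first, since raw coordinates and temperatures are not directly comparable; this sketch only shows the retrieval shape.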
MediaSense
APA, Harvard, Vancouver, ISO, and other styles
23

Arndt, Grégory. "System architecture and circuit design for micro and nanoresonators-based mass sensing arrays." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112358/document.

Full text
Abstract:
The thesis focuses on micro/nanoresonators and their readout electronics. The mechanical components are used to measure masses below one attogram (10−18 g) or very low gas concentrations. These components can then be arranged in arrays in order to build mass spectrometers or gas detectors. To reach the required resolutions, a harmonic detection of resonance was chosen, which detects variations of the resonant frequency of a mechanical nanostructure. The resonator's dimensions are reduced in order to increase the mass sensitivity, but the electrical signal level at the component's output is reduced as well. This weak signal therefore requires designing new electromechanical transduction schemes and electronic architectures that minimize noise and parasitic couplings and that can be implemented in arrays
The PhD project focuses on micro or nanomechanical resonators and their surrounding electronics environment. Mechanical components are employed to sense masses in the attogram range (10−18 g) or extremely low gas concentrations. The components can then be implemented in arrays in order to construct cutting-edge mass spectrometers or gas chromatographs. To reach the necessary resolutions, a harmonic detection of resonance technique is employed that measures the shift of the resonant frequency of a tiny mechanical structure due to an added mass or a gas adsorption. The need of shrinking the resonator's dimensions to enhance the sensitivity also reduces the signal delivered by the component. The resonator low output signal requires employing new electromechanical resonator topologies and electronic architectures that minimize the noise, the parasitic couplings and that can be implemented in arrays
APA, Harvard, Vancouver, ISO, and other styles
24

Saylor, Kase J., William A. Malatesta, and Ben A. Abbott. "TENA in a Telemetry Network System." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606198.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
The integrated Network Enhanced Telemetry (iNET) and Test and Training Enabling Architecture (TENA) projects are working to understand how TENA will perform in a Telemetry Network System. This paper discusses a demonstration prototype that is being used to investigate the use of TENA across a constrained test environment simulating iNET capabilities. Some of the key elements being evaluated are throughput, latency, memory utilization, memory footprint, and bandwidth. The results of these evaluations will be presented. Additionally, the paper briefly discusses modeling and metadata requirements for TENA and iNET.
APA, Harvard, Vancouver, ISO, and other styles
25

Gleizes, Marie-Pierre. "Spécification d'une architecture de système multi-expert." Toulouse 3, 1987. http://www.theses.fr/1987TOU30256.

Full text
Abstract:
Presentation of an architecture model able to support the different types of knowledge involved in solving interdisciplinary problems. The two main concerns in developing this model were the implementation of cooperation and the structuring of the various kinds of knowledge involved, in order to obtain a system that is homogeneous, modular, simple to check for consistency, and fast during problem solving
APA, Harvard, Vancouver, ISO, and other styles
26

Wronski, Jacob (Jacob Andrzej). "A design tool architecture for the rapid evaluation of product design tradeoffs in an Internet-based system modeling environment." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32375.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005.
Includes bibliographical references (leaves 117-122).
This thesis presents a computer-aided design tool for the rapid evaluation of design tradeoffs in an integrated product modeling environment. The goal of this work is to provide product development organizations with better means of exploring product design spaces so as to identify promising design candidates early in the concept generation phase. Ultimately, such practices would streamline the product development process. The proposed design tool is made up of two key components: an optimization engine, and the Distributed Object-based Modeling Environment. This modeling environment is part of an ongoing research initiative at the Computer-Aided Design Lab. The optimization engine consists of a multi-objective evolutionary algorithm developed at the École Polytechnique Fédérale de Lausanne. The first part of this thesis provides a comprehensive survey of all topics relevant to this work. Traditional product development is discussed along with some of the challenges inherent in this process. Integrated modeling tools are surveyed. Finally, a variety of optimization methods and algorithms are discussed, along with a review of commercially available optimization packages. The second part discusses the developed design tool and the implications of this work on traditional product development. After a detailed description of the optimization algorithm, use of the design tool is illustrated with a trivial design example.
(cont.) Enabled by this work, a new "target-driven" design approach is introduced. In this approach, individuals select optimal tradeoffs between competing design objectives and use them, as design targets, to configure the integrated product model so as to achieve best-overall product performance. Validation of this design approach is done through the design of a hybrid PV-diesel energy system for two different applications. It is shown that the design tool effectively evaluates design tradeoffs and allows for the rapid derivation of optimal design alternatives.
by Jacob Wronski.
S.M.
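The tradeoff selection described in this abstract rests on finding non-dominated designs (the Pareto front) among competing objectives. A minimal sketch, with made-up candidate designs and both objectives (e.g. cost and energy losses) to be minimized:

```python
# Keep only the non-dominated points: a design p is dominated if some other
# design q is at least as good in every objective (and differs from p).
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (cost, losses) pairs for six candidate designs.
designs = [(3, 9), (4, 7), (5, 5), (6, 6), (7, 4), (8, 8)]
print(pareto_front(designs))  # (6, 6) and (8, 8) are dominated by (5, 5)
```

In the "target-driven" approach above, a designer would pick a point on this front and use it as the set of targets for configuring the integrated product model.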
APA, Harvard, Vancouver, ISO, and other styles
27

Seghiri, Rachida. "Modélisation et simulation d’une architecture d’entreprise - Application aux Smart Grids." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC053/document.

Full text
Abstract:
Smart Grids are intelligent power grids that optimize the production, distribution, and consumption of electricity by introducing information and communication technologies into the electrical network. Smart Grids strongly impact the whole enterprise architecture of grid operators. Simulating an enterprise architecture allows the stakeholders involved to anticipate such impacts. The objective of this thesis is therefore to provide models, methods, and tools for modeling and then simulating an enterprise architecture in order to criticize or validate it. In this context, we propose a multi-view framework, named ExecuteEA, to facilitate the modeling of enterprise architectures by automating the analysis of their structure and behavior through simulation. ExecuteEA treats each of the business, functional, and application views according to three aspects: information, processes, and goals. To address the need for business/IT alignment, we introduce an additional view: the integration view. In this view we propose to model the consistency links within and between views. We also leverage model-driven engineering techniques as supporting techniques for the modeling and simulation of an enterprise architecture. We then validate our proposal through a Smart Grid business case concerning the management of a fleet of electric vehicles
In this thesis, we propose a framework that facilitates modeling Enterprise Architectures (EA) by automating analysis, prediction, and simulation, in order to address the key issue of business/IT alignment. We present our approach in the context of Smart Grids, which are power grids enabled with Information and Communication Technologies. Extensive studies try to foresee the impact of Smart Grids on electric components, telecommunication infrastructure, and industrial automation and IT. However, Smart Grids also have an impact on the overall EA of grid operators. Therefore, our framework enables stakeholders to validate and criticize their modeling choices for the EA in the context of Smart Grids. What we propose is a multi-view framework with three aspects – information, processes, and goals – for each view. In addition to the business, functional and application views, we add an integration view to ensure inter- and intra-view consistency. We rely on Model Driven Engineering (MDE) techniques to ease the holistic modeling and simulation of enterprise systems. Finally, we show the utility of our approach by applying it on a Smart Grid case study: the management of an electric vehicle fleet
APA, Harvard, Vancouver, ISO, and other styles
28

Gunay, Serkan. "Spatial Information System For Conservation Of Historic Buildings Case Study: Doganlar Church Izmir." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608388/index.pdf.

Full text
Abstract:
Conservation of historic buildings requires comprehensive and correct information about the buildings, analyzed in the conservation decision-making process in a systematic and rational approach. Geographical Information Systems (GIS), which can be defined as computer-based systems for handling geographical and spatial data, are advantageous in such cases. GIS have the potential to support the conservation decision-making process with their storing, analyzing and monitoring capabilities. Therefore, information systems like GIS can be seen as a potentially significant instrument for dealing with conservation projects. This thesis aims to analyze the transformation of the data collected in the conservation process into practical information, in order to adapt this process to a spatial information system. In this context, the use of Geographical Information Systems is tested in the process of historic building conservation on a spatial information system designed for Doganlar Church in Izmir, chosen as the case study. Hence, the advantages and disadvantages of local information systems in the conservation decision-making process of historic buildings can be assessed.
APA, Harvard, Vancouver, ISO, and other styles
29

Luo, Meiling. "Indoor radio propagation modeling for system performance prediction." PhD thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00961244.

Full text
Abstract:
This thesis aims at proposing all the possible enhancements for the Multi-Resolution Frequency-Domain ParFlow (MR-FDPF) model. As a deterministic radio propagation model, the MR-FDPF model possesses a high level of accuracy, but it also suffers from some common limitations of deterministic models. For instance, realistic radio channels are not deterministic but rather random processes, due to e.g. moving people or moving objects, and thus they cannot be completely described by a purely deterministic model. In this thesis, a semi-deterministic model is proposed based on the deterministic MR-FDPF model which introduces a stochastic part to take into account the randomness of realistic radio channels. The deterministic part of the semi-deterministic model is the mean path loss, and the stochastic part comes from the shadow fading and the small scale fading. Besides, many radio propagation simulators provide only mean power predictions. However, mean power alone is not enough to fully describe the behavior of radio channels. It has been shown that fading also has an important impact on radio system performance. Thus, a fine radio propagation simulator should also be able to provide the fading information, so that an accurate Bit Error Rate (BER) prediction can be achieved. In this thesis, the fading information is extracted based on the MR-FDPF model and then a realistic BER is predicted. Finally, the realistic prediction of the BER allows the implementation of adaptive modulation schemes. This has been done in the thesis for three systems: Single-Input Single-Output (SISO) systems, Maximum Ratio Combining (MRC) diversity systems and wideband Orthogonal Frequency-Division Multiplexing (OFDM) systems.
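Why fading information matters for BER prediction can be shown with textbook BPSK formulas: at the same mean SNR, a flat Rayleigh-fading channel has a far higher average bit error rate than a pure AWGN channel. This is a generic illustration, not the MR-FDPF model itself:

```python
import math

def q_function(x):
    # Gaussian tail probability Q(x), via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk_awgn(ebn0):
    # BPSK over AWGN; ebn0 is the linear (not dB) Eb/N0.
    return q_function(math.sqrt(2.0 * ebn0))

def ber_bpsk_rayleigh(ebn0):
    # BPSK averaged over flat Rayleigh fading with mean Eb/N0 = ebn0.
    return 0.5 * (1.0 - math.sqrt(ebn0 / (1.0 + ebn0)))

ebn0 = 10 ** (10.0 / 10.0)  # 10 dB in linear scale
print(ber_bpsk_awgn(ebn0), ber_bpsk_rayleigh(ebn0))
```

A simulator that reports only mean power would implicitly use the AWGN curve; extracting the fading statistics, as the thesis does, is what allows the realistic (much higher) BER to be predicted.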
APA, Harvard, Vancouver, ISO, and other styles
30

Weitz, Noah. "Analysis of Verification and Validation Techniques for Educational CubeSat Programs." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1854.

Full text
Abstract:
Since their creation, CubeSats have become a valuable educational tool for university science and engineering programs. Unfortunately, while aerospace companies invest resources to develop verification and validation methodologies based on larger-scale aerospace projects, university programs tend to focus resources on spacecraft development. This paper looks at two different types of methodologies in an attempt to improve CubeSat reliability: generating software requirements and utilizing system and software architecture modeling. Both the Consortium Requirements Engineering (CoRE) method for software requirements and the Monterey Phoenix modeling language for architecture modeling were tested for usability in the context of PolySat, Cal Poly's CubeSat research program. In the end, neither CoRE nor Monterey Phoenix provided the desired results for improving PolySat's current development procedures. While a modified version of CoRE discussed in this paper does allow for basic software requirements to be generated, the resulting specification does not provide any more granularity than PolySat's current institutional knowledge. Furthermore, while Monterey Phoenix is a good tool to introduce students to model-based systems engineering (MBSE) concepts, the resulting graphs generated for a PolySat specific project were high-level and did not find any issues previously discovered through trial and error methodologies. While neither method works for PolySat, the aforementioned results do provide benefits for university programs looking to begin developing CubeSats.
APA, Harvard, Vancouver, ISO, and other styles
31

Nageba, Ebrahim. "Personalizable architecture model for optimizing the access to pervasive ressources and services : Application in telemedicine." Phd thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00694445.

Full text
Abstract:
The growing development and use of pervasive systems, equipped with increasingly sophisticated functionalities and communication means, offer fantastic potentialities of services, particularly in the eHealth and Telemedicine domains, for the benefit of each citizen, patient or healthcare professional. One of the current societal challenges is to enable a better exploitation of the available services for all actors involved in a given domain. Nevertheless, the multiplicity of the offered services, the functional variety of systems, and the heterogeneity of the needs require the development of knowledge models of these services, system functions, and needs. In addition, the heterogeneity of distributed computing environments, the availability and potential capabilities of the various human and material resources (devices, services, data sources, etc.) required by the different tasks and processes, the variety of services providing users with data, and the interoperability conflicts between schemas and data sources are all issues that we have to consider in our research work. Our contribution aims to empower the intelligent exploitation of ubiquitous resources and to optimize the quality of service in ambient environments. For this, we propose a knowledge meta-model of the main concepts of a pervasive environment, such as Actor, Task, Resource, Object, Service, Location, Organization, etc. This knowledge meta-model is based on ontologies describing the different aforementioned entities from a given domain and their interrelationships. We have then formalized it by using a standard knowledge description language. After that, we have designed an architectural framework called ONOF-PAS (ONtology Oriented Framework for Pervasive Applications and Services), mainly based on ontological models, a set of rules, an inference engine, and object-oriented components for task management and resource processing.
Being generic, extensible, and applicable in different domains, ONOF-PAS has the ability to perform rule-based reasoning to handle various contexts of use and to enable decision making in dynamic and heterogeneous environments, while taking into account the availability and capabilities of the human and material resources required by the multiple tasks and processes executed by pervasive systems. Finally, we have instantiated ONOF-PAS in the telemedicine domain to handle the scenario of the transfer of persons suffering from health problems during their presence in hostile environments such as high mountain resorts or geographically isolated areas. A prototype implementing this scenario, called T-TROIE (Telemedicine Tasks and Resources Ontologies for Inimical Environments), has been developed to validate our approach and the proposed ONOF-PAS framework.
APA, Harvard, Vancouver, ISO, and other styles
32

Werner, Quentin. "Model-based optimization of electrical system in the early development stage of hybrid drivetrains." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0109.

Full text
Abstract:
This work analyses the challenges faced by electric traction components in hybrid drivetrains. It investigates the components and their interactions as an independent entity in order to refine the scope of investigation and to find the best combination of components, rather than simply combining the best individual components. Hybrid vehicles are currently a topic of high interest because they represent a suitable short-term solution on the path toward zero-emission vehicles. Despite their advantages, they are a challenging topic because the electric components must be integrated into a conventional drivetrain architecture. The focus of this work is therefore on determining the right methods to investigate only the electric traction components. The aim and contributions of this work lie in addressing the following statement: determine the sufficient level of detail for modeling electric components at the system level, and develop models and tools to perform dynamic simulations of these components and their interactions in a global system analysis, in order to identify ideal designs of the various drivetrain electric components during the design process. To address these challenges, this work is divided into four main parts across six chapters. First, the current status of hybrid vehicles, electric components, and the associated optimization and simulation methods is presented (chapter one). Then, for each component, the right modeling approach is defined in order to investigate its electrical, mechanical, and thermal behavior, together with methods to evaluate its integration in the drivetrain (chapters two to four). After this, a suitable method is defined, based on a review of relevant previous works, to evaluate the global system and to investigate the interactions between the components (chapter five). 
Finally, the last chapter presents the optimization approach considered in this work and the results obtained by analyzing different systems and cases (chapter six). Building on the analysis of the current status and previous works, and on the simulation tools developed, this work investigates the relationships between voltage, current, and power in different cases. Under the assumptions considered, the results make it possible to determine the influence of these parameters on the components, and the influence of the industrial environment on the optimization results. Considering the current legislative frame, all the results converge toward the same observation with respect to the reference systems: a reduction of the voltage and an increase of the current lead to an improvement in the integration and performance of the system. These observations are tied to the architecture, driving cycle, and development environment considered, but the methods and approaches developed lay the groundwork for extending knowledge on the optimization of electric traction systems. Besides the main optimization, special cases are investigated to show the influence of additional parameters (increased power, 48V system, machine technology, boost converter, etc.). In conclusion, this work has laid the groundwork for further investigations of electric traction components in more electrified vehicles. 
Due to the constantly changing environment, new technologies, and varying legislative frames, this topic remains of high interest, and the following challenges still need deeper investigation: * Application of the methods to other drivetrain architectures (series hybrid, power-split hybrid, fuel-cell vehicle, full electric vehicle), * Investigation of new technologies such as silicon carbide for the power electronics, lithium-sulfur batteries, or switched reluctance machines, * Investigation of other driving cycles and legislative frames, * Integration of additional power electronics structures, * Further validation of the modeling approaches with additional components
APA, Harvard, Vancouver, ISO, and other styles
33

Gréboval, Catherine. "Aide : une approche et une architecture pour rendre opérationnels des modèles conceptuels." Compiègne, 1992. http://www.theses.fr/1992COMP0525.

Full text
Abstract:
To combine the benefits of conceptual models for knowledge acquisition with those of second-generation expert systems for building problem solvers that are more robust and easier to explain, we propose an approach that makes conceptual models operational. This approach relies on the AIDE generator (shell), which allows the knowledge engineer to model at a high level of abstraction. The generator is based on a translation mechanism that automatically compiles the fully formalized conceptual model into a lower-level, directly executable model. In this way, the link between the conceptual model and the computer system is preserved. Beyond the benefits of knowledge-level prototyping, the AIDE approach thus makes it possible to validate and explain at this same high level of abstraction. The SATIN medical diagnosis system provided a first validation of the AIDE generator
APA, Harvard, Vancouver, ISO, and other styles
34

Capdevila, Ibañez Bruno. "Serious game architecture and design : modular component-based data-driven entity system framework to support systemic modeling and design in agile serious game developments." Paris 6, 2013. http://www.theses.fr/2013PA066727.

Full text
Abstract:
For the last ten years, we have witnessed how the inherent learning properties of videogames entice many creators into exploring their potential as a medium of expression for diverse and innovative (serious) purposes. Learning is at the core of the play experience, but it usually takes place in the affective and psychomotor domains. When the learning targets serious content, cognitive/instructional designers must ensure its effectiveness in the cognitive domain. In such eminently multidisciplinary teams (game, technology, cognition, art), understanding and communication are essential for effective collaboration from the early stage of inception. Taking a software engineering approach, we focus on the (multidisciplinary) activities of the development process rather than on the disciplines themselves, with the intent of unifying and clarifying the field. We then propose a software foundation that reinforces this multidisciplinary model through an underdesign approach that favors the creation of collaborative design workspaces. Genome Engine can thereby be considered a data-driven sociotechnical infrastructure that provides non-programmer developers, such as game designers and eventually cognitive designers, with a means to actively participate in the construction of the product design, rather than merely evaluating it at usage time. Its architecture is based on a component-based application framework with an entity-system-of-systems runtime object model, which contributes to modularity, reuse, and adaptability, and provides familiar abstractions that ease communication. Our approach has been extensively evaluated through the development of several serious game projects
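As a rough illustration of the entity-system style of runtime object model the abstract describes, the sketch below shows a minimal data-driven entity system. All names here are hypothetical and do not reflect Genome Engine's actual API: entities are plain ids, data lives in components, and behavior lives in systems that iterate over whichever entities hold the components they need.

```python
class World:
    """Minimal entity-component store (illustrative sketch only)."""

    def __init__(self):
        self.next_id = 0
        self.components = {}  # component type name -> {entity id -> data}

    def create_entity(self, **components):
        eid = self.next_id
        self.next_id += 1
        for name, data in components.items():
            self.components.setdefault(name, {})[eid] = data
        return eid

    def entities_with(self, *names):
        # Entities that own every requested component type.
        ids = set(self.components.get(names[0], {}))
        for name in names[1:]:
            ids &= set(self.components.get(name, {}))
        return ids


def movement_system(world, dt):
    # Behavior is decoupled from data: any entity carrying both a
    # position and a velocity component is moved, whatever else it has.
    for eid in world.entities_with("position", "velocity"):
        pos = world.components["position"][eid]
        vel = world.components["velocity"][eid]
        pos["x"] += vel["dx"] * dt


world = World()
player = world.create_entity(position={"x": 0.0}, velocity={"dx": 2.0})
scenery = world.create_entity(position={"x": 5.0})  # no velocity: untouched
movement_system(world, dt=1.0)
print(world.components["position"][player]["x"])   # 2.0
print(world.components["position"][scenery]["x"])  # 5.0
```

Because data and behavior are kept apart, a non-programmer can add or retune entities purely through data, which is the kind of collaborative design space the thesis argues for.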
APA, Harvard, Vancouver, ISO, and other styles
35

Hervé, Yannick. "Mise en oeuvre et optimisation de l'utilisation d'un processeur quasi-systolique dans une chaine de traitement d'images orientee temps reel." Université Louis Pasteur (Strasbourg) (1971-2008), 1988. http://www.theses.fr/1988STR13043.

Full text
Abstract:
This thesis aims to deploy NCR's GAPP quasi-systolic processor, organized as an array of bit-serial elementary processors operating in SIMD mode, in a real-time-oriented image processing chain. It presents the modeling, simulation, and evaluation of the processor's input/output behavior, making it possible to exploit the processor to its full potential depending on the target application.
APA, Harvard, Vancouver, ISO, and other styles
36

Koutný, Jiří. "L systémy a jejich aplikace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235995.

Full text
Abstract:
This master's thesis describes deterministic context-free L-systems and their place in procedural modeling, especially fractal geometry, and deals with the rewriting technique and its use for modeling plant-like structures. It then describes more complex types of L-systems, especially their context-sensitive and parametric variants, shows the use of L-systems in computer graphics, and describes their use for procedural modeling of architecture. The thesis concludes with further possible applications of procedural modeling with L-systems and introduces some extensions of the rewriting rules that will be the subject of future research.
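The rewriting technique at the heart of such L-systems can be sketched in a few lines: a deterministic context-free (D0L) system rewrites every symbol of the current string in parallel using a fixed rule table. The sketch below uses Lindenmayer's classic algae system as an example; the function name is ours, not the thesis's.

```python
def rewrite(axiom, rules, steps):
    """Apply D0L-system productions in parallel for a number of steps.

    Every symbol is rewritten simultaneously; symbols without a rule
    are copied unchanged (the implicit identity rule).
    """
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s


# Lindenmayer's algae system: A -> AB, B -> A.
algae = {"A": "AB", "B": "A"}
print(rewrite("A", algae, 2))  # ABA
print(rewrite("A", algae, 5))  # ABAABABAABAAB
```

For graphics use, the derived string is typically fed to a turtle-graphics interpreter in which symbols such as `F`, `+`, `-`, `[`, `]` encode drawing and branching commands, which is how the plant-like structures mentioned above are produced.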
APA, Harvard, Vancouver, ISO, and other styles
37

Richet, Rémy. "High-resolution 3D stratigraphic modelling of the Gresse-en-Vercors Lower Cretaceous carbonate platform (SE France) : from digital outcrop modeling to carbonate sedimentary system characterization." Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10144/document.

Full text
Abstract:
Carbonate platforms are characterized by complex sedimentary and stratigraphic architectures that can be expressed at length scales exceeding single outcrops. This work focuses on the Barremian (Lower Cretaceous) deposits of the Gresse-en-Vercors cliff (southeastern France), which provide a seismic-scale slice through a platform margin, analogous to Middle East reservoirs, ideal for continuously studying large-scale carbonate platform development. The cliffs are 500 m high and extend for 25 km along depositional dip, straddling the transition from shallow-water platform to deeper basin. New biostratigraphic data show that the Vercors platform is mainly Lower Barremian. Four stratigraphic sequences were defined, with two complete platform stages separated by three drowning events. New high-resolution numerical data (a LIDAR point set and high-resolution georeferenced photos), obtained by helicopter survey, allowed the construction of a high-resolution 3D DEM over the entire outcrop. Integrating the stratigraphic observations and the DEM in gOcad results in a continuous 3D stratigraphic architecture and facies model of the carbonate outcrop that can be used for stratigraphic and sedimentological interpretations. The resulting geological model demonstrates that outcrop numerical data and 3D geological modeling are pertinent tools for improving carbonate outcrop characterization and conceptual models of carbonate platform systems. It makes it possible to establish subtle sedimentary profiles and a high-resolution facies mosaic along a seismic-scale platform trend. This approach is particularly critical for the 3D characterization of clinoforms and stratigraphic systems tracts in non-cylindrical carbonate systems: for example, an apparent lowstand wedge or distal onlapping lobes in 2D are in reality prograding highstand systems in 3D
APA, Harvard, Vancouver, ISO, and other styles
38

Schroeder, Greyce Nogueira. "Metodologia de modelagem e arquitetura de referência do Digital Twin em sistemas ciber físicos industriais usando AutomationML." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/182314.

Full text
Abstract:
With technological advances in the fields of hardware, microelectronics, and computer systems, Cyber-Physical Systems is a new concept that is gaining importance. These systems are integrations of computation, networking, and physical processes. Cyber-Physical Systems are one of the pillars of the new industrial revolution, which is marked by the complete decentralization of the control of production processes and by a proliferation of interconnected intelligent devices throughout the production and logistics chain. Embedded computers and networks monitor and control the physical processes, with feedback loops in which physical processes affect computations and vice versa. An industrial automation system, in which computational elements control and automate the execution of physical processes in industrial plants, is an example of a cyber-physical system. Thus, there is a clear need to relate physical objects to the information associated with them in the cyber world. To this end, this work uses the concept of the Digital Twin, a virtual representation of physical objects. The Digital Twin enables the virtualization of physical components and the decentralization of control. This study explores a generic and flexible modeling methodology for the Digital Twin using the AutomationML tool. This work also proposes a communication architecture for the exchange of data from the perspective of Cyber-Physical Systems. With the implementation of this methodology, we intend to validate the proposed concept and offer a modeling and configuration method to obtain data, extract knowledge, and provide visualization systems for users.
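As a loose illustration of the Digital Twin idea described above (purely illustrative names; the thesis's actual models are expressed in AutomationML, not Python), a twin can be seen as a virtual object that mirrors the reported state of its physical counterpart and exposes that state to cyber-side consumers:

```python
class DigitalTwin:
    """Virtual representation of a physical object (illustrative sketch).

    The twin stores the last reported state of its physical counterpart
    and keeps a history of snapshots, so visualization and analytics
    can query it without touching the device itself.
    """

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}
        self.history = []

    def update(self, **measurements):
        # Called whenever new sensor data arrives from the physical object.
        self.state.update(measurements)
        self.history.append(dict(self.state))

    def query(self, key, default=None):
        # Cyber-side read access to the mirrored state.
        return self.state.get(key, default)


motor = DigitalTwin("conveyor-motor-07")
motor.update(temperature_c=41.5, rpm=1480)
motor.update(rpm=1500)
print(motor.query("rpm"))            # 1500
print(motor.query("temperature_c"))  # 41.5
print(len(motor.history))            # 2
```

In the thesis's setting, the structure of such a twin (attributes, interfaces, topology) would come from an AutomationML model rather than being hand-coded, which is what makes the methodology generic and configurable.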
APA, Harvard, Vancouver, ISO, and other styles
39

Charoenvisal, Kongkun. "A BIM Interoperable Web-Based DSS for Vegetated Roofing System Selection." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51953.

Full text
Abstract:
There is a body of evidence indicating that the implementation of current Architecture, Engineering, and Construction (AEC) industry business models and practices has caused negative impacts on global energy supply, ecosystems, and local or regional economies. In order to eliminate such negative impacts, AEC practitioners are seeking new business models, in which Building Information Modeling (BIM) technology can be considered an important technology driver. Although the majority of AEC practitioners have used BIM tools for construction-level modeling purposes, some early adopters of BIM technology have begun to use BIM tools to better inform their design decisions. In response to the increasing demand for decision support functionality, a number of studies have indicated that part of BIM technology will develop toward the decision support and artificial intelligence domains. The use of computer-based systems to support decision-making processes is well established in the business management field, where decision support and business intelligence systems are widely used for improving the quality of managerial decisions. Given their theories and principles, Decision Support Systems (DSS) can be considered one of the information technologies with the potential to enhance the quality of design decisions. A DSS also has the potential to serve as a system platform that combines the building information contained in BIM models with other databases, analytical models, and expert knowledge used by AEC practitioners. This study explores an opportunity to extend the capability of BIM technology toward the decision support and artificial intelligence domains by applying the theories and principles of DSS. This research comprises the development of a prototype BIM-interoperable web-based DSS for vegetated roofing system selection. 
The prototype development can be considered part of an ongoing research agenda, conducted within the College of Architecture and Urban Studies (CAUS) at Virginia Tech, that focuses on the development of an integrated web-based DSS for holistic building design. Through a post-use interview study, the developed prototype is used as a tool for evaluating the feasibility of DSS development and the usefulness of the DSS in improving the quality of vegetated roofing system design decisions. The understanding gained from the post-use study is used to create a guideline for developing a fully functional DSS for holistic building design in the future.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
40

Akbiyik, Eren Kocak. "Service Oriented System Design Through Process Decomposition." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609884/index.pdf.

Full text
Abstract:
Although service-oriented architecture has reached a particular level of maturity, especially in the technological dimension, there is a lack of a common, accepted approach to designing a software system through the composition and integration of web services. In this thesis, a service-oriented system design approach for Service Oriented Architecture based software development is introduced to fill this gap. This new methodology offers a procedural top-down decomposition of a given software system, allowing several abstraction levels. At the higher levels of the decomposition, the system is divided into abstract nodes that correspond to process models in the decomposition tree. Each node is a process and keeps the sequence and state information for its possible sub-processes in this decomposition tree. Nodes defined as process models may include sub-nodes that present details at the intermediate levels of the model. Eventually, at the leaf level, process models are decomposed into existing web services as the atomic units of system execution. All processes constituting the system decomposition tree are modeled with BPEL (Business Process Execution Language) to expose the algorithmic details of the design. This modeling technique is also supported by a graphical modeling language referred to as SOSEML (Service Oriented Software Engineering Modeling Language), which is likewise newly introduced in this thesis.
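The top-down decomposition described above can be sketched as a simple tree walk (illustrative names only, not the thesis's notation): inner nodes are processes that keep the ordering of their sub-processes, and leaves stand for existing web services, the atomic units of execution.

```python
class ServiceLeaf:
    """Leaf of the decomposition tree: an existing web service."""

    def __init__(self, name):
        self.name = name

    def execute(self, log):
        log.append(self.name)  # stand-in for an actual web-service call


class ProcessNode:
    """Inner node: a process holding the sequence of its sub-processes."""

    def __init__(self, name, children):
        self.name = name
        self.children = children  # ordered, mirroring the BPEL sequence

    def execute(self, log):
        for child in self.children:
            child.execute(log)


# A toy order-handling system decomposed over two abstraction levels.
system = ProcessNode("handle_order", [
    ProcessNode("check", [ServiceLeaf("validate_customer"),
                          ServiceLeaf("check_stock")]),
    ServiceLeaf("ship_goods"),
])

trace = []
system.execute(trace)
print(trace)  # ['validate_customer', 'check_stock', 'ship_goods']
```

In the thesis itself each inner node would be a BPEL process rather than a Python object, but the structural idea is the same: refining abstract nodes level by level until every leaf is bound to a concrete web service.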
APA, Harvard, Vancouver, ISO, and other styles
41

Carrillo, Rozo Oscar. "Formal and incremental verification of SysML for the design of component-based system." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2017/document.

Full text
Abstract:
Formal and Incremental Verification of SysML Specifications for the Design of Component-Based Systems. The work presented in this thesis is a contribution to the specification and verification of Component-Based Systems (CBS) modeled in SysML. CBS are widely used in industry, and they are built by assembling various reusable components, allowing complex systems to be developed at lower cost. Despite the success of CBS, their design is an increasingly complex step that requires more rigorous approaches. To ease communication between the various stakeholders in a CBS development project, one of the most widely used modeling languages is SysML, which, besides allowing the modeling of structure and behavior, also provides capabilities for modeling requirements. It offers a standard for modeling, specifying, and documenting systems, in which a system can be developed from an abstract level down to more detailed levels that may lead to an implementation. In this context, we have dealt mainly with two issues. The first concerns the development by refinement of a CBS that is described only by its SysML interfaces and behavior protocols. Our contribution allows the CBS designer to formally ensure that a composition of a set of elementary, reusable components refines an abstract specification of a CBS. In this contribution, we use the tools Ptolemy, for verifying the compatibility of the assembled components, and MIO Workbench, for refinement verification. The second concerns the difficulty of deciding what to build and how to build it, considering only the system requirements and the reusable components. The question that arises is therefore: how can a CBS architecture be specified that satisfies all system requirements? 
We propose a formal and incremental verification approach based on SysML models and interface automata to guide the CBS designer, driven by the requirements, in defining a coherent system architecture that satisfies all proposed SysML requirements. In this approach we use the SPIN model checker and LTL properties to specify and verify requirements. Keywords: {Modeling, SysML specifications, CBS architecture, Refinement, Compatibility, Requirements, LTL properties, Promela/SPIN, Ptolemy, MIO Workbench}
APA, Harvard, Vancouver, ISO, and other styles
42

Nordström, Lars. "Use of the CIM framework for data management in maintenance of electricity distribution networks." Doctoral thesis, KTH, Industriella informations- och styrsystem, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3985.

Full text
Abstract:
Aging infrastructure and personnel, combined with stricter financial constraints has put maintenance, or more popular Asset Management, at the top of the agenda for most power utilities. At the same time the industry reports that this area is not properly supported by information systems. Today’s power utilities have very comprehensive and complex portfolios of information systems that serve many different purposes. A common problem in such heterogeneous system architectures is data management, e.g. data in the systems do not represent the true status of the equipment in the power grid or several sources of data are contradictory. The research presented in this thesis concerns how this industrial problem can be better understood and approached by novel use of the ontology standardized in the Common Information Model defined in IEC standards 61970 & 61968. The theoretical framework for the research is that of data management using ontology based frameworks. This notion is not new, but is receiving renewed attention due to emerging technologies, e.g. Service Oriented Architectures, that support implementation of such ontological frameworks. The work presented is empirical in nature and takes its origin in the ontology available in the Common Information Model. The scope of the research is the applicability of the CIM ontology, not as it was intended i.e. in systems integration, but for analysis of business processes, legacy systems and data. The work has involved significant interaction with power distribution utilities in Sweden, in order to validate the framework developed around the CIM ontology. Results from the research have been published continuously, this thesis consists of an introduction and summary and papers describing the main contribution of the work. The main contribution of the work presented in this thesis is the validation of the proposition to use the CIM ontology as a basis for analysis existing legacy systems. 
By using the data models defined in the standards and combining them with established modeling techniques, we propose a framework for information system management. The framework is appropriate for analyzing data quality problems related to power system maintenance at power distribution utilities. As part of validating the results, the proposed framework has been applied in a case study involving medium-voltage overhead line inspection. In addition to this main contribution, a classification of the state of practice in information system support for power system maintenance at utilities has been created. The work also includes an analysis and classification of how high-performance wide-area communication technologies can be used to improve power system maintenance, including improving data quality.
APA, Harvard, Vancouver, ISO, and other styles
43

Albertao, Gilberto. "Control of the submarine palaeotopography on the turbidite system architecture : an approach combining structural restorations and sedimentary process-based numerical modeling, applied to a Brazilian offshore case study." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14064/document.

Full text
Abstract:
The dynamics of turbidity currents are strongly controlled by seafloor morphology. The turbidites deposited by these currents constitute very important hydrocarbon reservoirs in sedimentary basins around the world. The main objective of this work is to understand how the palaeorelief controlled the geometry and architecture of turbidite reservoirs, using as study area the Cretaceous reservoirs of an oil field in the Campos Basin (offshore Brazil), where tectonics was partly dominated by halokinesis. The methodology used in this thesis coupled two approaches. The first included both the description of the sedimentary sequences, from seismic-reflection and well data, and structural restorations. Six regional horizons and four reservoir units were identified and mapped in order to build a multi-2D geological model. These surfaces were then also restored. The results of this step suggest that halokinesis-related faults constrained the palaeotopography for the deposition of the older reservoirs, and that tectonic structures and a canyon formed the palaeotopographic constraints for the distribution of the younger reservoirs. The second approach analysed the role of flow parameters by performing numerical simulations of the stratigraphic (Dionisos) and cellular-automata (CATS) types. A restored surface, considered as the reference for the deposition of the reservoir units, was used as the palaeotopography for the CATS simulations. The numerical model was constrained by the reservoir data. This novel use of 3D cellular-automata simulations in a real case study of ancient marine deposits produced realistic results compared with known modern examples.
It also provided results more usable at reservoir scale than "stratigraphic" numerical models. This work highlights the importance of tectonics-sedimentation interactions and of palaeotopography for the distribution of turbidite reservoirs.
The dynamics of gravity-driven turbidity currents are strongly influenced by the morphology of the seafloor. The resulting turbidites constitute important hydrocarbon reservoirs in sedimentary basins throughout the world. The main objective of the present work is thus to understand the way the palaeorelief controls turbidite reservoir architectures, with application in a specific study area with Cretaceous reservoirs in the Campos Basin (Brazilian offshore). Tectonics in this basin was partly controlled by halokinesis. The first approach was to describe the local Cretaceous sedimentary sequence architecture, from seismic and well data, and to perform structural restorations. Six regional horizons and four reservoir-scale units were identified and mapped in order to build a multi-2D geological model. Structural restorations highlighted the structural evolution and allowed the related horizon palaeotopography to be obtained. The results of this work step suggest that the halokinesis-related listric faults regulated the distribution of the basal reservoirs. Moreover, at the top of the Albian carbonates, a canyon was identified, which, in association with the tectonic structures, forms the palaeotopographic constraints for the upper reservoir geometry. The second approach was to analyze the role of flow-controlling parameters by performing stratigraphic (Dionisos) and cellular automata-based (CATS) numerical simulations. The latter provided a more appropriate reservoir-scale simulation process than Dionisos. A restored surface, considered as the reference for the deposition of the reservoir units, was used as the palaeotopography for the CATS simulations, with the reservoir data as constraints. This pioneering use of cellular automata simulations in a real subsurface case study produced coherent results when compared with the actual reservoir distribution.
This work sheds light on the importance of tectonic-sedimentation interactions and of palaeotopography for the distribution of turbidite reservoirs.
APA, Harvard, Vancouver, ISO, and other styles
44

Cubero-Castan, Michel. "Vers une définition méthodique d'architecture de calculateur pour l'exécution parallèle des langages fonctionnels." Toulouse 3, 1988. http://www.theses.fr/1988TOU30159.

Full text
Abstract:
Definition of a machine offering three essential qualities: efficiency, simplicity, and uniformity. The main stages of this design are presented: the characterisation of the basic mechanisms; the definition of the execution model with respect to the architecture; and the elements of evaluation.
APA, Harvard, Vancouver, ISO, and other styles
45

Shah, Anuj P. "Analysis of transformations to socio-technical systems using agent based modeling and simulation." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/29399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Uddin, Amad. "Development of an integrated interface modelling methodology to support system architecture analysis." Thesis, University of Bradford, 2016. http://hdl.handle.net/10454/15905.

Full text
Abstract:
This thesis presents the development and validation of a novel interface modelling methodology integrated with a system architecture analysis framework that emphasises the need to manage, in a structured manner, the integrity of deriving and allocating requirements across multiple levels of abstraction. The state-of-the-art review in this research shows that there is no shared or complete interface definition model that could integrate diverse interaction viewpoints for defining system requirements with complete information. Furthermore, while existing system modelling approaches define system architecture with functions and their allocation to subsystems to meet system requirements, they do not robustly address the importance of considering well-defined interfaces in an integrated manner at each level of the systems hierarchy. This results in decomposition and integration issues across the multiple levels of the systems hierarchy. Therefore, this thesis develops and validates the following:
- an Interface Analysis Template, a systematic tool that integrates diverse interaction viewpoints for modelling system interfaces with comprehensive information for deriving requirements;
- a Coupling Matrix, an architecture analysis framework that not only allocates functions to subsystems to meet requirements but also promotes consistent consideration of well-defined interfaces at each level of the design hierarchy.
Insights from the validation of the developed approach through engineering case studies within an automotive OEM are discussed, reflecting on the effectiveness, efficiency, and usability of the methods.
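As a loose illustration of the idea behind such an allocation analysis (not the thesis's actual Coupling Matrix), a toy function-to-subsystem table can be scanned for the subsystem interfaces it implies: any function realised by two subsystems crosses a boundary that needs an explicit interface definition. All names below are invented:

```python
# Hypothetical example: function-to-subsystem allocation with a check
# that flags every subsystem pair sharing a function boundary, i.e. the
# interfaces that need explicit definition at this level of hierarchy.
allocation = {  # function -> subsystems that realise it (invented)
    "sense_speed":    ["sensor_cluster"],
    "compute_torque": ["controller"],
    "apply_torque":   ["controller", "motor_drive"],
    "report_status":  ["controller", "telemetry"],
}

interfaces = set()
for func, subsystems in allocation.items():
    # a function spanning two subsystems implies an interface between them
    for i, a in enumerate(subsystems):
        for b in subsystems[i + 1:]:
            interfaces.add((a, b))

for pair in sorted(interfaces):
    print("interface needed:", pair)
```

In a full coupling-matrix treatment each flagged pair would then be described with the integrated interface viewpoints before decomposing to the next level.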
APA, Harvard, Vancouver, ISO, and other styles
47

Ajemian, Stephen P. "Modeling and evaluation of aerial layer communications system architectures." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90705.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 90-91).
Airborne networks are being developed to provide communications services that augment space-based and terrestrial communications systems. These airborne networks must provide point-to-point wireless communications between aircraft and to ground-based users. Architecting airborne networks requires evaluating the capabilities offered by candidate aircraft to operate at the required altitudes to bridge communications among ground users dispersed over large geographic areas. Decision makers are often faced with choices regarding the type and number of aircraft to utilize in an airborne network to meet information exchange requirements. In addition, the type of radio required to meet user needs may also factor into the architecture evaluation for an airborne network. Aircraft and radio design choices must be made under cost constraints in order to deliver capable communications architectures at an acceptable cost. Evaluating communications architectures is often conducted with modeling and simulation. However, evaluations typically focus on specific network configurations and can become intractable when varying design variables such as aircraft and radio types, due to the complexity of the trade space being analyzed. Furthermore, growth in the choices for design variables (such as additional aircraft types) can lead to an explosion in the number of feasible candidate architectures to analyze. The methodology developed and presented herein describes an approach for evaluating a large number of architecture combinations which vary on aircraft type and radio type for representative airborne networks. The methodology utilizes modeling and simulation to generate wireless communications performance data for candidate aircraft and radio types and enumerates a large trade space through a computational tool.
The trade space is then evaluated against a multi-objective decision model to rapidly down-select to a handful of candidate architectures for more detailed analysis. The results of this analysis provide effective tools for reducing the complex trade space to a tractable number of architectures, enabling an informed architectural decision with no prior articulation of preferences for performance measures. For the notional concept of operations analyzed, the number of feasible architectures was approximately 500,000 for each of the two radio types examined. The decision model implemented reduced the feasible architectures to approximately 50 near-optimal architectures for each radio type. From this manageable set of near-optimal architectures, an analysis of marginal benefit versus cost further reduces the candidates to three architectures for each radio type. From these remaining architectures, detailed analysis and visualization can be conducted to aid decision makers in articulating preferences and identifying a single "best" architecture based on mission needs. The enumeration of the trade space using the computational tool and multi-objective decision model readily accommodates new constraints and generates new candidate architectures as stakeholder preferences become clearer. The trade space enumeration and decision model can be run rapidly to down-select large trade spaces to a tractable number of communications architectures to inform an architectural recommendation.
by Stephen P. Ajemian.
S.M. in Engineering and Management
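The down-select pattern the abstract describes, enumerating a trade space and keeping only non-dominated candidates under multiple objectives, can be sketched with a toy Pareto filter. This is a minimal illustration under invented numbers, not the thesis's actual model or data (aircraft names, radio types, and per-unit coverage/cost figures are all hypothetical):

```python
# Toy trade-space enumeration with a Pareto filter over two objectives:
# coverage (maximize) and cost (minimize). All figures are invented.
from itertools import product

aircraft = {"UAV": (1.0, 5.0), "biz_jet": (3.0, 20.0)}   # (coverage, cost) per unit
radios = {"narrowband": (0.5, 1.0), "wideband": (2.0, 4.0)}

candidates = []
for (ac, (cov_a, cost_a)), n, (rd, (cov_r, cost_r)) in product(
        aircraft.items(), range(1, 5), radios.items()):
    candidates.append({"aircraft": ac, "count": n, "radio": rd,
                       "coverage": n * (cov_a + cov_r),
                       "cost": n * (cost_a + cost_r)})

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and better on one
    return (a["coverage"] >= b["coverage"] and a["cost"] <= b["cost"]
            and (a["coverage"] > b["coverage"] or a["cost"] < b["cost"]))

pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o is not c)]
```

At the thesis's scale (~500,000 feasible architectures per radio type) the same non-domination idea applies, though the pairwise check here would be replaced by a more efficient sort-based sweep.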
APA, Harvard, Vancouver, ISO, and other styles
48

TEDJINI-BAILICHE, HACENE. "Systemes multiprocesseurs dedies au traitement d'images : approches architecturales, modelisation, evaluation de performances : application a la conception d'un systeme multiprocesseur." Université Louis Pasteur (Strasbourg) (1971-2008), 1989. http://www.theses.fr/1989STR13069.

Full text
Abstract:
Design and implementation of a multiprocessor suited to image processing. This multiprocessor is organised around a fast shared bus and comprises several specialised processors that can communicate through a common memory. These processors cover the most widely used classes of operators, relating to point, local, global, and statistical transformations.
APA, Harvard, Vancouver, ISO, and other styles
49

Albalawi, Rania. "Toward a Real-Time Recommendation for Online Social Networks." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42255.

Full text
Abstract:
The Internet increases the demand for the development of commercial applications and services that can provide better shopping experiences for customers globally. It is full of information and knowledge sources that might confuse customers, requiring them to spend additional time and effort when trying to find relevant information about specific topics or objects. Recommendation systems are considered an important method for solving this issue. Incorporating recommendation systems into online social networks has led to a specific kind of recommendation system, called social recommendation systems, which have become popular with the global explosion of social media and online networks; they apply many prediction algorithms, such as data mining techniques, to address the problem of information overload and to analyze vast amounts of data. We believe that offering a real-time social recommendation system that can dynamically understand the real context of a user's conversation is essential to identifying and recommending interesting objects at the ideal time. In this thesis, we propose an architecture for a real-time social recommendation system that aims to improve word usage and understanding in social media platforms, advance the performance and accuracy of recommendations, and propose a possible solution to the user cold-start problem. Moreover, we aim to find out whether the user's social context can be used as an input source to offer personalized and improved recommendations that help users find valuable items immediately, without interrupting their conversation flow. The suggested architecture works as a third-party social recommendation system that could be incorporated with other existing social networking sites (e.g. Facebook and Twitter).
The novelty of our approach is the dynamic understanding of user-generated content, achieved by detecting topics in the user's extracted dialogue and then matching them with an appropriate task as a recommendation. Topic extraction is done through a modified Latent Dirichlet Allocation topic modeling method. We also develop a social chat app as a proof of concept to validate the proposed architecture. The results for the proposed architecture show promising gains in enhancing real-time social recommendations.
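The abstract mentions topic extraction via a modified Latent Dirichlet Allocation method. As a rough illustration of plain, unmodified LDA only (the thesis's modifications are not reproduced here), the following toy collapsed Gibbs sampler assigns chat messages to topics; all messages and hyperparameters are invented:

```python
# Toy collapsed Gibbs sampler for vanilla LDA over short chat messages.
# Illustrative only: real systems use tuned, library-grade implementations.
import random
from collections import defaultdict

random.seed(0)

messages = [
    "laptop processor screen laptop deal",
    "cheap laptop fast processor",
    "flight hotel trip booking",
    "cheap flight coast trip",
]
docs = [m.split() for m in messages]
vocab = sorted({w for d in docs for w in d})
K, alpha, beta, iters = 2, 0.1, 0.01, 200

# z[d][i] = topic of word i in doc d; counts drive the Gibbs conditionals
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]               # topic counts per document
nkw = [defaultdict(int) for _ in range(K)]  # word counts per topic
nk = [0] * K
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = z[d][i]
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

for _ in range(iters):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            # P(topic | everything else), up to a constant
            weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta)
                       / (nk[k] + beta * len(vocab)) for k in range(K)]
            t = random.choices(range(K), weights=weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

# dominant topic per message, and top words per topic
doc_topics = [max(range(K), key=lambda k: ndk[d][k]) for d in range(len(docs))]
top_words = {k: sorted(nkw[k], key=nkw[k].get, reverse=True)[:3] for k in range(K)}
```

In the thesis's architecture the detected topics would then be matched to a recommendable task in real time, rather than merely printed.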
APA, Harvard, Vancouver, ISO, and other styles
50

Jonsson, Kerstin. "Systemmetaforik : Språk och metafor som verktyg i systemarkitektens praktik." Thesis, Södertörns högskola, Institutionen för kultur och lärande, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-24251.

Full text
Abstract:
A system architect's practice largely consists of interpreting, describing, and structuring business processes and information as a basis for change and development work, usually supported by IT systems. The profession is traditionally regarded as a technical engineering discipline. But the problems I face as an architect are not only about designing technical systems and communication between machines, but just as often about handling challenges related to interpersonal communication in complex situations. What happens if we focus on this other part of the architect's practical knowledge? This master's essay deals with the role of language and communication in the context of a system development project. The author uses metaphors in a narrative, fictional context as a creative method for visualising and conveying different aspects of the system architect's professional role and practice. In this way the text exploits the more experimental form offered by the essay, also exploring its own possibilities of expression. The theoretical material of the essay is based on the philosophy-of-language tradition developed by Ludwig Wittgenstein and Gilbert Ryle. Building on the works of these two thinkers, the author discusses the importance of language and contextual understanding for the system architect's practical knowledge. The essay also weaves in ideas from Thomas Kuhn, Peter Naur, and Donald Schön in order to explore the role of metaphor, improvisation, and creative communication as tools in the system architect's practice.
The system architect's practice is mainly about interpreting, describing, and structuring the processes and information of an enterprise in order to create a foundation for change and development, often supported by IT systems. The profession is traditionally regarded as an art of technical engineering. But the problems I face as an architect are not exclusively about designing technical systems and communication between machines, but just as much about handling challenges related to inter-subjective communication between human beings in situations of complex interaction. What happens if we focus on this second aspect of the practical knowledge of the architect? This essay is about the role of language and communication in the context of a system development project. The author uses metaphors in a fictional context as a creative method to visualize and mediate different aspects of the architect's professional role and practice. In that sense the text utilizes the more experimental form offered by the essay in order to explore its own expressive possibilities. The theoretical material of this essay is based on the language-philosophical tradition developed by Ludwig Wittgenstein and Gilbert Ryle. Starting out from these two thinkers, the author reasons about the importance that language and contextual understanding have for the practical knowledge of the system architect. Further on, the essay weaves in thoughts from Thomas Kuhn, Peter Naur, and Donald Schön with the purpose of exploring the role of the metaphor, improvisation, and creative communication as tools in the practice of the system architect.
APA, Harvard, Vancouver, ISO, and other styles