Dissertations on the topic "Model at runtime"

To view other types of publications on this topic, follow the link: Model at runtime.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Model at runtime".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, where these are available in the record's metadata.

Browse dissertations across a wide range of disciplines and compile your bibliography correctly.

1

Werner, Christopher, Hendrik Schön, Thomas Kühn, Sebastian Götz, and Uwe Aßmann. "Role-based Runtime Model Synchronization." IEEE, 2018. https://tud.qucosa.de/id/qucosa%3A75310.

Abstract:
Model-driven Software Development (MDSD) promotes the use of multiple related models to realize a software system systematically. These models usually contain redundant information but are independently edited. This easily leads to inconsistencies among them. To ensure consistency among multiple models, model synchronizations have to be employed, e.g., by means of model transformations, trace links, or triple graph grammars. Model synchronization poses three main problems for MDSD. First, classical model synchronization approaches have to be manually triggered to perform the synchronization. However, to support the consistent evolution of multiple models, it is necessary to immediately and continuously update all of them. Second, synchronization rules are specified at design time and, in classic approaches, cannot be extended at runtime, which is necessary if metamodels evolve at runtime. Finally, most classical synchronization approaches focus on bilateral model synchronization, i.e., the synchronization between two models. Consequently, for more than two models, they require the definition of pairwise model synchronizations leading to a combinatorial explosion of synchronization rules. To remedy these issues, we propose a role-based approach for runtime model synchronization. In particular, we propose role-based synchronization rules that enable the immediate and continuous propagation of changes to multiple interrelated models (and back again). Additionally, our approach permits adding new and customized synchronization rules at runtime. We illustrate the benefits of role-based runtime model synchronization using the Families to Persons case study from the Transformation Tool Contest 2017.
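The immediate, continuous propagation described in this abstract can be pictured with a small sketch. The following Python fragment is an illustrative approximation under invented names, not the authors' implementation: two models observe each other, a synchronization rule propagates a change the moment it happens, and further rules can be registered while the system runs.

```python
class Model:
    """A trivial model: named elements with attribute dictionaries."""
    def __init__(self, name):
        self.name = name
        self.elements = {}
        self.observers = []  # synchronization rules, attachable at runtime

    def set(self, element, attribute, value):
        self.elements.setdefault(element, {})[attribute] = value
        for rule in self.observers:       # immediate propagation
            rule(self, element, attribute, value)

def make_sync_rule(target, rename=None):
    """Build a rule that mirrors changes into `target`; `rename` maps
    attribute names between the two metamodels."""
    rename = rename or {}
    def rule(source, element, attribute, value):
        mapped = rename.get(attribute, attribute)
        # Guard against infinite ping-pong between mutually observing models.
        if target.elements.get(element, {}).get(mapped) != value:
            target.set(element, mapped, value)
    return rule

families = Model("Families")
persons = Model("Persons")
# Rules can be added, replaced, or customized while the system is running.
families.observers.append(make_sync_rule(persons, rename={"member": "fullName"}))
persons.observers.append(make_sync_rule(families, rename={"fullName": "member"}))

families.set("f1", "member", "Alice")
print(persons.elements)   # {'f1': {'fullName': 'Alice'}}
```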
2

Saller, Karsten. "Model-Based Runtime Adaptation of Resource Constrained Devices." PhD thesis, Universitäts- und Landesbibliothek Darmstadt, 2015. https://tuprints.ulb.tu-darmstadt.de/4322/1/thesis_final_ULB.pdf.

Abstract:
Dynamic Software Product Line (DSPL) engineering represents a promising approach for planning and applying runtime reconfiguration scenarios to self-adaptive software systems. Reconfigurations at runtime allow those systems to continuously adapt themselves to ever-changing contextual requirements. With a systematic engineering approach such as DSPLs, a self-adaptive software system becomes more reliable and predictable. However, applying DSPLs in the vital domain of highly context-aware systems, e.g., mobile devices such as smartphones or tablets, is obstructed by the inherently limited resources. Therefore, mobile devices are not capable of handling large, constrained (re-)configuration spaces of complex self-adaptive software systems. The reconfiguration behavior of a DSPL is specified via so-called feature models. However, the derivation of a reconfiguration based on a feature model (i) induces computational costs and (ii) utilizes the available memory. To tackle these drawbacks, I propose a model-based approach for designing DSPLs in a way that allows for a trade-off between pre-computation of reconfiguration scenarios at development time and on-demand evolution at runtime. In this regard, I intend to shift computational complexity from runtime to development time. Therefore, I propose the following three techniques for (1) enriching feature models with context information to reason about potential contextual changes, (2) reducing a DSPL specification w.r.t. the individual characteristics of a mobile device, and (3) specifying a context-aware reconfiguration process on the basis of a scalable transition system incorporating state space abstractions and incremental refinements at runtime. In addition to these optimization steps executed prior to runtime, I introduce a concept for (4) reducing the operational costs utilized by a reconfiguration at runtime on a long-term basis w.r.t. the DSPL transition system deployed on the device. To realize this concept, the DSPL transition system is enriched with non-functional properties, e.g., costs of a reconfiguration, and behavioral properties, e.g., the probability of a change within the contextual situation of a device. This provides the possibility to determine reconfigurations with minimum costs w.r.t. estimated long-term changes in the context of a device. The concepts and techniques contributed in this thesis are illustrated by means of a mobile device case study. Further, implementation strategies are presented and evaluated considering different trade-off metrics to provide detailed insights into benefits and drawbacks.
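Technique (1) above, enriching feature models with context information, can be illustrated with a toy sketch. Everything below (feature names, condition encoding) is an invented example rather than the thesis' notation: each feature carries a context condition, and a valid configuration is derived for a given context — the kind of computation the approach shifts to development time.

```python
# Hypothetical sketch: a feature is active only if its parent is active
# and its context condition holds in the current context.
features = {
    # feature: (parent, context_condition)
    "Navigation":  (None,         lambda ctx: True),
    "GPS":         ("Navigation", lambda ctx: ctx["outdoor"]),
    "WifiLocator": ("Navigation", lambda ctx: not ctx["outdoor"]),
    "HdMaps":      ("GPS",        lambda ctx: ctx["memory_mb"] >= 512),
}

def derive_configuration(context):
    """Derivation that could be pre-computed per anticipated context."""
    active = set()
    for feature in features:  # parents are listed before their children
        parent, condition = features[feature]
        if (parent is None or parent in active) and condition(context):
            active.add(feature)
    return active

print(derive_configuration({"outdoor": True,  "memory_mb": 256}))
# activates Navigation and GPS (not enough memory for HdMaps)
print(derive_configuration({"outdoor": False, "memory_mb": 1024}))
# activates Navigation and WifiLocator
```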
3

Mendonça, Danilo Filgueira. "Dependability verification for contextual/runtime goal modelling." Repositório Institucional da UnB, 2015. http://dx.doi.org/10.26512/2015.02.D.18158.

Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015.
A static and stable operating environment is not a reality for many systems nowadays. Context variations impose many threats to system safety, including the activation of context-specific failures. Goal-oriented requirements engineering (GORE) brings forward the 'why' of system requirements, i.e., the intentionality behind system goals and the means to meet them. A runtime goal model adds a behaviour specification layer to a conventional design goal model, and a contextual goal model specifies the context effects over system goals, means and qualitative metrics. In order to formally verify the dependability of a Contextual-Runtime Goal Model (CRGM), we propose a new goal-oriented dependability analysis based on the probabilistic model checking technique. In particular, we define rules for the transformation of a CRGM into a discrete-time Markov chain (DTMC) model that can be verified for the reliability of the fulfilment of one or more system goals. Also, to mitigate the analysis overhead and increase the usability of our proposal, we have successfully implemented and integrated a CRGM-to-DTMC code generator into the graphical tool that supports the goal modelling and analysis phases of the TROPOS development methodology. The resulting contextual dependability verification reflects the system requirements in a CRGM, which may represent: a system-to-be, whose verification would take place at design time; or a running system, whose behaviour can be verified at runtime as part of a self-adaptation analysis targeting dependability.
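The CRGM-to-DTMC idea can be made concrete with a toy reliability check. The sketch below is an assumed illustration, not the generated code of the thesis: it encodes a small discrete-time Markov chain in which a goal is attempted through two alternative means and computes the probability of eventually fulfilling the goal — the kind of property a probabilistic model checker such as PRISM verifies.

```python
# Toy DTMC: reliability of fulfilling a goal via two alternative means
# (illustrative states and numbers; a CRGM transformation would generate these).
transitions = {
    "TryMeans1": {"Fulfilled": 0.80, "TryMeans2": 0.20},
    "TryMeans2": {"Fulfilled": 0.70, "Denied": 0.30},
    "Fulfilled": {"Fulfilled": 1.0},   # absorbing: goal satisfied
    "Denied":    {"Denied": 1.0},      # absorbing: goal failed
}

def reachability(target, iterations=1000):
    """P(eventually reach `target`) per state, by value iteration."""
    prob = {s: (1.0 if s == target else 0.0) for s in transitions}
    for _ in range(iterations):
        for state, successors in transitions.items():
            if state != target:
                prob[state] = sum(p * prob[t] for t, p in successors.items())
    return prob

print(reachability("Fulfilled")["TryMeans1"])  # 0.8 + 0.2 * 0.7 = 0.94
```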
4

Jäkel, Tobias, Martin Weißbach, Kai Herrmann, Hannes Voigt, and Max Leuthäuser. "Position paper: Runtime Model for Role-based Software Systems." IEEE, 2016. https://tud.qucosa.de/id/qucosa%3A75302.

Abstract:
In the increasingly dynamic realities of today's software systems, it is no longer feasible to always expect human developers to react to changing environments and changing conditions immediately. Instead, software systems need to be self-aware and autonomously adapt their behavior according to the experiences gathered from their environment. Current research provides role-based modeling as a promising approach to handle the adaptivity and self-awareness within a software system. There are established role-based systems, e.g., for application development, persistence, and so on. However, these are isolated approaches using the role-based model on their specific layer and mapping to existing non-role-based layers. We present a global runtime model covering the whole stack of a software system to maintain a global view of the current system state and model the interdependencies between the layers. This facilitates building holistic role-based software systems using the role concept on every single layer to exploit its full potential, particularly adaptivity and self-awareness.
5

Nödtvedt, Sebastian. "CM model view transformations: To support runtime forward/backward compatibility." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-442392.

Abstract:
This thesis addresses the task of handling updates and version discrepancies within a testbed's Configuration Management. The Ericsson 5G testbed is built to support the deployment of higher-layer functions in a cloud environment. The main benefit of cloud deployments is that they enable elastic applications that can grow and shrink their footprint at runtime to adjust capacity to the traffic load. Schema data associated with different versions of a document-oriented database within a cloud environment provides dynamic properties, but updating parts of the system remains static and cumbersome. If this can be resolved, newer versions of functions can be instantiated at runtime in parallel with older versions, which can partially remove the need for application and system upgrades. However, this puts completely new demands on the architecture and on how support functions are designed. One such support function is the configuration management function, which in the current 5G testbed system is seen as an infrastructure function that can be replaced and upgraded independently of other running traffic applications. This requires handling of forward and backward compatibility between the configuration management function and the traffic functions that consume the configuration data. In this report a prototype was constructed and tested; it consists of two core components. First, a Wizard takes two different versions of a model and generates a transformation schema; this is then passed to the Transformation, which performs the data transformation needed for compatibility. The Wizard starts by ensuring that the required data is compatible, and additionally acts as an interactive tool for an operator, providing an overview of and insight into the data transformation. A proof-of-concept solution was successfully implemented and demonstrated, with its inherent limitations taken into account in the design. In conclusion, a feasible solution for resolving version management within a system like the 5G testbed can be implemented, replacing an otherwise slow and error-prone manual process.
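A minimal sketch of the two components described above, under the simplifying assumption that model versions can be represented as flat field-to-default dictionaries (the actual prototype is not described in that detail here), might look as follows:

```python
def make_schema(old_model, new_model):
    """Wizard: compare two model versions and emit a transformation schema."""
    schema = {"added": {}, "removed": []}
    for field, default in new_model.items():
        if field not in old_model:
            schema["added"][field] = default   # default fills forward upgrades
    for field in old_model:
        if field not in new_model:
            schema["removed"].append(field)
    return schema

def transform(data, schema, direction="forward"):
    """Transformation: adapt configuration data between versions."""
    data = dict(data)
    if direction == "forward":        # data for an old consumer -> new model
        for field, default in schema["added"].items():
            data.setdefault(field, default)
        for field in schema["removed"]:
            data.pop(field, None)
    else:                             # backward: new data -> old consumer
        for field in schema["added"]:
            data.pop(field, None)
    return data

old = {"bandwidth": 100, "legacy_mode": True}
new = {"bandwidth": 100, "max_sessions": 8}
schema = make_schema(old, new)
print(transform({"bandwidth": 200, "legacy_mode": False}, schema))
# {'bandwidth': 200, 'max_sessions': 8}
```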
6

Kotrajaras, Vishnu. "Towards an improved memory model for Java." Thesis, Imperial College London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272386.

7

Zhang, Minjia. "Efficient Runtime Support for Reliable and Scalable Parallelism." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1469557197.

8

Vogel, Thomas, and Holger Giese. "Model-driven engineering of adaptation engines for self-adaptive software : executable runtime megamodels." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6382/.

Abstract:
The development of self-adaptive software requires the engineering of an adaptation engine that controls and adapts the underlying adaptable software by means of feedback loops. The adaptation engine often describes the adaptation by using runtime models representing relevant aspects of the adaptable software and particular activities such as analysis and planning that operate on these runtime models. To systematically address the interplay between runtime models and adaptation activities in adaptation engines, runtime megamodels have been proposed for self-adaptive software. A runtime megamodel is a specific runtime model whose elements are runtime models and adaptation activities. Thus, a megamodel captures the interplay between multiple models and between models and activities as well as the activation of the activities. In this article, we go one step further and present a modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that considerably eases the development of adaptation engines by following a model-driven engineering approach. We provide a domain-specific modeling language and a runtime interpreter for adaptation engines, in particular for feedback loops. Megamodels are kept explicit and alive at runtime and by interpreting them, they are directly executed to run feedback loops. Additionally, they can be dynamically adjusted to adapt feedback loops. Thus, EUREMA supports development by making feedback loops, their runtime models, and adaptation activities explicit at a higher level of abstraction. Moreover, it enables complex solutions where multiple feedback loops interact or even operate on top of each other. Finally, it leverages the co-existence of self-adaptation and off-line adaptation for evolution.
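The core idea — a megamodel that is kept explicit and interpreted at runtime — can be approximated in a few lines. The sketch below is a loose analogue with invented activity names, not the EUREMA language itself: the feedback loop is plain data that an interpreter walks, so it can be inspected and rewired while the system runs.

```python
# Runtime models shared by the activities of the feedback loop.
runtime_models = {"monitored": {"load": 0.95}, "plan": None}

def analyze(models):
    return models["monitored"]["load"] > 0.8   # True -> adaptation needed

def plan(models):
    models["plan"] = {"action": "add_server"}

def execute(models):
    print("executing:", models["plan"]["action"])

# The megamodel: activities plus control flow, explicit and alive at runtime.
megamodel = {
    "Analyze": (analyze, {True: "Plan", False: None}),
    "Plan":    (plan,    {None: "Execute"}),
    "Execute": (execute, {None: None}),
}

def interpret(megamodel, start="Analyze"):
    node = start
    while node is not None:
        activity, outcomes = megamodel[node]
        result = activity(runtime_models)
        node = outcomes.get(result)   # the outcome decides the next activity
    # Because the loop is data, it can itself be adapted: e.g. a 'Verify'
    # activity could be spliced in between Plan and Execute at runtime.

interpret(megamodel)
```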
9

Hallou, Nabil. "Runtime optimization of binary through vectorization transformations." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S120/document.

Abstract:
In many cases, applications are not optimized for the hardware on which they run. This is due to the backward compatibility of ISAs, which guarantees the functionality but not the best exploitation of the hardware. Many reasons contribute to this unsatisfying situation, such as legacy code, commercial code distributed in binary form, or deployment on compute farms. Our work focuses on maximizing the CPU efficiency for the SIMD extensions. The first contribution is a lightweight binary translation mechanism that does not include a vectorizer, but instead leverages what a static vectorizer previously did. We show that many loops compiled for x86 SSE can be dynamically converted to the more recent and more powerful AVX, as well as how correctness is maintained with regard to challenges such as data dependencies and reductions. We obtain speedups in line with those of a native compiler targeting AVX. The second contribution is a runtime auto-vectorization of scalar loops. For this purpose, we use open-source frameworks that we have tuned and integrated to (1) dynamically lift the x86 binary into the Intermediate Representation form of the LLVM compiler, (2) abstract hot loops in the polyhedral model, (3) use the power of this mathematical framework to vectorize them, and (4) finally compile them back into executable form using the LLVM Just-In-Time compiler. In most cases, the obtained speedups are close to the number of elements that can be simultaneously processed by the SIMD unit. The re-vectorizer and auto-vectorizer are implemented inside a dynamic optimization platform; it is completely transparent to the user, does not require any rewriting of the binaries, and operates during program execution.
10

Kabir, Sohag, I. Sorokos, K. Aslansefat, Y. Papadopoulos, Y. Gheraibia, J. Reich, M. Saimler, and R. Wei. "A Runtime Safety Analysis Concept for Open Adaptive Systems." Springer, 2019. http://hdl.handle.net/10454/17416.

Abstract:
In the automotive industry, modern cyber-physical systems feature cooperation and autonomy. Such systems share information to enable collaborative functions, allowing dynamic component integration and architecture reconfiguration. Given the safety-critical nature of the applications involved, an approach for addressing safety in the context of reconfiguration impacting functional and non-functional properties at runtime is needed. In this paper, we introduce a concept for runtime safety analysis and decision input for open adaptive systems. We combine static safety analysis and evidence collected during operation to analyse, reason and provide online recommendations to minimize deviation from a system’s safe states. We illustrate our concept via an abstract vehicle platooning system use case.
This conference paper is available to view at http://hdl.handle.net/10454/17415.
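One possible reading of the combination of static safety analysis with operational evidence is sketched below: a hypothetical miniature fault tree whose basic-event probabilities are refreshed from runtime monitoring before the top event is re-evaluated. Structure and numbers are invented for illustration only.

```python
# Static part: a fault tree fixed at design time.
# Top event: loss of platoon coordination = comm failure OR (sensor1 AND sensor2).
def top_event_probability(p):
    p_sensors = p["sensor1"] * p["sensor2"]          # AND gate (independence)
    return 1 - (1 - p["comm"]) * (1 - p_sensors)     # OR gate

# Design-time estimates for the basic events.
probabilities = {"comm": 1e-4, "sensor1": 1e-3, "sensor2": 1e-3}
print("design-time risk:", top_event_probability(probabilities))

# Runtime part: monitoring reports degraded communication quality, so the
# corresponding basic-event probability is updated and the tree re-evaluated.
probabilities["comm"] = 5e-3
risk = top_event_probability(probabilities)
print("runtime risk:", risk)
if risk > 1e-3:
    # Online recommendation to minimize deviation from the safe states.
    print("recommendation: increase platoon gap / leave platoon")
```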
11

Fouquet, François. "Kevoree : Model@Runtime pour le développement continu de systèmes adaptatifs distribués hétérogènes." PhD thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00831018.

Abstract:
The growing complexity of modern information systems has motivated the emergence of new paradigms (objects, components, services, etc.) that make the critical mass of their functionality easier to grasp and master. These systems are built in a modular and adaptable fashion to minimize the downtime caused by their evolution or maintenance. To guarantee non-functional properties (e.g., maintaining response time despite a growing number of requests), such systems are also distributed across different computing resources (grids). Beyond the gain in computing power, distribution can also be used to assign a task to nodes with specific properties, as with mobile terminals close to the users, or connected objects and sensors physically close to the measurement context. Adapting a system and its resources nevertheless requires knowledge of its current state in order to adapt its architecture and topology to new needs, and a new state must then be propagated to all computing nodes. Maintaining the consistency of this shared state is made particularly difficult by the sporadic connections inherent to distribution, which can cause subsystems to diverge. In response to these scientific challenges, this thesis proposes a design and deployment abstraction for dynamically adaptable distributed systems, based on the Model@Runtime principle. The approach builds a distributed reflection layer that allows systems spread over heterogeneous nodes to be manipulated abstractly. This contribution also introduces the notion of variable consistency into the modelling of adaptable systems, making it possible to capture the divergence of computing nodes in their very design. This reflection layer, now 'eventually consistent', makes it possible to build heterogeneous adaptive systems comprising mobile and embedded nodes whose connectivity may be intermittent. The contribution was realized in a project named 'Kevoree', whose validation demonstrates the applicability of the proposed approach to use cases as heterogeneous as a sensor network or a fleet of mobile terminals.
12

Mathias, Elton Nicoletti. "Hierarchical message passing through a ProActive/GCM based runtime." Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/25956.

Abstract:
In the past several years, grid computing has emerged as a way to harness computing resources geographically distributed across multiple organizations. Due to its inherently largely distributed and heterogeneous nature, grid computing has enlarged the importance of specific requirements, such as scalability, performance, and the need for an adequate programming model. Several programming models have been proposed for grid programming. Nonetheless, so far, none of them has met all the requirements. Differently, in the field of high performance cluster computing, the message passing model became a true standard with a large number of libraries and legacy applications. This work proposes a hybrid framework that combines the high performance and high acceptability of the MPI standard, boosted with intuitive extensions that enable developers to design grid applications or "gridify" existing ones, with the flexibility of a component-based runtime that models the resource hierarchy and offers support for inter-cluster communication. The proposed solution relies on the addition of new MPI communicators and a related API, which offers support well-suited to programmers used to MPI in order to reflect a hierarchical topology within the deployed application. Three applications with different characteristics (a Monte Carlo simulation, a Mergesort, and a Poisson3D solver) have shown that the "gridification" of applications improves their performance in grid environments. Even if the goal is not to compete against existing MPI distributions, the performance of the solution is comparable with MPI performance, and even better in some cases. From the results obtained in the evaluation of this prototype, we conclude that the overhead introduced by the components is not negligible, but within expectations. However, we can expect the benefits to grid applications to outweigh the generated overhead. Besides, the extended interface may offer users the adequate abstractions to design parallel algorithms in a hierarchical way targeting grid environments.
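The hierarchical communicators that the framework adds on top of MPI can be imitated with standard MPI primitives. The following sketch assumes the mpi4py package and is not the thesis' actual API: it splits the global communicator per cluster so a reduction runs first over fast local links, and only cluster leaders talk across the slower inter-cluster links.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# A deployment descriptor would normally tell each rank its cluster;
# here we fake the assignment by rank parity for demonstration purposes.
cluster_id = rank % 2

# One communicator per cluster: ranks with the same color end up together.
cluster_comm = comm.Split(color=cluster_id, key=rank)

# Hierarchical reduction: first inside each cluster (cheap, local links)...
local_sum = cluster_comm.reduce(rank, op=MPI.SUM, root=0)

# ...then only the cluster leaders communicate across clusters.
is_leader = cluster_comm.Get_rank() == 0
leaders = comm.Split(color=0 if is_leader else 1, key=rank)
if is_leader:
    total = leaders.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print("global sum of ranks:", total)
```

Run under, e.g., `mpirun -n 4 python hier.py`; the point of the sketch is only the two-level collective, not the scheduling or deployment machinery of the actual framework.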
13

Arafat, Md Humayun. "Runtime Systems for Load Balancing and Fault Tolerance on Distributed Systems." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408972218.

14

Pfannemüller, Martin [author], and Christian [academic supervisor] Becker. "A model-based runtime environment for adapting communication systems / Martin Pfannemüller ; Betreuer: Christian Becker." Mannheim : Universitätsbibliothek Mannheim, 2021. http://d-nb.info/1230323023/34.

15

Althoff, Guilherme Figueira. "Using executable assertions for runtime fault detection in a model-based software development approach." Instituto Tecnológico de Aeronáutica, 2007. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=1035.

Abstract:
The impressive technological evolution observed in recent years has the computer as its main engine. Among the many possible applications of this notable machine, Embedded Computer Systems (ECS) are of great relevance. The number of critical ECS, i.e., those whose failure results in catastrophic consequences in terms of human or material loss, also grows dramatically and opens a new horizon of hazards. Hence, studies in the field of critical ECS become more important. Among the strategies for the development of such systems, this work deals with fault tolerance. More specifically, software techniques for the detection of faults that arise due to external factors or software design errors are studied. Such techniques are named assertions. An activities workflow is proposed that considers the process of software development for a critical ECS based on system models. This approach, called model-based design, is a trend in the embedded software world because it brings many benefits, such as reduction of development time, ease of understanding and maintaining the design, and a high degree of reuse. A hypothetical system is developed according to this approach, and different assertion types are tested and compared. The quality of the assertion set is measured through a set of metrics, and fault injection at the model level is applied for this evaluation.
16

De Camargo Francesquini, Emilio. "Dealing with actor runtime environments on hierarchical shared memory multi-core platforms." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM027/document.

Abstract:
The actor model is present in several mission-critical systems, such as those supporting WhatsApp and Facebook Chat. These systems serve thousands of clients simultaneously, therefore demanding substantial computing resources usually provided by multi-processor and multi-core platforms. Non-Uniform Memory Access (NUMA) architectures account for an important share of these platforms. Yet, research on the suitability of current actor runtime environments for these machines is very limited. Current runtime environments, in general, assume a flat memory space, thus not performing as well as they could. In this thesis we study the challenges that hierarchical shared-memory multi-core platforms present to actor runtime environments. In particular, we investigate aspects related to memory management, scheduling, and load-balancing. In this document, we analyze and characterize actor-based applications in order to, in light of the above, propose improvements to actor runtime environments. This analysis highlighted the existence of peculiar communication structures. We argue that the comprehension of these structures and the knowledge about the underlying hardware architecture can be used in tandem to improve application performance. As a proof of concept, we implemented our proposal using a real actor runtime environment, the Erlang Virtual Machine (VM). Concurrency in Erlang is based on the actor model, and the language has a consistent syntax for actor handling. Our modifications to the Erlang VM significantly improved the performance of some applications thanks to better-informed decisions on scheduling and on load-balancing. As future work we envision the integration of our approach into other actor runtime environments, such as Kilim and Akka.
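The peculiar communication structures mentioned above suggest placement heuristics such as the following toy greedy co-location. This is an invented sketch, not the thesis' modification of the Erlang VM: actors that exchange the most messages are pinned to the same (simulated) NUMA node.

```python
from collections import defaultdict

# Observed communication graph: (actor_a, actor_b) -> message count.
traffic = {("router", "worker1"): 900, ("router", "worker2"): 850,
           ("worker1", "logger"): 30,  ("worker2", "logger"): 25}

NUMA_NODES, CAPACITY = 2, 3   # two nodes, at most three actors each
placement, load = {}, defaultdict(int)

def place(actor, preferred):
    """Pin to the preferred node if it has room, else the least-loaded one."""
    node = preferred if load[preferred] < CAPACITY else min(
        range(NUMA_NODES), key=lambda n: load[n])
    placement[actor], load[node] = node, load[node] + 1

# Greedy: handle the heaviest edges first so chatty pairs share a node.
for (a, b), _count in sorted(traffic.items(), key=lambda kv: -kv[1]):
    if a not in placement and b not in placement:
        place(a, 0)
        place(b, placement[a])
    elif a not in placement:
        place(a, placement[b])
    elif b not in placement:
        place(b, placement[a])

print(placement)  # router/worker1/worker2 share node 0; logger spills to node 1
```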
17

Saller, Karsten [author], Andy [academic supervisor] Schürr, and Ina [academic supervisor] Schaefer. "Model-Based Runtime Adaptation of Resource Constrained Devices / Karsten Saller. Betreuer: Andy Schürr ; Ina Schaefer." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1110980523/34.

18

Arora, Nitin. "High performance algorithms to improve the runtime computation of spacecraft trajectories." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49076.

Abstract:
Challenging science requirements and complex space missions are driving the need for fast and robust space trajectory design and simulation tools. The main aim of this thesis is to develop new and improved high-performance algorithms and solution techniques for commonly encountered problems in astrodynamics. Five major problems are considered and their state-of-the-art algorithms are systematically improved. Theoretical and methodological improvements are combined with modern computational techniques, resulting in increased algorithm robustness and faster runtime performance. The five selected problems are 1) the multiple-revolution Lambert problem, 2) high-fidelity geopotential (gravity field) computation, 3) ephemeris computation, 4) fast and accurate sensitivity computation, and 5) high-fidelity multiple-spacecraft simulation. The work presented here has applications in a variety of fields, such as preliminary mission design, high-fidelity trajectory simulation, orbit estimation and numerical optimization. Other fields, from space and environmental science to chemical and electrical engineering, also stand to benefit.
19

Parra, Carlos. "Towards Dynamic Software Product Lines : Unifying Design and Runtime Adaptations." PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00583444.

Abstract:
To take advantage of the many devices available today, software running on mobile phones must become context-aware, that is, it must monitor events coming from its environment and react accordingly. We consider that such software can benefit from a Software Product Line (SPL) approach. SPLs are designed to exploit commonalities through the definition of reusable elements. Nevertheless, SPLs do not take modifications made to applications at runtime into account. This thesis proposes a Dynamic Software Product Line (DSPL) that extends a classical SPL by providing mechanisms for adapting products at runtime. Our main objective is to unify design-time and runtime adaptations using high-level software artifacts. Concretely, we introduce a variability model and a composition model to modularize products as aspect models. Each aspect model has three parts: the architecture, the modifications, and the pointcut. We then propose two product derivation processes: one for design time, aimed at building a product, and one for runtime, aimed at adapting a product. This research was carried out within the FUI CAPPUCINO project. We defined a DSPL for a case study of a context-aware hypermarket sales application. The scenario demonstrates the benefits of our approach and, in particular, the unification achieved by the aspect models, which are used both at design time and at runtime.
20

Bauer, Andreas. "The theory and practice of runtime reflection: a model-based framework for dynamic analysis of distributed reactive systems." Saarbrücken VDM Verlag Dr. Müller, 2007. http://d-nb.info/989452999/04.

21

Acosta, Padilla Francisco Javier. "Self-adaptation for Internet of things applications." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S094/document.

Abstract:
The Internet of Things (IoT) is gradually covering every aspect of our lives. As these systems become more pervasive, the need to manage this complex infrastructure comes with several challenges. Indeed, plenty of small interconnected devices now provide more than one service in several aspects of our everyday life, and these need to be adapted to new contexts without interrupting such services. However, this new computing system differs from classical Internet systems mainly in the type, physical size and access of the nodes. Thus, typical methods to manage the distributed software layer on large distributed systems cannot be employed in this context. Indeed, this is due to the very different capacities in computing power and network connectivity, which are very constrained on IoT devices. Moreover, the complexity that was previously managed by experts in several fields, such as embedded systems and Wireless Sensor Networks (WSN), is now increased by the larger quantity and heterogeneity of the nodes' software and hardware. Therefore, we need efficient methods to manage the software layer of these systems, taking the very limited resources into account. This underlying hardware infrastructure raises new challenges in the way we administrate the software layer of these systems. These challenges can be divided into: intra-node, where we face the limited memory and CPU of IoT nodes in order to manage the software layer; and inter-node, where a new way to distribute the updates is needed, due to the different network topology and the energy cost for battery-powered devices. Indeed, the limited computing power and battery life of each node, combined with the very distributed nature of these systems, greatly adds complexity to the management of the distributed software layer. Software reconfiguration of nodes in the Internet of Things is a major concern for various application fields. In particular, distributing the code of updated or new software features to their final node destination in order to adapt it to new requirements has a huge impact on energy consumption. Most current algorithms for disseminating code over the air (OTA) are meant to disseminate a complete firmware through small chunks and are often implemented at the network layer, thus ignoring all guiding information from the application layer. First contribution: a models@runtime engine able to represent a running IoT application on resource-constrained nodes. The transformation of the Kevoree meta-model into C code to meet the specific memory constraints of an IoT device was performed, as well as the proposition of modelling tools to manipulate a model@runtime. Second contribution: component decoupling of an IoT system as well as an efficient component distribution algorithm. Component decoupling of an application in the context of the IoT facilitates its representation on the model@runtime, while providing a way to easily change its behaviour by adding/removing components and changing their parameters. In addition, a mechanism to distribute such components using a new algorithm, called Calpulli, is proposed.
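The component decoupling described in the second contribution can be pictured as a model diff driving node reconfiguration. The abstract gives no details of the Calpulli algorithm, so the miniature below is entirely hypothetical: a node compares its current component model with a target model and applies only the difference, which keeps update traffic small on constrained links.

```python
# Current model@runtime on the node, and the target model pushed by a manager.
# Component names and parameters are invented placeholders.
current = {"temp_sensor": {"period_s": 60}, "radio": {"power": "low"}}
target  = {"temp_sensor": {"period_s": 10}, "http_report": {"url": "..."}}

def diff(current, target):
    """Compute the minimal set of reconfiguration commands."""
    cmds = []
    for name in current.keys() - target.keys():
        cmds.append(("remove", name, None))
    for name, params in target.items():
        if name not in current:
            cmds.append(("add", name, params))
        elif current[name] != params:
            cmds.append(("update", name, params))
    return cmds

for cmd in diff(current, target):
    print(cmd)
# ('remove', 'radio', None), ('update', 'temp_sensor', ...), ('add', 'http_report', ...)
```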
22

Maheo, Aurèle. "Improving the Hybrid model MPI+Threads through Applications, Runtimes and Performance tools." Thesis, Versailles-St Quentin en Yvelines, 2015. http://www.theses.fr/2015VERS039V/document.

Abstract:
To provide increasing computational power for numerical simulations, supercomputers evolved and are now more and more complex to program. Indeed, after the appearance of shared memory systems emerged architectures such as NUMA (Non Uniform Memory Access) systems, providing several levels of parallelism. Another constraint, the decreasing amount of memory per compute core, has to be mentioned. Therefore, parallel models such as the Message Passing Interface (MPI) are no longer sufficient to enable scalability of High Performance applications, and have to be coupled with another model adapted to shared memory architectures. OpenMP, as a de facto standard, is a good candidate to be mixed with MPI. The principle is to use this model to augment legacy codes already parallelized with MPI. But hybridizing scientific codes is a complex task; bottlenecks exist and need to be identified. This thesis tackles these limitations and proposes different contributions following various aspects. Our first contribution reduces the overhead of the OpenMP layer by optimizing the creation and synchronization of threads for MPI+OpenMP codes. Second, we target MPI collective operations. Our contribution consists in proposing a technique to exploit idle cores in order to help the operation, with the example of the MPI Allreduce collective. We also introduce unified collectives involving both MPI tasks and OpenMP threads. Finally, we focus on performance analysis of hybrid MPI+OpenMP codes, and our last contribution consists in the implementation of the OpenMP Tools API (OMPT), an instrumentation tool, inside the OpenMP runtime of the MPC framework. This tool allows us to instrument and profile OpenMP constructs and enables the analysis of both the runtime and application sides.
23

Loulou, Hassan. "Verifying Design Properties at Runtime Using an MDE-Based Approach: Models@Run.Time Verification - Application to Autonomous Connected Vehicles." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS405.

Abstract:
Autonomous Connected Vehicles (ACVs) are cyber-physical systems (CPS) where the computational world and the real one meet. These systems require a rigorous validation process that starts at the design phase and continues after the software deployment. Models@Runtime has appeared as a new paradigm for continuously monitoring software systems execution in order to enable adaptations whenever a change, a failure or a bug is introduced in the execution environment. In this thesis, we tackle the ACV environment, where vehicles try to collaborate and share their data in a secure manner. Different modeling approaches are already used for expressing access control requirements in order to impose security policies. However, their validation tools do not consider the impacts of the interaction between the functional and the security requirements. This interaction can lead to unexpected security breaches during the system execution and its potential runtime adaptations. Also, the real-time prediction of traffic states using crowdsourcing data could be useful for proposing adaptations to ACV cooperation models. Nevertheless, it has not been sufficiently studied yet. To overcome these limitations, many issues should be addressed: the evolution of the system's functional part must be considered during the validation of the security policy, and attack scenarios must be generated automatically; an approach for designing and automatically detecting security anti-patterns must be developed, and new reconfigurations for access control policies must be found, validated and deployed efficiently at runtime; and ACVs need to observe and analyze their complex environment, containing big-data streams, to recommend new cooperation models in near real time. In this thesis, we build an approach for sensing the ACV environment, validating its access control models and securely reconfiguring it on the fly. We cover three aspects: we propose an approach for guiding security model checkers to find attack scenarios automatically at design time; we design anti-patterns to guide the validation process, develop an algorithm to detect them automatically during model reconfigurations, design a mechanism for reconfiguring the access control model, and develop a lightweight modular framework for an efficient deployment of new reconfigurations; and we build an approach for the real-time monitoring of dynamic data streams to propose adaptations of the access policy at runtime. Our proposed approach was validated using several examples related to ACVs, and the results of our experiments prove the feasibility of this approach.
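A flavour of the anti-pattern detection during model reconfiguration, reduced to a single invented rule (the thesis' anti-patterns and access-control metamodel are certainly richer): before a new rule is deployed, the candidate model is scanned, and the reconfiguration is rejected if a subject would both grant and use the same permission.

```python
# An access-control model as a set of (subject, action, resource) rules.
model = {("fleet_mgr", "grant", "telemetry"),
         ("vehicle_a", "read",  "telemetry")}

def anti_patterns(rules):
    """Detect one illustrative anti-pattern: grant+use by the same subject."""
    found = []
    for subject, action, resource in rules:
        if action == "grant" and (subject, "read", resource) in rules:
            found.append(f"{subject} both grants and reads {resource}")
    return found

def reconfigure(rules, new_rule):
    """Validate-then-commit: reject reconfigurations introducing a pattern."""
    candidate = rules | {new_rule}
    problems = anti_patterns(candidate)
    if problems:
        raise ValueError("unsafe reconfiguration: " + "; ".join(problems))
    return candidate

model = reconfigure(model, ("vehicle_b", "read", "telemetry"))   # accepted
try:
    model = reconfigure(model, ("fleet_mgr", "read", "telemetry"))
except ValueError as err:
    print(err)   # unsafe: fleet_mgr both grants and reads telemetry
```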
24

Serral, Asensio Estefanía. "Automating Routine Tasks in Smart Environments. A Context-aware Model-driven Approach." Doctoral thesis, Universitat Politècnica de València, 2011. http://hdl.handle.net/10251/11550.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ubiquitous and Pervasive Computing put forth a vision where environments are enriched with devices that provide users with services to assist them in their everyday lives. The building of such environments has the final objective of automating tedious routine tasks that users must perform every day. This automation is a very desirable challenge because it can considerably reduce resource consumption and improve users' quality of life by 1) making users' lives more comfortable, efficient, and productive, and 2) helping them stop worrying about, and wasting time on, tasks that need to be done but that they do not enjoy. However, the automation of user tasks is a complicated and delicate matter because it may bother users, interfere with their goals, or even be dangerous. To avoid this, tasks must be automated in a non-intrusive way, attending to users' desires and demands. This is the main goal of this thesis: to automate the routine tasks that users want, the way they want them. To achieve this, we propose two models at a high level of abstraction to specify the routines to be automated. These models provide abstract concepts that facilitate the participation of end users in the model specification. In addition, these models are designed to be machine-processable and precise enough to be executable. Thus, we provide a software infrastructure that is capable of automating the specified routines by directly interpreting the models at runtime. Therefore, the routines to be automated are represented only in the models. This makes the models the primary means to understand, interact with, and modify the automated routines, which considerably facilitates the evolution of the routines over time to adapt them to changes in user behaviour. Without this adaptation, the automation of the routines may not only become useless for end users but may even become a burden on them instead of a help in their daily life.
Serral Asensio, E. (2011). Automating Routine Tasks in Smart Environments. A Context-aware Model-driven Approach [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11550
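As an illustration of what "directly interpreting the models at runtime" can mean, the following minimal Python sketch (the routine, context attributes, and actuator names are all hypothetical) keeps a routine as plain data and executes it against the current context, so end users could edit the routine without redeploying code.

    routine_model = [   # a "wake-up" routine kept as data, not code
        {"task": "open_blinds",  "when": {"hour": 7}, "unless": {"weekend": True}},
        {"task": "start_coffee", "when": {"hour": 7}},
    ]

    actuators = {
        "open_blinds":  lambda: print("blinds opened"),
        "start_coffee": lambda: print("coffee machine on"),
    }

    def interpret(model, ctx):
        # Execute the routine by reading the model directly; editing the
        # model changes the automated behaviour with no redeployment.
        for entry in model:
            if any(ctx.get(k) == v for k, v in entry.get("unless", {}).items()):
                continue
            if all(ctx.get(k) == v for k, v in entry["when"].items()):
                actuators[entry["task"]]()

    interpret(routine_model, {"hour": 7, "weekend": False})  # both tasks fire
    interpret(routine_model, {"hour": 7, "weekend": True})   # blinds stay closed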
25

Lochau, Malte [Verfasser]. "Model-based Quality Assurance of Cyber-Physical Systems with Variability in Space, over Time and at Runtime / Malte Lochau." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2017. http://d-nb.info/1147968470/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
26

Diamos, Gregory Frederick. "Harmony: an execution model for heterogeneous systems." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42874.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The emergence of heterogeneous and many-core architectures presents a unique opportunity to deliver order-of-magnitude performance increases to high-performance applications by matching certain classes of algorithms to specifically tailored architectures. However, their ubiquitous adoption has been limited by a lack of programming models and management frameworks designed to reduce the high degree of complexity of software development inherent to heterogeneous architectures. This dissertation introduces Harmony, an execution model for heterogeneous systems that draws heavily from concepts and optimizations used in processor micro-architecture to provide: (1) semantics for simplifying heterogeneity management, (2) dynamic scheduling of compute-intensive kernels to heterogeneous processor resources, and (3) online-monitoring-driven performance optimization for heterogeneous many-core systems. This work focuses on simplifying development and ensuring binary portability and scalability across system configurations and sizes.
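A hedged Python sketch of the kind of dispatch loop such an execution model implies (device names, the timing-history heuristic, and the kernel are invented for illustration; a real system would launch kernels on actual devices and also explore untried ones): ready kernels are scheduled onto the processor predicted fastest from online monitoring.

    import time
    from collections import defaultdict

    # Exponential-moving-average runtime history per (kernel, device).
    history = defaultdict(lambda: {"cpu": 1.0, "gpu": 1.0})

    def run_on(kernel, device, *args):
        start = time.perf_counter()
        kernel(*args)                # stand-in for a real device launch
        elapsed = time.perf_counter() - start
        h = history[kernel.__name__]
        h[device] = 0.8 * h[device] + 0.2 * elapsed   # online monitoring
        return elapsed

    def schedule(kernel, *args):
        h = history[kernel.__name__]
        device = min(h, key=h.get)   # predicted-fastest resource
        return device, run_on(kernel, device, *args)

    def saxpy(a, x, y):              # a compute-intensive kernel
        return [a * xi + yi for xi, yi in zip(x, y)]

    for _ in range(3):
        device, elapsed = schedule(saxpy, 2.0, [1.0] * 10000, [2.0] * 10000)
        print(device, round(elapsed, 5))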
27

Penczek, Frank. "Static guarantees for coordinated components : a statically typed composition model for stream-processing networks." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/9046.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Does your program do what it is supposed to be doing? Without running the program, providing an answer to this question is much harder if the language does not support static type checking. Of course, even if compile-time checks are in place, only certain errors will be detected: compilers can only second-guess the programmer's intention. But type-based techniques go a long way in assisting programmers to detect errors in their computations earlier on. The question of whether a program behaves correctly is even harder to answer if the program consists of several parts that execute concurrently and need to communicate with each other. Compilers of standard programming languages are typically unable to infer information about how the parts of a concurrent program interact with each other, especially where explicit threading or message-passing techniques are used. Hence, correctness guarantees are often conspicuously absent. Concurrency management in an application is a complex problem. However, it is largely orthogonal to the actual computational functionality that a program realises. Because of this orthogonality, the problem can be considered in isolation. The largest possible separation between concurrency and functionality is achieved if a dedicated language is used for concurrency management, i.e. an additional program manages the concurrent execution and interaction of the computational tasks of the original program. Such an approach not only helps programmers to focus on the core functionality and on the exploitation of concurrency independently, it also allows for a specialised analysis mechanism geared towards concurrency-related properties. This dissertation shows how an approach that completely decouples coordination from computation is a very supportive substrate for inferring static guarantees of the correctness of concurrent programs. Programs are described as streaming networks connecting independent components that implement the computations of the program, where the network describes the dependencies and interactions between components. A coordination program only requires an abstract notion of computation inside the components and may therefore be used as a generic and reusable design pattern for coordination. A type-based inference and checking mechanism analyses such streaming networks and provides comprehensive guarantees of the consistency and behaviour of coordination programs. Concrete implementations of components are deliberately left out of the scope of coordination programs: components may be implemented in an external language, for example C, to provide the desired computational functionality. Based on this separation, a concise semantic framework allows for step-wise interpretation of coordination programs without requiring concrete implementations of their components. The framework also provides clear guidance for the implementation of the language. One such implementation is presented, and hands-on examples demonstrate how the language is used in practice.
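To illustrate the kind of static guarantee described above, here is a minimal Python sketch (the component set and the pipeline API are hypothetical): connections of a streaming network are checked from component interfaces alone, with no component implementation and no execution.

    class Component:
        def __init__(self, name, in_type, out_type):
            self.name, self.in_type, self.out_type = name, in_type, out_type

    def check_pipeline(components):
        # Verify every stream connection from interfaces alone; component
        # bodies could live in C or any other external language.
        for left, right in zip(components, components[1:]):
            if left.out_type is not right.in_type:
                raise TypeError(f"{left.name} emits {left.out_type.__name__}, "
                                f"but {right.name} expects {right.in_type.__name__}")
        return True

    net = [Component("tokenize", str, list),
           Component("count",    list, dict),
           Component("report",   dict, str)]
    print(check_pipeline(net))   # True: the coordination layer is consistent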
28

Heller, Thomas [Verfasser], Thomas [Akademischer Betreuer] Fahringer, and Dietmar [Gutachter] Fey. "Extending the C++ Asynchronous Programming Model with the HPX Runtime System for Distributed Memory Computing / Thomas Heller ; Gutachter: Dietmar Fey ; Betreuer: Thomas Fahringer." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2019. http://d-nb.info/1187523291/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
29

Gil, Pascual Miriam. "Adapting Interaction Obtrusiveness: Making Ubiquitous Interactions Less Obnoxious. A Model Driven Engineering approach." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/31660.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In Ubiquitous Computing environments, people are surrounded by many embedded services. Since ubiquitous devices, such as mobile phones, have become a key part of our everyday life, they enable users to be always connected to the environment and to interact with it. However, unlike traditional desktop interactions, where users request information or input data, ubiquitous interactions have to cope with a changing user environment, making demands on one of the most valuable resources of users: human attention. A challenge in the Ubiquitous Computing paradigm is therefore regulating requests for the user's attention; that is, service interactions should behave in a considerate manner, taking into account the degree to which each service intrudes on the user's mind (the obtrusiveness degree). To prevent service behavior from becoming overwhelming, this work, based on Model Driven Engineering foundations and Considerate Computing principles, is devoted to designing and developing services that adapt their interactions according to the user's attention. The main goal of this thesis is to introduce considerate adaptation capabilities in ubiquitous services to provide non-disturbing interactions. We achieve this by means of a systematic method that covers from the services' design to their implementation and the later adaptation of interaction at runtime. Technology-independent obtrusiveness and interaction models specify the interaction behavior at design time and then drive the dynamic adaptation through an autonomic infrastructure that uses them at runtime: when a change in the user's situation is detected (e.g., in location or activity), services self-adapt to use the interaction components best suited to the new situation without disturbing the user. Moreover, since users' needs and preferences change over time, the approach applies reinforcement learning to refine the initial design models from user feedback gathered through the experience of use, and a mobile interface lets end users manually personalize the models according to their own preferences. The approach was evaluated from both the designers' and the end users' points of view: the design method was validated as a help for specifying this kind of service (although the development process is not fully automated, the provided guidelines and the formalization of the involved concepts proved useful), and the evaluation with users revealed the importance of considering obtrusiveness in the interaction adaptation process to improve the user experience.
Gil Pascual, M. (2013). Adapting Interaction Obtrusiveness: Making Ubiquitous Interactions Less Obnoxious. A Model Driven Engineering approach [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31660
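A small, hypothetical Python sketch of obtrusiveness-driven adaptation in the spirit of this abstract (the channels, levels, and context rules are invented): the service picks the most noticeable interaction channel that stays within the obtrusiveness limit the current context allows.

    OBTRUSIVENESS = {"silent_log": 0, "status_icon": 1, "vibration": 2, "ringtone": 3}

    def max_allowed(ctx):
        # Invented context rules bounding how intrusive a service may be.
        if ctx["in_meeting"]:
            return 1          # visual, non-interruptive channels only
        if ctx["driving"]:
            return 2          # eyes busy: haptic at most
        return 3

    def adapt_channel(importance, ctx):
        limit = min(max_allowed(ctx), importance)
        candidates = [c for c, lvl in OBTRUSIVENESS.items() if lvl <= limit]
        return max(candidates, key=OBTRUSIVENESS.get)   # most noticeable allowed

    print(adapt_channel(3, {"in_meeting": True,  "driving": False}))  # status_icon
    print(adapt_channel(3, {"in_meeting": False, "driving": False}))  # ringtone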
30

Le, Nhan Tam. "Ingénierie dirigée par les modèles pour le provisioning d'images de machines virtuelles pour l'informatique en nuage." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00926228.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Context and problem statement. Nowadays, cloud computing is omnipresent in research as well as in industry. It is considered a new generation of computing in which dynamically scalable, virtualized computing resources are provided as services over the Internet. Users can access cloud systems through different interfaces on their various devices, and they pay only for what they use, according to the Service-Level Agreement established between them and the cloud service providers. One of the main characteristics of cloud computing is virtualization, thanks to which all resources become transparent to users: users no longer need to control and maintain the computing infrastructure. Virtualization in cloud computing combines virtual machine images (VMIs) and the physical machines on which these images are deployed. Typically, deploying a VMI involves booting the image and installing and configuring the packages the VMI defines. In traditional approaches, VMIs are created by the technical experts of the cloud service providers: they are pre-packaged VMIs that come with pre-installed and pre-configured components. To answer a client request, the provider selects an appropriate VMI to clone and deploy on a cloud node. If no such VMI exists, a new VMI is created for the request, either generated from the closest existing VMI or built entirely from scratch. A standard VMI normally contains several packages, some of which will never be used, because the VMI is created at design time with the aim of being cloned later. This approach has drawbacks, such as the significant resources required to store VMIs and to deploy them, and it requires starting several components, including unused ones. In particular, from a service-management point of view, it is difficult to handle the complexity of the interdependencies between the different components in order to maintain the deployed VMIs and make them evolve. To solve the problems listed above, cloud service providers could automate the provisioning process and allow users to choose VMIs in a flexible way while preserving the providers' benefits in terms of time, resources, and cost. With this in mind, providers should consider several concerns: (1) Which packages and dependencies will be deployed? (2) How can a configuration be optimized in terms of cost, time, and resource consumption? (3) How can the most similar existing VMI be found, and how can it be adapted to obtain a new VMI? (4) How can the errors that often come from manual operations be avoided? (5) How can the evolution of a deployed VMI be managed, adapting it to the needs of reconfiguration and automatic scaling? Because of these requirements, building a PaaS (Platform as a Service) management system is difficult, particularly in the VMI provisioning process.
This difficulty therefore calls for an appropriate approach to manage VMIs in cloud computing systems, one that provides solutions for reconfiguration and automatic scaling. Challenges and key problems. From this problem statement, we identified seven challenges for the development of a provisioning process in cloud computing.
* C1: Modeling the variability of VMI configuration options in order to manage the interdependencies between software packages. Different software components may require specific packages or operating-system libraries for a correct configuration; these dependencies must otherwise be arranged, selected, and resolved manually for each copy of the standard VMI. Moreover, VMIs are created to meet user requirements that may share common sub-needs, so modeling the commonality and variability of VMIs with respect to these requirements is necessary.
* C2: Reducing the data transferred over the network during the provisioning process. In order to be ready to answer client requests, many packages are installed on the standard virtual machine, including packages that will never be used; these should be limited to minimize the size of VMIs.
* C3: Optimizing resource consumption during execution. In the traditional approach, creating and updating VMIs requires time-consuming manual operations. Furthermore, all packages in the VMIs, including unused ones, are started and therefore occupy resources; this resource consumption should be optimized.
* C4: Providing an interactive tool that facilitates users' choices of VMIs. Cloud service providers normally want to give client users flexibility in their choice of VMIs, but users do not have deep technical knowledge; for this reason, tools that facilitate these choices are needed.
* C5: Automating the deployment of VMIs. Several operations of the provisioning process are very complex; automating them can reduce deployment time and errors.
* C6: Supporting the reconfiguration of VMIs during their execution. One important characteristic of cloud computing is providing services on demand; since demands evolve while VMIs are executing, cloud systems should also adapt to these evolving demands.
* C7: Managing the deployment topology of VMIs. Deployment must account not only for multiple VMIs with the same configuration but also for multiple VMIs with different configurations. Moreover, VMIs may be deployed on different cloud platforms when the service provider relies on the infrastructure of another provider.
To address these challenges, we consider three key problems for the deployment of the VMI provisioning process: 1. The need for an abstraction level for managing VMI configurations: an appropriate approach should provide a high-level abstraction for modeling and managing VMI configurations, with their packages and the dependencies between those packages. This abstraction allows the expert engineers of cloud service providers to specify the product family of VMI configurations; it also facilitates analyzing and modeling the commonality and variability of VMI configurations, as well as creating valid and consistent VMIs. 2. The need for an abstraction level for the VMI deployment process: an appropriate approach to VMI provisioning should provide an abstraction of the deployment process. 3. The need for an automatic deployment and reconfiguration process: such an abstraction facilitates the specification, analysis, and modeling of the modularity of the process. Moreover, the approach should support automation in order to reduce manual tasks, which are costly in terms of performance and potentially error-prone.
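As a toy illustration of challenge C1, the following Python sketch (features, packages, and dependencies are invented) derives the minimal closed package set for a requested feature selection, so the resulting image ships and boots only what the request needs.

    FEATURE_DEPS = {                 # feature -> packages it requires
        "web": {"nginx"},
        "php": {"php-fpm"},
        "db":  {"mysql-server"},
    }
    PACKAGE_DEPS = {                 # package -> further package dependencies
        "nginx":        {"openssl"},
        "php-fpm":      {"openssl"},
        "mysql-server": {"libaio"},
    }

    def derive_vmi(requested_features):
        # Resolve the minimal closed package set for the requested features.
        packages = set()
        for f in requested_features:
            packages |= FEATURE_DEPS[f]
        frontier = set(packages)
        while frontier:
            frontier = {d for p in frontier
                        for d in PACKAGE_DEPS.get(p, set())} - packages
            packages |= frontier
        return sorted(packages)

    print(derive_vmi({"web", "php"}))
    # ['nginx', 'openssl', 'php-fpm'] -> no database packages stored or booted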
31

Jimborean, Alexandra. "Adapting the polytope model for dynamic and speculative parallelization." Phd thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00733850.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we present a Thread-Level Speculation (TLS) framework whose main feature is to speculatively parallelize a sequential loop nest in various ways, to maximize performance. We perform code transformations by applying the polyhedral model that we adapted for speculative and runtime code parallelization. For this purpose, we designed a parallel code pattern which is patched by our runtime system according to the profiling information collected on some execution samples. We show on several benchmarks that our framework yields good performance on codes which could not be handled efficiently by previously proposed TLS systems.
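A schematic Python sketch of thread-level speculation, not the thesis's actual runtime (the chunking, the versioning via list copies, and the index predictor are deliberate simplifications): chunks execute optimistically under a predicted access pattern, and a detected dependence violation triggers rollback and safe sequential re-execution.

    def speculative_loop(data, update, predict_index, chunks):
        # Chunks run "optimistically" on a snapshot; a dependence violation
        # (two writes to one cell inside a chunk) forces rollback and a
        # safe sequential re-execution of that chunk.
        committed = list(data)
        for lo, hi in chunks:
            snapshot = list(committed)       # cheap stand-in for versioning
            touched = set()
            ok = True
            for i in range(lo, hi):
                j = predict_index(i)         # profiled (e.g. linear) prediction
                if j in touched:
                    ok = False               # misspeculation detected
                    break
                touched.add(j)
                snapshot[j] = update(snapshot[j])
            if ok:
                committed = snapshot         # commit speculative work
            else:
                for i in range(lo, hi):      # rollback: sequential fallback
                    committed[predict_index(i)] = update(committed[predict_index(i)])
        return committed

    print(speculative_loop(list(range(8)), lambda v: v * 2,
                           lambda i: i, [(0, 4), (4, 8)]))   # all chunks commit
    print(speculative_loop(list(range(4)), lambda v: v + 1,
                           lambda i: i % 4, [(0, 8)]))       # conflict -> rollback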
32

Wilke, Claas. "Model-Based Run-time Verification of Software Components by Integrating OCL into Treaty." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-27365.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Model Driven Development is used to improve software quality and efficiency by automatically transforming abstract and formal models into software implementations. This is particularly sensible if the model's integrity can be proven formally and is preserved during the model's transformation. A standard for specifying software model integrity is the Object Constraint Language (OCL). Another topic of research is the dynamic development of software components, enabling software system composition at component runtime. As a consequence, the system's verification must be realized during system runtime (and not during transformation or compile time). Many established verification techniques cannot be used for runtime verification. A method to enable model-based runtime verification is developed in this work: how OCL constraints can be transformed into executable software artifacts, and how they can be used in the component-based system Treaty, is the major task of this diploma thesis.
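To suggest what "transforming OCL constraints into executable software artifacts" can look like, here is a deliberately tiny Python sketch (the mini-compiler and the Account class are hypothetical; real OCL translation is far richer, and eval is used only for brevity):

    class Account:
        def __init__(self, balance, min_balance):
            self.balance, self.minBalance = balance, min_balance

    def compile_invariant(ocl_expr):
        # Compile a tiny OCL-like invariant over 'self' into a predicate.
        # (Illustration only: eval on a trusted string, no full OCL parser.)
        py_expr = ocl_expr.replace("self.", "obj.")
        return lambda obj: eval(py_expr, {}, {"obj": obj})

    # context Account inv: self.balance >= self.minBalance
    check = compile_invariant("self.balance >= self.minBalance")
    print(check(Account(100, 0)))   # True: contract holds
    print(check(Account(-5, 0)))    # False: runtime contract violation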
33

Göbel, Steffen, Christoph Pohl, Ronald Aigner, Martin Pohlack, Simone Röttger, and Steffen Zschaler. "The COMQUAD Component Container Architecture and Contract Negotiation." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-100181.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Component-based applications require runtime support to be able to guarantee non-functional properties. This report proposes an architecture for a real-time-capable, component-based runtime environment, which makes it possible to separate non-functional and functional concerns in component-based software development. The architecture is presented with particular focus on three key issues: the conceptual architecture, an approach (including implementation issues) for splitting the runtime environment into a real-time-capable and a real-time-incapable part, and details of contract negotiation. The latter includes selecting component implementations for instantiation based on their non-functional properties.
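A minimal sketch of the contract-negotiation step mentioned above, in Python with invented data: the container picks a component implementation whose non-functional profile satisfies the client's required bounds, and reports failure when renegotiation would be needed.

    implementations = [
        {"name": "decoderA", "max_latency_ms": 40, "memory_kb": 900},
        {"name": "decoderB", "max_latency_ms": 15, "memory_kb": 2048},
    ]

    def negotiate(required):
        # Choose an implementation meeting every bound, preferring the
        # smallest memory footprint; None means the contract must be
        # renegotiated.
        feasible = [impl for impl in implementations
                    if impl["max_latency_ms"] <= required["max_latency_ms"]
                    and impl["memory_kb"] <= required["memory_kb"]]
        return min(feasible, key=lambda i: i["memory_kb"], default=None)

    print(negotiate({"max_latency_ms": 20, "memory_kb": 4096}))  # decoderB
    print(negotiate({"max_latency_ms": 10, "memory_kb": 4096}))  # None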
34

Nguyen, Viet Hoa. "Une méthode fondée sur les modèles pour gérer les propriétés temporelles des systèmes à composants logiciels." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S090/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis proposes an approach to integrate the use of time-related stochastic properties into a continuous design process based on models at runtime. The time-related specification of services is an important aspect of component-based architectures, for instance in distributed, volatile networks of computer nodes. The models-at-runtime approach eases the management of such architectures by maintaining abstract models of architectures synchronized with the physical, distributed execution platform. For self-adapting systems, the prediction of delays and throughput of a component assembly is of utmost importance to take adaptation decisions and accept evolutions that conform to the specifications. To this aim, we define a metamodel extension based on stochastic Petri nets as an internal time model for prediction. We design a library of patterns to ease the specification and prediction of common time properties of models at runtime and to make the synchronization of behaviors and structural changes easier. Furthermore, we apply Aspect-Oriented Modeling to weave the internal time models into timed behavior models of the component and the system. Our prediction engine is fast enough to perform prediction at runtime in a realistic setting and to validate models at runtime.
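As a rough illustration of using a stochastic time model for prediction (the two-transition net, its exponential rates, and the Monte Carlo estimator are invented, not the thesis's engine), this Python sketch estimates the end-to-end delay of a two-step service assembly:

    import random

    def simulate_delay(rates, runs=10000):
        # Monte Carlo estimate of the mean end-to-end delay for transitions
        # with exponential firing times, fired in sequence.
        total = 0.0
        for _ in range(runs):
            total += sum(random.expovariate(r) for r in rates)
        return total / runs

    # Service A (rate 5/s) feeds service B (rate 2/s):
    print(round(simulate_delay([5.0, 2.0]), 2))   # close to 1/5 + 1/2 = 0.7 s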
35

Gindraud, François. "Système distribué à adressage global et cohérence logicielle pourl’exécution d’un modèle de tâche à flot de données." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM001/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Distributed systems are widely used in HPC (High Performance Computing). Owing to rising energy concerns, some chip manufacturers have moved from multi-core CPUs to MPSoCs (Multi-Processor Systems on Chip), which include a distributed system on one chip. However, distributed systems, with their distributed memories, are hard to program compared to friendlier shared-memory systems. A family of solutions called DSM (Distributed Shared Memory) systems has been developed to simplify the programming of distributed systems. DSM systems include NUMA architectures, PGAS languages, and distributed task runtimes. The common strategy of these systems is to create a global address space of some kind and to automate network transfers on accesses to global objects. DSM systems usually differ in their interfaces, capabilities, semantics on global objects, and implementation levels (hardware/software). This thesis presents a new software DSM system called Givy. The motivation of Givy is to execute programs modeled as dynamic task graphs with data-flow dependencies on MPSoC architectures (MPPA). Contrary to many software DSMs, the global address space of Givy is indexed by real pointers: raw C pointers are made global to the distributed system. Givy global objects are memory blocks returned by malloc(). Data is replicated across nodes, and all these copies are managed by a software cache-coherence protocol called Owner Writable Memory. This protocol can relocate coherence metadata and thus should help execute irregular applications efficiently. The programming model cuts the program into tasks which are annotated with memory accesses and created dynamically. Memory annotations are used to drive coherence requests and provide useful information for scheduling and load balancing. The first contribution of this thesis is the overall design of the Givy runtime. A second contribution is the formalization of the Owner Writable Memory coherence protocol. A third contribution is its translation into a model-checker language (Cubicle) and correctness validation attempts. The last contribution is the detailed implementation of the allocator subsystem: the choice of real pointers for global references requires a tight integration between the memory allocator and the coherence protocol.
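A toy Python sketch in the spirit of a single-owner writable-copy protocol (states, messages, and the migration rule are simplified inventions; the real Owner Writable Memory protocol also relocates its metadata): one node owns the writable copy of a block, reads create shared copies, and a write migrates ownership and invalidates the others.

    class Block:
        # One globally addressed memory block with a single writable owner copy.
        def __init__(self, owner, nodes):
            self.owner = owner
            self.state = {n: ("valid" if n == owner else "invalid") for n in nodes}

        def read(self, node):
            if self.state[node] == "invalid":
                self.state[node] = "shared"     # fetch a read-only copy
            return dict(self.state)

        def write(self, node):
            self.owner = node                   # ownership migrates to the writer
            for n in self.state:                # all other copies are invalidated
                self.state[n] = "valid" if n == node else "invalid"
            return dict(self.state)

    b = Block("n0", ["n0", "n1", "n2"])
    print(b.read("n1"))    # n1 now holds a shared copy
    print(b.write("n2"))   # n2 becomes owner; n0 and n1 are invalidated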
36

Göbel, Steffen, Christoph Pohl, Ronald Aigner, Martin Pohlack, Simone Röttger, and Steffen Zschaler. "The COMQUAD Component Container Architecture and Contract Negotiation." Technische Universität Dresden, 2004. https://tud.qucosa.de/id/qucosa%3A26291.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Component-based applications require runtime support to be able to guarantee non-functional properties. This report proposes an architecture for a real-time-capable, component-based runtime environment, which makes it possible to separate non-functional and functional concerns in component-based software development. The architecture is presented with particular focus on three key issues: the conceptual architecture, an approach (including implementation issues) for splitting the runtime environment into a real-time-capable and a real-time-incapable part, and details of contract negotiation. The latter includes selecting component implementations for instantiation based on their non-functional properties.
37

Silva, Flayson Potenciano e. "Abordagem baseada em metamodelos para a representação e modelagem de características em linhas de produto de software dinâmicas." Universidade Federal de Goiás, 2016. http://repositorio.bc.ufg.br/tede/handle/tede/6231.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This dissertation presents a requirements representation approach for Dynamic Software Product Lines (DSPLs). DSPLs are oriented towards the design of adaptive applications, and each requirement is represented as a feature. Traditionally, features are represented in a Software Product Line (SPL) by a Feature Model (FM). Nonetheless, such a model does not originally support the representation of dynamic features. This dissertation proposes an extension to the FM that adds a representation for dynamic features, so that the model has higher expressivity regarding context-change conditions and the application itself. To this end, a metamodel based on the Ecore meta-metamodel has been developed to enable the definition both of Dynamic Feature Models (the proposed FM extension) and of Dynamic Feature Configurations (DFCs), the latter used to describe the possible configurations of products at runtime. In addition to a representation for dynamic features and the metamodel, this dissertation provides a tool that interprets the proposed metamodel and allows Dynamic Feature Models to be designed. Simulations involving dynamic feature state changes have been carried out, considering scenarios of a ubiquitous monitoring application for homecare patients.
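A minimal, hypothetical Python sketch of a dynamic feature configuration matching the homecare scenario above (feature names and context conditions are invented): features carry context conditions, and the active configuration is re-derived whenever the monitored context changes.

    features = {
        "monitoring":     {"active_if": lambda ctx: True},
        "fall_detection": {"active_if": lambda ctx: ctx["patient_home"]},
        "gps_tracking":   {"active_if": lambda ctx: not ctx["patient_home"]},
    }

    def reconfigure(ctx):
        # Re-derive the active product configuration for the current context.
        return sorted(f for f, spec in features.items() if spec["active_if"](ctx))

    print(reconfigure({"patient_home": True}))   # ['fall_detection', 'monitoring']
    print(reconfigure({"patient_home": False}))  # ['gps_tracking', 'monitoring']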
38

Reeves, Dwayne Lloyd. "Ormolu : generating runtime monitors from alloy models." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/76996.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents Ormolu, a runtime monitor used for monitoring distributed systems. Given an Alloy model, Ormolu generates a database schema and translates the constraints of the model into queries over the database. The translation preserves the semantics of Alloy, especially with regard to its type system. Ormolu allows domain-specific knowledge to be expressed in Alloy, where it can be checked and verified. The same model can then be used to check whether the constraints of the model are still satisfied at runtime. The feasibility of Ormolu is examined in the domain of air traffic control at a local airport, using data provided by the Tower Flight Data Manager developed by Lincoln Laboratory.
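To suggest the flavor of the Alloy-to-database translation (the schema, data, and constraint are invented; Ormolu's actual translation is more general), the following Python/SQLite sketch turns an Alloy-style multiplicity constraint, "all a: Aircraft | one a.position", into a query whose non-empty result signals a runtime violation:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE aircraft (id TEXT PRIMARY KEY);
    CREATE TABLE position (aircraft_id TEXT, x REAL, y REAL);
    INSERT INTO aircraft VALUES ('AC1'), ('AC2');
    INSERT INTO position VALUES ('AC1', 1.0, 2.0);  -- AC2 has no position fix
    """)

    # "one a.position" fails for any aircraft without exactly one position row.
    violations = conn.execute("""
        SELECT a.id FROM aircraft a
        LEFT JOIN position p ON p.aircraft_id = a.id
        GROUP BY a.id HAVING COUNT(p.aircraft_id) != 1
    """).fetchall()
    print(violations)   # [('AC2',)] -> constraint violated at runtime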
39

Tan, Xubin. "Hardware runtime management for task-based programming models." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/664109.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Task-based programming models allow programmers to express applications as a collection of tasks with dependences. They are simple to use and greatly improve programmability by using software runtimes to exploit task parallelism and heterogeneity over multi-core, many-core, and heterogeneous platforms. In these programming models, the runtimes guarantee correct execution order by managing tasks using task-dependence graphs (TDGs). These runtimes are powerful enough to provide high performance with coarse-grained tasks, although they impose overheads on the application execution to maintain all the information they need to do their work. However, as the current trend in processor architectures keeps including more cores and heterogeneity (in fact, complexity) in the systems, coarse-grained parallelism is not enough to feed all the underlying resources. Instead, fine-grained tasks are preferable, as they expose higher parallelism in applications, but the overheads introduced by software runtimes under these conditions prevent an efficient exploitation of fine-grained parallelism. The two most critical runtime overheads are task-dependence-graph management and task scheduling to heterogeneous systems. We propose a hardware architecture, Picos, consisting of a hardware task-dependence manager with nested-task support and a heterogeneous task scheduler, to accelerate the critical runtime functions of task-based programming models. With Picos, we aim at extending the benefit of these programming models to exploiting fine-grained task parallelism and heterogeneity. As a proof of concept, three prototypes of Picos have been designed in VHDL and implemented on a System-on-Chip platform consisting of regular ARM SMP cores and an integrated FPGA, and they have been analyzed with real benchmarks, with OmpSs and Linux running on the platform. The first prototype is a hardware task-dependence manager implemented on a Xilinx Zynq 7000-series SoC, connected to a 2-core ARM Cortex-A9 processor with bare-metal OS integration. With 24 simulated workers, running real task-dependence analysis in Picos, it scales up to a 21x speedup. The second prototype, Picos++, extended Picos with a new feature for nested-task support in hardware; to the best of our knowledge, this is the first time such a feature has been fully supported in hardware task-dependence managers. This prototype is fully integrated not only in hardware but also with a state-of-the-art parallel programming model and with Linux. The third prototype includes both a hardware task-dependence manager and a heterogeneous task scheduler, which receives ready tasks from the dependence manager and schedules them to the hardware execution units with the earliest estimated finish time. It is implemented on a Xilinx Zynq UltraScale+ MPSoC chip. In a system with 4 threads and up to 15 hardware accelerators, it achieves up to a 16.2x speedup for real benchmarks and saves up to 90% of energy.
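What a hardware task-dependence manager does can be sketched in a few lines of plain Python (task names and the read/write annotation API are hypothetical; Picos implements this bookkeeping in FPGA logic): track producer-consumer dependences and release a task once all of its inputs have been produced.

    from collections import defaultdict, deque

    class DependenceManager:
        def __init__(self):
            self.last_writer = {}               # address -> producing task
            self.waiting = defaultdict(set)     # task -> unmet dependences
            self.consumers = defaultdict(list)  # producer -> dependent tasks
            self.ready = deque()

        def submit(self, task, reads, writes):
            for addr in reads:                  # read-after-write dependences
                if addr in self.last_writer:
                    producer = self.last_writer[addr]
                    self.waiting[task].add(producer)
                    self.consumers[producer].append(task)
            for addr in writes:
                self.last_writer[addr] = task
            if not self.waiting[task]:
                self.ready.append(task)

        def finish(self, task):
            for t in self.consumers.pop(task, []):
                self.waiting[t].discard(task)
                if not self.waiting[t]:
                    self.ready.append(t)        # all inputs produced: release

    dm = DependenceManager()
    dm.submit("t1", reads=[], writes=["a"])
    dm.submit("t2", reads=["a"], writes=["b"])  # must wait for t1
    print(dm.ready.popleft())                   # t1
    dm.finish("t1")
    print(dm.ready.popleft())                   # t2, released by t1's completion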
40

Alférez, Salinas Germán Harvey. "Achieving Autonomic Web Service Compositions with Models at Runtime." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/34672.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Over the last years, Web services have become increasingly popular, because they allow businesses to share data and business process (BP) logic through a programmatic interface across networks. In order to reach the full potential of Web services, they can be combined to achieve specific functionalities. Web services run in complex contexts where arising events may compromise the quality of the system (e.g. a sudden security attack). As a result, it is desirable to count on mechanisms to adapt Web service compositions (or simply service compositions) according to problematic events in the context. Since critical systems may require prompt responses, manual adaptations are unfeasible in large and intricate service compositions. Thus, it is suitable to have autonomic mechanisms to guide their self-adaptation. One way to achieve this is by implementing variability constructs at the language level. However, this approach may become tedious, difficult to manage, and error-prone as the number of configurations for the service composition grows. The goal of this thesis is to provide a model-driven framework to guide autonomic adjustments of context-aware service compositions. This framework spans design time and runtime to face arising known and unknown context events (i.e., foreseen and unforeseen at design time) in the closed and open worlds, respectively. At design time, we propose a methodology for creating the models that guide autonomic changes. Since Service-Oriented Architecture (SOA) lacks support for systematic reuse of service operations, we represent service operations as Software Product Line (SPL) features in a variability model. As a result, our approach can support the construction of service composition families in mass-production environments. In order to reach optimum adaptations, the variability model and its possible configurations are verified at design time using Constraint Programming (CP). At runtime, when problematic events arise in the context, the variability model is leveraged for guiding autonomic changes of the service composition. The activation and deactivation of features in the variability model result in changes in a composition model that abstracts the underlying service composition. Changes in the variability model are reflected into the service composition by adding or removing fragments of Business Process Execution Language (WS-BPEL) code, which are deployed at runtime. Model-driven strategies guide the safe migration of running service composition instances. Under the closed-world assumption, the possible context events are fully known at design time. These events will eventually trigger the dynamic adaptation of the service composition. Nevertheless, it is difficult to foresee all the possible situations arising in uncertain contexts where service compositions run. Therefore, we extend our framework to cover the dynamic evolution of service compositions to deal with unexpected events in the open world. If model adaptations cannot solve uncertainty, the supporting models self-evolve according to abstract tactics that preserve expected requirements.
Alférez Salinas, GH. (2013). Achieving Autonomic Web Service Compositions with Models at Runtime [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34672
41

Richardson, S. G. "Discovering the runtime structure of software with probabilistic generative models." Connect to online resource, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447681.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
42

Osman, Nardine Zoulfikar. "Runtime verification of deontic and trust models in multiagent interactions." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/21993.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In distributed open systems, such as multiagent systems, new interactions are constantly appearing and new agents are continuously joining or leaving. It is unrealistic to expect agents to automatically trust new interactions. It is also unrealistic to expect agents to refer to their users for help every time a new interaction is encountered. An agent should decide for itself whether a specific interaction with a given group of agents is suitable or not. This thesis presents a runtime verification mechanism for addressing this problem. Verifying multiagent systems has its challenges. It is hard to predict the reliability of interactions in systems that are heavily influenced by autonomous agents without having access to the agent specifications. Available verification mechanisms may roughly be divided into two categories: (1) those that verify interaction models independently of specific agents, and (2) those that verify agent models whose constraints shape the interactions. Interaction models are not sufficient when verifying dynamic properties that depend on the agents engaged in an interaction. On the other hand, verifying agent specifications, such as BDI models, is extremely inefficient. Specifications are usually not explicit enough, resulting in the verification of a massive number of permissible interactions. Furthermore, in open systems, an agent's internal specification is usually not accessible, for many reasons including security and privacy. This thesis proposes a model checker that verifies a combination of a global interaction model and local deontic models. The deontic model may be viewed as a list of agent constraints that are deemed necessary to share and verify, such as the inability of the buyer to pay by credit card. The result is a lightweight, efficient, and powerful model checker that is capable of verifying rich properties of multiagent systems without the need to access agents' internal specifications. Although the proposed model checker has potential for addressing a variety of problems, the trust domain receives special attention due to the criticality of the trust issue in distributed open systems and the lack of reliable trust solutions. The thesis illustrates how a dynamic model checker, using deontic/trust models, can help agents decide whether the scenarios they wish to join are trustworthy or not. In summary, the main contribution of this research is in introducing interaction-time verification for checking deontic and trust models in multiagent interactions. When faced with new, unexplored interactions, agents can verify whether joining a given interaction with a given set of collaborating agents would violate any of their constraints.
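A tiny, invented Python sketch of checking a local deontic model against an interaction protocol before joining (the rules echo the credit-card example above; the thesis's model checker verifies far richer properties):

    deontic_model = {
        "buyer": {"forbidden": {"pay_by_credit_card"}},
    }

    protocol = ["request_quote", "send_quote", "pay_by_credit_card", "ship"]

    def violations(role, protocol, model):
        # Steps of the interaction protocol that this agent must not perform.
        banned = model.get(role, {}).get("forbidden", set())
        return [step for step in protocol if step in banned]

    found = violations("buyer", protocol, deontic_model)
    print(found or "interaction looks trustworthy for this agent")
    # ['pay_by_credit_card'] -> the buyer should decline to join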
43

Montague, S. "Concern-based specification and runtime verification of declarative process models." Thesis, University of Salford, 2012. http://usir.salford.ac.uk/38097/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An organisation has a number of business processes that, when carried out, achieve its business goals. A business process defines a specific ordering of activities and can be captured in a process model, constructed using a modelling language. In practice, business processes can be complex: they can consist of dozens of activities with intricate ordering dependencies. In this thesis, we claim that such complexity can be handled by the principle of separation of concerns. We introduce a concern-based framework called MIC (Modelling Interactions using Concerns). In the MIC framework, a business process is modelled in a declarative process model as a set of interrelated concerns. Computational logic is used to represent and reason about the concerns and the relations among them. It is argued that the declarative process models constructed by the MIC framework can be understood, maintained, and reused.
44

Morin, Brice. "Leveraging models from design-time to runtime to support dynamic variability." Rennes 1, 2010. http://www.theses.fr/2010REN1S101.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents a model-driven and aspect-oriented approach to tame the complexity of Dynamically Adaptive Systems (DAS). At design time, we capture the different facets of a DAS (variability, environment/context, reasoning, and architecture) using dedicated metamodels. Each feature of the variability model describing a DAS is refined into an aspect model. We leverage these design models at runtime to drive the dynamic adaptation process. Both the running system and its execution context are abstracted as models. Depending on the current context model, a reasoner interprets the reasoning model to determine a well-fitted selection of features. We then use Aspect-Oriented Modeling techniques to compose the aspect models (associated with the selected features) to derive the corresponding architecture. This way, there is no need to specify the whole set of possible configurations at design time: each configuration is automatically built when needed. We finally rely on model comparison to fully automate the reconfiguration process and adapt the running system, without the need to write low-level reconfiguration scripts.
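The adaptation loop described above can be sketched in a few lines of Python. This is a toy rendition under invented names (reason, weave, diff, and the example features are not from the thesis); it only shows how a target configuration is built on demand and diffed against the running one.

def reason(reasoning_model, context):
    """Select the features whose activation condition holds in this context."""
    return {f for f, condition in reasoning_model.items() if condition(context)}

def weave(aspects):
    """Compose aspect models into one target configuration (here, a flat union)."""
    target = set()
    for aspect in aspects:
        target |= aspect
    return target

def diff(running, target):
    """Model comparison yields reconfiguration commands, not hand-written scripts."""
    return ([("STOP", c) for c in running - target] +
            [("START", c) for c in target - running])

# Hypothetical example: degrade the GUI when the battery runs low.
reasoning_model = {"FullGUI": lambda ctx: ctx["battery"] > 30,
                   "TextGUI": lambda ctx: ctx["battery"] <= 30}
aspect_models = {"FullGUI": {"renderer", "animations"}, "TextGUI": {"renderer"}}
running = {"renderer", "animations"}

selected = reason(reasoning_model, {"battery": 12})
target = weave(aspect_models[f] for f in selected)
print(diff(running, target))  # [('STOP', 'animations')]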
45

Stamenkovich, Joseph Allan. "Enhancing Trust in Autonomous Systems without Verifying Software." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/89950.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The complexity of the software behind autonomous systems is rapidly growing, as is the range of their applications. It is not unusual for the lines of code to reach the millions, which adds to the verification challenge. The machine learning algorithms involved are often "black boxes" whose precise workings are not known to the developer applying them, and whose behavior is undefined when encountering an untrained scenario. With so much code, the possibility of bugs or malicious code is considerable. An approach is developed to monitor and possibly override the behavior of autonomous systems independently of the software controlling them. Application-isolated safety monitors are implemented in configurable hardware to ensure that the behavior of an autonomous system is limited to what is intended. The sensor inputs may be shared with the software, but the output from the monitors is only engaged when the system violates its prescribed behavior. For each rule the system is expected to follow, a dedicated monitor processes the relevant sensor information. The behavior is defined in linear temporal logic (LTL) and the associated monitors are implemented in a field programmable gate array (FPGA). An off-the-shelf drone is used to demonstrate the effectiveness of the monitors without any physical modifications to the drone. Upon detection of a violation, appropriate corrective actions are persistently enforced on the autonomous system.
Master of Science
Autonomous systems are surprisingly vulnerable, not just to malicious hackers but also to design errors and oversights. The lines of code required can quickly climb into the millions, and the artificial decision algorithms can be inscrutable and fully dependent upon the information they are trained on. These factors make the verification of the core software running our autonomous cars, drones, and everything else prohibitively difficult by traditional means. Independent safety monitors are implemented to provide internal oversight for these autonomous systems. A semi-automatic design process efficiently creates error-free monitors from the safety rules drones need to follow. These monitors remain separate and isolated from the software typically controlling the system, but use the same sensor information. They are embedded in the circuitry and act as their own small, task-specific processors, each watching to make sure a particular rule is not violated; when it is, they take control of the system and force corrective behavior. The monitors are added to a consumer off-the-shelf (COTS) drone to demonstrate their effectiveness, and an override is triggered whenever a monitored rule is violated. As with any electronic component, their effectiveness depends on reliable sensor information, as well as on the completeness of the rules from which the monitors are built.
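The monitoring idea itself is easy to picture in software, even though the thesis synthesizes the monitors into FPGA logic from LTL specifications. The following Python sketch of a latched safety monitor for a hypothetical invariant G(altitude <= MAX_ALT) is an illustration only; the rule, the threshold, and the class name are invented.

MAX_ALT = 120.0  # hypothetical altitude ceiling, in metres

class SafetyMonitor:
    """Watches one rule over the sensor stream; latches an override on violation."""
    def __init__(self, rule):
        self.rule = rule
        self.violated = False
    def step(self, sensors):
        if not self.rule(sensors):
            self.violated = True   # latched: corrective action stays engaged,
        return self.violated       # matching the persistent enforcement above

altitude_ok = SafetyMonitor(lambda s: s["altitude"] <= MAX_ALT)

for sample in [{"altitude": 80.0}, {"altitude": 130.0}, {"altitude": 90.0}]:
    if altitude_ok.step(sample):
        print("override engaged at", sample)  # fires for the 2nd and 3rd samples

In hardware, each such monitor is its own small automaton fed directly by the sensors, so a fault in the main flight software cannot disable the oversight.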
46

Castillo Villar, Emilio. "Parallel architectures and runtime systems co-design for task-based programming models." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/666783.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The increasing parallelism of modern computing systems has heightened the need for a holistic vision when designing multiprocessor architectures, one that takes into account the needs of the programming models and applications. Nowadays, system design consists of several layers stacked on top of each other, from the architecture up to the application software. Although this layering allows for a separation of concerns, making it possible to change layers independently thanks to well-defined interfaces between them, it hampers the design of future systems as Moore's Law comes to an end. Current performance improvements in computer architecture are driven by shrinking the transistor channel width, allowing faster and more power-efficient chips to be made. However, technology is reaching physical limits beyond which the transistor size cannot be reduced further, which requires a change of paradigm in systems design. This thesis proposes to break this layered design and advocates for a system where the architecture and the programming model's runtime system can exchange information towards a common goal: improving performance and reducing power consumption. By making the architecture aware of runtime information, such as the Task Dependency Graph (TDG) in the case of dataflow task-based programming models, it is possible to reduce power consumption by exploiting the critical path of the graph. Moreover, the architecture can provide hardware support for creating such a graph, reducing the runtime overheads and making the execution of fine-grained tasks possible in order to increase the available parallelism. Finally, the current status of inter-node communication primitives can be exposed to the runtime system to enable more efficient communication scheduling, which also creates new opportunities for overlapping computation and communication that were not possible before. An evaluation of the proposals introduced in this thesis is provided, and a methodology to simulate and characterize application behavior is also presented.
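The critical-path exploitation mentioned above can be illustrated with a short Python sketch: compute earliest and latest start times over a task dependency graph, and run tasks with slack at a lower frequency. The graph, the costs, and the two frequency levels are invented for the example; this is not the thesis's hardware mechanism.

import functools

deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # task -> predecessors
cost = {"A": 4, "B": 10, "C": 3, "D": 2}                   # task execution times
succs = {t: [s for s in deps if t in deps[s]] for t in deps}

@functools.lru_cache(maxsize=None)
def earliest_finish(t):
    return cost[t] + max((earliest_finish(p) for p in deps[t]), default=0)

makespan = max(earliest_finish(t) for t in deps)

@functools.lru_cache(maxsize=None)
def latest_start(t):
    return min((latest_start(s) for s in succs[t]), default=makespan) - cost[t]

for t in sorted(deps):
    slack = latest_start(t) - (earliest_finish(t) - cost[t])
    print(t, "slack:", slack, "->", "high" if slack == 0 else "low", "frequency")
# A, B, D lie on the critical path (slack 0); C can safely run slower.

Slowing down off-critical-path tasks saves power without lengthening the makespan, which is exactly the opportunity a TDG-aware architecture can exploit.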
47

Krishna, Renan. "Constructing runtime models with bigraphs to address ubiquitous computing service composition volatility." Thesis, University of Sussex, 2015. http://sro.sussex.ac.uk/id/eprint/54282/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we explore the appropriateness of the language abstractions provided by Bigraphs for constructing a model at runtime to tackle the problem of volatility in a service composition running on a mobile device. Our contributions to knowledge are as follows: 1) We have shown that Bigraphs (Milner, 2009) are suitable for expressing models at runtime. 2) We have offered Bigraph language abstractions as an appropriate solution to some of the research problems posed by the models at runtime community (Aßmann et al., 2012). 3) We have discussed the general lessons learnt from using Bigraphs for a practical application such as a model at runtime. 4) We have discussed the general lessons learnt from our experiences of designing models at runtime. 5) We have implemented the model at runtime using the BPL Tool (ITU, 2011), have experimentally studied the response times of our Bigraphical model, and have suggested appropriate enhancements to the tool based on our experiences. We present techniques to parameterize the reaction rules so that the matching algorithm of the BPL Tool returns a single match, giving us the ability to program the model dynamically at runtime. We also show how to query the Bigraph structure.
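The reaction-rule style of such a runtime model can be conveyed with a deliberately simplified Python sketch, using nested dictionaries instead of bigraphs; none of the names below come from the thesis or the BPL Tool. A parameterized rule rewrites the model when a resource used by the service composition becomes unavailable.

# The runtime model: places (resources, composition) with their current state.
model = {
    "resources": {"gps": "down", "wifi_location": "up"},
    "composition": {"route_planner": {"uses": "gps", "fallback": "wifi_location"}},
}

def rebind_rule(model):
    """Reaction rule: if a used resource is down and its fallback is up, rebind."""
    resources = model["resources"]
    for service in model["composition"].values():
        if (resources.get(service["uses"]) == "down"
                and resources.get(service["fallback"]) == "up"):
            service["uses"] = service["fallback"]
            return True  # one match rewritten; the caller may apply the rule again
    return False

while rebind_rule(model):  # apply until no match remains, as in rule-based rewriting
    pass
print(model["composition"])  # route_planner now uses wifi_location

Parameterizing the rule (here, over the service and its fallback) is what keeps the match unique and deterministic, mirroring the thesis's technique for driving the BPL Tool's matching algorithm.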
48

Zhong, Christopher. "Modeling humans as peers and supervisors in computing systems through runtime models." Diss., Kansas State University, 2012. http://hdl.handle.net/2097/14047.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Doctor of Philosophy
Department of Computing and Information Sciences
Scott A. DeLoach
There is a growing demand for more effective integration of humans and computing systems, specifically in multiagent and multirobot systems. There are two aspects to consider in human integration: (1) the ability to control an arbitrary number of robots (particularly heterogeneous robots) and (2) integrating humans as peers in computing systems instead of just users or supervisors. With traditional supervisory control of multirobot systems, the number of robots that a human can manage effectively is between four and six [17]. A limitation of traditional supervisory control is that the human must interact individually with each robot, which limits the upper bound on the number of robots a human can control effectively. In this work, I define the concept of "organizational control" together with an autonomous mechanism that can perform task allocation and other low-level housekeeping duties, which significantly reduces the need for the human to interact with individual robots. Humans are very versatile and robust in the types of tasks they can accomplish. However, failures in computing systems are common, so redundancies are included to mitigate the chance of failure. When all redundancies have failed, the system fails and is unable to accomplish its tasks. One way to further reduce the chance of a system failure is to integrate humans as peer "agents" in the computing system. As part of the system, humans can be assigned tasks that would otherwise have been impossible to complete due to failures.
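A toy Python sketch of such a task-allocation mechanism is given below. It illustrates the organizational-control idea only; the capability sets, the greedy policy, and all names are invented, not the dissertation's actual runtime models.

agents = {
    "robot1": {"navigate", "lift"},
    "robot2": {"navigate"},
    "human1": {"navigate", "diagnose"},  # a human peer, with its own capabilities
}
tasks = [("move_crate", {"navigate", "lift"}),
         ("inspect_fault", {"diagnose"}),
         ("patrol", {"navigate"})]

def allocate(tasks, agents):
    """Greedy allocation: each task goes to the least-loaded capable agent."""
    load = {a: 0 for a in agents}
    assignment = {}
    for task, required in tasks:
        capable = [a for a, caps in agents.items() if required <= caps]
        if capable:  # a failed robot simply drops out of `agents`; human peers
            chosen = min(capable, key=load.get)  # absorb the tasks it leaves behind
            assignment[task] = chosen
            load[chosen] += 1
    return assignment

print(allocate(tasks, agents))
# {'move_crate': 'robot1', 'inspect_fault': 'human1', 'patrol': 'robot2'}

Because the human only sees the organization's task assignments, not each robot's low-level state, the four-to-six robot ceiling of supervisory control no longer applies.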
49

Obrovac, Marko. "Chemical Computing for Distributed Systems: Algorithms and Implementation." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00925257.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the emergence of highly heterogeneous, dynamic, and large-scale distributed platforms, the need for a way to program and manage them efficiently has become apparent. The concept of autonomic computing proposes to create self-managed systems, that is, systems that are aware of their components and their environment and can configure, optimize, heal, and protect themselves. In the pursuit of such systems, declarative programming, whose objective is to ease the programmer's task by separating control from the logic of the computation, has recently regained a great deal of interest. In particular, rule-based programming is considered a promising model in this quest for adequate programming abstractions for these platforms. However, although these models are attracting much attention, they create a demand for generic tools capable of executing them at large scale. The chemical programming model, designed following the chemical metaphor, is a higher-order, rule-based programming model with non-deterministic execution, where rules are applied concurrently on a multiset of data. In this thesis, we propose the design, development, and experimental evaluation of a distributed middleware for the execution of chemical programs on large-scale, generic platforms. The proposed architecture combines a peer-to-peer communication layer with a protocol for the atomic capture of the objects on which rules are to be applied, and an efficient termination-detection mechanism. We describe the middleware prototype implementing this architecture. Based on its deployment on a large-scale experimental platform, we present performance results that confirm the analytical complexities obtained and experimentally show the viability of such a programming model.
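A sequential toy version of the chemical model makes the execution style tangible. The following sketch, with invented names, repeatedly applies one reaction rule to a multiset until no molecules react any more; in the distributed middleware, the atomic-capture protocol is what makes this pair-grabbing safe across nodes.

import random

def run_chemical(multiset, rule, reacts):
    """Apply `rule` to non-deterministically chosen reacting pairs until inertia."""
    solution = list(multiset)
    while True:
        pairs = [(i, j) for i in range(len(solution))
                 for j in range(i + 1, len(solution))
                 if reacts(solution[i], solution[j])]
        if not pairs:
            return solution  # inertia: no rule applies, the computation terminates
        i, j = random.choice(pairs)  # concurrency modelled here by random choice
        a, b = solution[i], solution[j]
        solution = [x for k, x in enumerate(solution) if k not in (i, j)]
        solution.append(rule(a, b))

# "Maximum" as a chemical program: any two molecules react, the smaller vanishes.
print(run_chemical([4, 17, 8, 2, 42, 23], rule=max, reacts=lambda a, b: True))
# [42]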
50

Höfig, Edzard [Verfasser], and Ina [Akademischer Betreuer] Schieferdecker. "Interpretation of Behaviour Models at Runtime: Performance Benchmark and Case Studies / Edzard Höfig. Betreuer: Ina Schieferdecker." Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2011. http://d-nb.info/101482771X/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography