Dissertations / Theses on the topic 'Autonomic computing models'


Consult the top 18 dissertations / theses for your research on the topic 'Autonomic computing models.'


1

Akour, Mohammed Abd Alwahab. "Towards Change Propagating Test Models In Autonomic and Adaptive Systems." Diss., North Dakota State University, 2012. https://hdl.handle.net/10365/26504.

Abstract:
The major motivation for self-adaptive computing systems is the self-adjustment of the software according to a changing environment. Adaptive computing systems can add, remove, and replace their own components in response to changes in the system itself and in the operating environment of a software system. Although these systems may provide a certain degree of confidence against new environments, their structural and behavioral changes should be validated after adaptation occurs at runtime. Testing dynamically adaptive systems is extremely challenging because both the structure and behavior of the system may change during its execution. After self-adaptation occurs in autonomic software, new components may be integrated into the software system. When new components are incorporated, testing them becomes a vital phase for ensuring that they will interact and behave as expected. When self-adaptation removes existing components, a predefined test set may no longer be applicable due to changes in the program structure. Investigating techniques for dynamically updating regression tests after adaptation is therefore necessary to ensure such approaches can be applied in practice. We propose a model-driven approach based on change propagation for synchronizing a runtime test model for a software system with the model of its component structure after dynamic adaptation. A workflow and meta-model to support the approach, referred to as Test Information Propagation (TIP), were provided. To demonstrate TIP, a prototype was developed that simulates a reductive and an additive change to an autonomic, service-oriented healthcare application. To demonstrate that the TIP approach generalizes to the domain of up-to-date runtime testing for self-adaptive software systems, it was also applied to the self-adaptive JPacman 3.0 system. To measure the accuracy of the TIP engine, we compare its output against the work of a developer who manually identified the changes needed to update the test model after self-adaptation occurs. The experiments show that TIP is highly accurate for reductive change propagation across self-adaptive systems. Promising results have been achieved in simulating the additive changes as well.
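The abstract describes propagating component-model changes into a runtime test model. As a rough illustration of that idea (a minimal sketch; all class, method, and test names below are invented, not taken from the thesis), a reductive change can be propagated by retiring the tests bound to a removed component, and an additive change by registering tests for the new one:

```python
# Hypothetical sketch of change propagation from a component model to a
# test model, in the spirit of TIP; names and structure are illustrative.

class TestModel:
    def __init__(self, tests_by_component):
        # Maps component name -> set of test identifiers covering it.
        self.tests_by_component = tests_by_component

    def propagate_removal(self, component):
        """Reductive change: retire tests that exercise a removed component."""
        return self.tests_by_component.pop(component, set())

    def propagate_addition(self, component, generated_tests):
        """Additive change: register tests synthesized for a new component."""
        self.tests_by_component.setdefault(component, set()).update(generated_tests)

model = TestModel({"Billing": {"t1", "t2"}, "Scheduling": {"t3"}})
retired = model.propagate_removal("Billing")         # {'t1', 't2'} no longer runnable
model.propagate_addition("Reminders", {"t4", "t5"})  # new tests join the suite
print(retired, model.tests_by_component)
```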
2

Cetina Englada, Carlos. "Achieving Autonomic Computing through the Use of Variability Models at Run-time." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/7484.

Abstract:
Increasingly, software needs to dynamically adapt its behavior at run-time in response to changing conditions in the supporting computing infrastructure and in the surrounding physical environment. Adaptability is emerging as a necessary underlying capability, particularly for highly dynamic systems such as context-aware or ubiquitous systems. By automating tasks such as installation, adaptation, or healing, Autonomic Computing envisions computing environments that evolve without the need for human intervention. Even though there is a fair amount of work on architectures and their theoretical design, Autonomic Computing has been criticised as a "hype topic" because very little of it has been implemented fully. Furthermore, given that the autonomic system must change states at runtime and that some of those states may emerge and are much less deterministic, there is a great challenge in providing new guidelines, techniques and tools to help autonomic system development. This thesis shows that building on the central ideas of Model Driven Development (models as first-order citizens) and Software Product Lines (variability management) can play a significant role as we move towards implementing the key self-management properties associated with autonomic computing. The presented approach encompasses systems that are capable of modifying their own behavior with respect to changes in their operating environment, by using variability models as if they were the policies that drive the system's autonomic reconfiguration at runtime. Under a set of reconfiguration commands, the components that make up the architecture dynamically cooperate to change the configuration of the architecture to a new configuration. This work also provides the implementation of a Model-Based Reconfiguration Engine (MoRE) to blend the above ideas. Given a context event, MoRE queries the variability models to determine how the system should evolve, and then it provides the mechanisms for modifying the system.
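The abstract casts variability models as runtime policies: a context event leads MoRE to a target configuration, and the difference from the current configuration yields reconfiguration commands. A minimal sketch of that control flow (the feature model and resolution rules here are invented for illustration, not MoRE's actual API) might look as follows:

```python
# Illustrative sketch of a MoRE-style reconfiguration step: a context
# event selects a target feature configuration, and the diff between the
# current and target configurations yields reconfiguration commands.

FEATURE_MODEL = {
    "presence_detected": {"Lighting", "Heating"},  # active features per context
    "room_empty": {"SecurityMonitoring"},
}

def reconfigure(current_features, context_event):
    target = FEATURE_MODEL.get(context_event, current_features)
    commands = [("deactivate", f) for f in current_features - target]
    commands += [("activate", f) for f in target - current_features]
    return target, commands

current = {"Lighting", "Heating"}
current, cmds = reconfigure(current, "room_empty")
print(cmds)  # e.g. [('deactivate', 'Lighting'), ('deactivate', 'Heating'), ('activate', 'SecurityMonitoring')]
```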
Cetina Englada, C. (2010). Achieving Autonomic Computing through the Use of Variability Models at Run-time [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7484
3

Alférez Salinas, Germán Harvey. "Achieving Autonomic Web Service Compositions with Models at Runtime." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/34672.

Abstract:
Over the last years, Web services have become increasingly popular because they allow businesses to share data and business process (BP) logic through a programmatic interface across networks. In order to reach the full potential of Web services, they can be combined to achieve specific functionalities. Web services run in complex contexts where arising events may compromise the quality of the system (e.g. a sudden security attack). As a result, it is desirable to count on mechanisms to adapt Web service compositions (or simply service compositions) according to problematic events in the context. Since critical systems may require prompt responses, manual adaptations are unfeasible in large and intricate service compositions. Thus, it is suitable to have autonomic mechanisms to guide their self-adaptation. One way to achieve this is by implementing variability constructs at the language level. However, this approach may become tedious, difficult to manage, and error-prone as the number of configurations for the service composition grows. The goal of this thesis is to provide a model-driven framework to guide autonomic adjustments of context-aware service compositions. This framework spans design time and runtime to face arising known and unknown context events (i.e., foreseen and unforeseen at design time) in the closed and open worlds respectively. At design time, we propose a methodology for creating the models that guide autonomic changes. Since Service-Oriented Architecture (SOA) lacks support for systematic reuse of service operations, we represent service operations as Software Product Line (SPL) features in a variability model. As a result, our approach can support the construction of service composition families in mass-production environments. In order to reach optimum adaptations, the variability model and its possible configurations are verified at design time using Constraint Programming (CP). At runtime, when problematic events arise in the context, the variability model is leveraged to guide autonomic changes of the service composition. The activation and deactivation of features in the variability model result in changes in a composition model that abstracts the underlying service composition. Changes in the variability model are reflected into the service composition by adding or removing fragments of Web Services Business Process Execution Language (WS-BPEL) code, which are deployed at runtime. Model-driven strategies guide the safe migration of running service composition instances. Under the closed-world assumption, the possible context events are fully known at design time. These events will eventually trigger the dynamic adaptation of the service composition. Nevertheless, it is difficult to foresee all the possible situations arising in uncertain contexts where service compositions run. Therefore, we extend our framework to cover the dynamic evolution of service compositions to deal with unexpected events in the open world. If model adaptations cannot solve uncertainty, the supporting models self-evolve according to abstract tactics that preserve expected requirements.
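Since the abstract maps feature (de)activation in the variability model to adding or removing WS-BPEL fragments in the composition, a compact way to picture the mechanism is a registry of fragments keyed by feature (a sketch only; the feature names and fragment strings are invented, not from the thesis):

```python
# Sketch: feature toggles drive fragment insertion/removal in a service
# composition model. The registry stands in for WS-BPEL snippets.

FRAGMENTS = {
    "Encryption": "<invoke partnerLink='crypto' operation='encrypt'/>",
    "Audit":      "<invoke partnerLink='audit' operation='log'/>",
}

def apply_feature_change(composition, feature, active):
    """Reflect a variability-model change into the composition model."""
    if active:
        composition[feature] = FRAGMENTS[feature]  # weave the fragment in
    else:
        composition.pop(feature, None)             # strip the fragment out
    return composition

composition = {}
apply_feature_change(composition, "Encryption", active=True)   # security event
apply_feature_change(composition, "Audit", active=True)
apply_feature_change(composition, "Encryption", active=False)  # threat cleared
print(composition)
```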
Alférez Salinas, GH. (2013). Achieving Autonomic Web Service Compositions with Models at Runtime [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34672
4

Ferreira Leite, Alessandro. "A user-centered and autonomic multi-cloud architecture for high performance computing applications." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112355/document.

Abstract:
Cloud computing has been seen as an option to execute high performance computing (HPC) applications. While traditional HPC platforms such as grids and supercomputers offer a stable environment in terms of failures, performance, and number of resources, cloud computing offers on-demand resources, generally with unpredictable performance, at low financial cost. Furthermore, in a cloud environment, failures are part of its normal operation. To overcome the limits of a single cloud, clouds can be combined, forming a cloud federation, often with minimal additional costs for the users. A cloud federation can help both cloud providers and cloud users to achieve their goals, such as reducing the execution time, achieving minimum cost, increasing availability, and reducing power consumption, among others. Hence, cloud federation can be an elegant solution to avoid over-provisioning, thus reducing the operational costs in an average load situation and removing resources that would otherwise remain idle and waste power. However, cloud federation increases the range of resources available to the users. As a result, cloud or system administration skills may be demanded from the users, as well as considerable time to learn about the available options. In this context, some questions arise, such as: (a) which cloud resource is appropriate for a given application? (b) how can users execute their HPC applications with acceptable performance and financial costs, without needing to re-engineer the applications to fit the clouds' constraints? (c) how can non-cloud specialists maximize the features of the clouds, without being tied to a cloud provider? and (d) how can cloud providers use the federation to reduce power consumption of the clouds, while still being able to give service-level agreement (SLA) guarantees to the users? Motivated by these questions, this thesis presents an SLA-aware application consolidation solution for cloud federation. Using a multi-agent system (MAS) to negotiate virtual machine (VM) migrations between the clouds, simulation results show that our approach could reduce power consumption by up to 46% while trying to meet performance requirements. Using the federation, we developed and evaluated an approach to execute a huge bioinformatics application at zero cost. Moreover, we could decrease the execution time by 22.55% over the best single-cloud execution. In addition, this thesis presents a cloud architecture called Excalibur to auto-scale cloud-unaware applications. Executing a genomics workflow, Excalibur could seamlessly scale the applications up to 11 virtual machines, reducing the execution time by 63% and the cost by 84% when compared to a user's configuration. Finally, this thesis presents a product line engineering (PLE) process to handle the variabilities of infrastructure-as-a-service (IaaS) clouds, and an autonomic multi-cloud architecture that uses this process to configure and to deal with failures autonomously. The PLE process uses an extended feature model (EFM) with attributes to describe the resources and to select them based on users' objectives. Experiments realized with two different cloud providers show that, using the proposed model, users could execute their applications in a cloud federation environment without needing to know the variabilities and constraints of the clouds.
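The PLE process described above selects IaaS resources from an extended feature model with attributes according to user objectives. A toy version of attribute-based selection (a sketch under invented data; the instance types, attributes, and objective are fabricated, not taken from the thesis) could be:

```python
# Sketch of attribute-based resource selection over an extended feature
# model: each leaf feature (instance type) carries attributes, and the
# user's objective picks the cheapest candidate satisfying constraints.

INSTANCE_FEATURES = [
    {"name": "small", "vcpus": 2,  "ram_gb": 4,   "usd_per_hour": 0.05},
    {"name": "large", "vcpus": 16, "ram_gb": 64,  "usd_per_hour": 0.80},
    {"name": "hpc",   "vcpus": 36, "ram_gb": 128, "usd_per_hour": 1.50},
]

def select(min_vcpus, min_ram_gb):
    candidates = [f for f in INSTANCE_FEATURES
                  if f["vcpus"] >= min_vcpus and f["ram_gb"] >= min_ram_gb]
    # Objective: minimize hourly cost among valid configurations.
    return min(candidates, key=lambda f: f["usd_per_hour"], default=None)

print(select(min_vcpus=8, min_ram_gb=32))  # -> the 'large' feature
```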
5

Sharrock, Rémi. "Gestion autonomique de performance, d'énergie et de qualité de service : Application aux réseaux filaires, réseaux de capteurs et grilles de calcul." Phd thesis, Toulouse, INPT, 2010. http://oatao.univ-toulouse.fr/11717/1/sharrock.pdf.

Abstract:
The main motivation for this thesis is to address the growing complexity of computing systems, which, in the near future (on the order of a few years), is likely to become the main obstacle to their evolution and development. Today the trend is reversing, and the cost of human management exceeds the cost of the hardware and software infrastructure. Moreover, manual administration of large systems (distributed applications, sensor networks, network equipment) is not only slow but also prone to many human errors. One emerging research field is autonomic computing, whose goal is to make these systems self-managed. We propose an approach for describing high-level autonomic management policies. These policies allow the system to ensure four fundamental self-management properties: self-healing, self-configuration, self-protection, and self-optimization. Our contributions concern the specification of autonomic management policy description diagrams called (S)PDD, "(Sensor) Policy Description Diagrams". These diagrams are implemented in the autonomic manager TUNe, and the approach has been validated on several systems: electromagnetic simulation distributed over a computing grid, SunSPOT sensor networks, and the DIET computing dispatcher. A second part presents a mathematical model of self-optimization for a datacenter. We introduce a minimization problem over a criterion that combines the power consumption of the datacenter's network equipment with the quality of service of the applications deployed on the datacenter. A heuristic takes into account the constraints imposed by the routing functions used.
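The abstract states the optimization criterion only informally (network power consumption plus application quality of service). An illustrative formulation of such a joint objective, not taken from the thesis, would be:

```latex
\min_{x \in \mathcal{X}} \; J(x) \;=\; \alpha \sum_{e \in E} P_e(x) \;+\; \beta \sum_{a \in A} D_a(x)
\qquad \text{subject to the routing constraints } R(x) \le r_{\max}
```

where \(x\) ranges over datacenter configurations, \(P_e(x)\) is the power drawn by network equipment \(e\), \(D_a(x)\) measures the QoS degradation of deployed application \(a\), and \(\alpha, \beta\) weight energy against service quality; the heuristic mentioned above searches this space under the routing constraints.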
6

Maiden, Wendy Marie. "DualTrust: A trust management model for swarm-based autonomic computing systems." Pullman, Wash.: Washington State University, 2010. http://www.dissertations.wsu.edu/Thesis/Spring2010/W_Maiden_6041310.pdf.

Abstract:
Thesis (M.A. in electrical engineering and computer science), Washington State University, May 2010. Title from PDF title page (viewed on May 3, 2010). "Department of Electrical Engineering and Computer Science." Includes bibliographical references (p. 110-117).
7

Thompson, Ruth. "Viable computing systems : a set theory decomposition of Anthony Stafford Beer's viable system model : aspirant of surpassing autonomic computing." Thesis, Liverpool John Moores University, 2011. http://researchonline.ljmu.ac.uk/6016/.

8

Salfner, Felix. "Event-based failure prediction: an extended hidden Markov model approach." Berlin: dissertation.de, 2008. http://d-nb.info/990430626/04.

9

Bourret, Pierre. "Modèle à Composant pour Plate-forme Autonomique." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM083/document.

Abstract:
In the last decades, computing environments have been getting more and more complex, filled with miniaturized and sophisticated devices that can handle mobility and wireless communications. Ubiquitous computing, as envisioned by Mark Weiser in 1991, promotes the seamless integration of those computing environments with the real world in order to offer new kinds of applications. However, writing software for ubiquitous environments raises numerous challenges, mainly the problem of how to make an application adapt itself in an ever-changing context. From another perspective, as classical software systems were growing in size and complexity, IBM proposed the concept of autonomic computing to help contain the burden of administering massive and numerous systems. This PhD thesis is based on an approach where applications are designed in terms of components using and providing services. A development model based on a reference architecture for the conception of ubiquitous applications is proposed, greatly inspired by research in the autonomic computing field. In this model, the application is managed by a hierarchy of autonomic managers that base their decisions on a central representation of the system. The fulfilment of this contribution requires making the underlying middleware more reflective, in order to support new kinds of runtime adaptations. We also provide a model that depicts the running system and its dynamics in a uniform way, based on REST principles. Applications relying on this reflective middleware and represented by this model are what we call Autonomic-Ready. Implementations of our proposals have been integrated in the Apache Felix iPOJO service-oriented component model. The system representation, named Everest, is provided as an OW2 Chameleon subproject. Validation is based on iCASA, a pervasive environment development and simulation environment.
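To make the "hierarchy of autonomic managers over a central system representation" concrete, here is a minimal, invented sketch of a child manager whose decisions are read off a shared system model and aggregated by a parent (this is illustrative Python, not iPOJO or Everest code):

```python
# Minimal sketch of hierarchical autonomic managers consulting a shared
# system representation; all names and rules are illustrative.

SYSTEM_MODEL = {"devices": {"lamp-1": {"state": "on", "battery": 0.12}}}

class DeviceManager:
    def analyze_and_plan(self, model):
        actions = []
        for name, dev in model["devices"].items():
            if dev["battery"] < 0.2 and dev["state"] == "on":
                actions.append(("dim", name))  # local, device-level decision
        return actions

class ApplicationManager:
    """Parent manager: aggregates children's plans, may override them."""
    def __init__(self, children):
        self.children = children

    def run(self, model):
        plan = [a for child in self.children for a in child.analyze_and_plan(model)]
        return plan  # a real manager would execute these via the middleware

root = ApplicationManager([DeviceManager()])
print(root.run(SYSTEM_MODEL))  # [('dim', 'lamp-1')]
```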
10

Mezghani, Emna. "Towards Autonomic and Cognitive IoT Systems, Application to Patients’ Treatments Management." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0016/document.

Abstract:
In this thesis, we propose a collaborative model-driven methodology for designing autonomic cognitive IoT systems to deal with IoT design complexity. Within this methodology, we defined a set of autonomic cognitive design patterns that aim at (1) delineating the dynamic coordination of the autonomic processes to deal with the system's context changeability and requirements evolution at run-time, and (2) adding cognitive abilities to IoT systems to understand big data and generate new insights. To address challenges related to big data and scalability, we propose a generic semantic big data platform that aims at integrating heterogeneous distributed data sources deployed on the cloud and generating knowledge that will be exposed as a service (Knowledge as a Service, KaaS). As an application of the proposed contributions, we instantiated and combined a set of patterns for the development of a prescriptive cognitive system for patient treatment management. We elaborated two ontological models describing the wearable devices and the patient context, as well as the medical knowledge for decision-making. The proposed system is evaluated from the clinical perspective through collaboration with medical experts, and from the performance perspective through deploying the system within the KaaS under different configurations.
11

Etchevers, Xavier. "Déploiement d’applications patrimoniales en environnements de type informatique dans le nuage." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM100/document.

Abstract:
Cloud computing aims to cut down on the outlay and operational expenses involved in setting up and running applications. To do this, an application is split into a set of virtualized hardware and software resources. This virtualized application can be autonomously managed, making it responsive to the dynamic changes affecting its running environment. This is referred to as Application Life-cycle Management (ALM). In cloud computing, ALM is a growing but immature market, with many offers claiming to significantly improve productivity. However, all these solutions are faced with a major restriction: the duality between the level of autonomy they offer and the type of applications they can handle. To address this, this thesis focuses on managing the initial deployment of an application to demonstrate that the duality is artificial. The main contributions of this work are presented in a platform named VAMP (Virtual Applications Management Platform). VAMP can deploy any legacy application distributed in the cloud in an autonomous, generic and reliable way. It consists of: a component-based model to describe the elements making up an application and their projection on the running infrastructure, as well as the dependencies binding them in the applicative architecture; an asynchronous, distributed and reliable protocol for self-configuration and self-activation of the application; and mechanisms ensuring the reliability of the VAMP system itself. Beyond implementing the solution, the most critical aspects of running VAMP have been formally verified using model checking tools. A validation step was also used to demonstrate the genericity of the proposal through various real-life implementations.
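One way to picture self-configuration over a component-based description is dependency-ordered startup: components are configured and activated only after the components they depend on. The sketch below derives such an order from a declarative architecture; it is a centralized stand-in for illustration only (the architecture and names are invented), whereas VAMP's actual protocol is asynchronous and distributed:

```python
# Sketch: derive a safe configuration/start order from declared
# component dependencies.
from graphlib import TopologicalSorter  # Python 3.9+

architecture = {
    "load_balancer": {"app_server"},  # load_balancer depends on app_server
    "app_server": {"database"},
    "database": set(),
}

order = list(TopologicalSorter(architecture).static_order())
print(order)  # ['database', 'app_server', 'load_balancer']
```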
12

Pacheco Ramirez, Jesus Horacio. "An Anomaly Behavior Analysis Methodology for the Internet of Things: Design, Analysis, and Evaluation." Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/625581.

Abstract:
Advances in mobile and pervasive computing, social network technologies, and the exponential growth in Internet applications and services will lead to the development of the Internet of Things (IoT). IoT services will be a key enabling technology for the development of smart infrastructures that will revolutionize the way we do business, manage critical services, and secure, protect, and entertain ourselves. Large-scale IoT applications, such as critical infrastructures (e.g., smart grid, smart transportation, smart buildings), are distributed systems characterized by interdependence, cooperation, competition, and adaptation. The integration of IoT premises with sensors, actuators, and control devices allows smart infrastructures to achieve reliable and efficient operations and to significantly reduce operational costs. However, with the use of IoT, we are experiencing grand challenges in securing and protecting such advanced information services due to the significant increase in the attack surface. The interconnections between a growing number of devices expose the vulnerability of IoT applications to attackers. Even devices which are intended to operate in isolation are sometimes connected to the Internet due to careless configuration or to satisfy special needs (e.g., they need to be remotely managed). The security challenge consists of accurately identifying IoT devices, promptly detecting vulnerabilities and exploitations of IoT devices, and stopping or mitigating the impact of cyberattacks. An Intrusion Detection System (IDS) is in charge of monitoring the behavior of protected systems, looking for malicious activities or policy violations in order to produce reports for a management station or even perform proactive countermeasures against the detected threat. Anomaly behavior detection is a technique that aims at creating models of the normal behavior of the network and detects any significant deviation from normal operations. With the ability to detect new and novel attacks, anomaly detection is a promising IDS technique that is actively pursued by researchers. Since each IoT application has its own specification, it is hard to develop a single IDS which works properly for all IoT layers. A better approach is to design customized intrusion detection engines for different layers and then aggregate the analysis results from these engines. On the other hand, it would be cumbersome and would take a lot of effort and knowledge to manually extract the specification of each system, so it is appropriate to formulate our methodology based on machine learning techniques which can be applied to produce efficient detection engines for different IoT applications. In this dissertation we aim at formalizing a general methodology to perform anomaly behavior analysis for the IoT. We first introduce our IoT architecture for smart infrastructures, which consists of four layers: end nodes (devices), communications, services, and application. Then we show our multilayer IoT security framework and IoT architecture that consists of five planes: function specification or model plane, attack surface plane, impact plane, mitigation plane, and priority plane. We then present a methodology to develop a general threat model in order to recognize the vulnerabilities in each layer and the possible countermeasures that can be deployed to mitigate their exploitation.
In this scope, we show how to develop and deploy an anomaly behavior analysis based intrusion detection system (ABA-IDS) to detect anomalies that might be triggered by attacks against devices, protocols, information, or services in our IoT framework. We have evaluated our approach by launching several cyberattacks (e.g. sensor impersonation, replay, and flooding attacks) against our testbeds developed at the University of Arizona Center for Cloud and Autonomic Computing. The results show that our approach can be used to deploy effective security mechanisms to protect the normal operations of smart infrastructures integrated into the IoT. Moreover, our approach can detect known and unknown attacks against the IoT with a high detection rate and low false alarms.
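As a cartoon of the anomaly behavior analysis idea (learn a model of normal operation, then flag significant deviations), consider a bigram transition model over observed device message types. The training data, message names, and decision rule below are fabricated for illustration and are not the dissertation's actual detection engine:

```python
# Sketch: bigram model of normal protocol behavior; sequences containing
# transitions never seen during training are flagged as anomalous.
from collections import defaultdict

def train(normal_sequences):
    seen = defaultdict(set)
    for seq in normal_sequences:
        for a, b in zip(seq, seq[1:]):
            seen[a].add(b)
    return seen

def is_anomalous(model, seq):
    return any(b not in model.get(a, set()) for a, b in zip(seq, seq[1:]))

normal = [["hello", "auth", "read", "read", "bye"],
          ["hello", "auth", "write", "bye"]]
model = train(normal)
print(is_anomalous(model, ["hello", "read"]))          # True: skipped auth
print(is_anomalous(model, ["hello", "auth", "read"]))  # False
```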
13

Moura, Eduardo Henrique de Carvalho. "Modelo de Segurança Autonômica para Computação em Nuvem com Uso de Honeypot." Universidade Federal do Maranhão, 2013. http://tedebc.ufma.br:8080/jspui/handle/tede/516.

Abstract:
Cloud computing is a new computing paradigm that aims to provide on-demand services. Characteristics such as scalability and the availability of seemingly infinite resources have attracted many users and companies, but they have also attracted malicious users who want to exploit this resource sharing. The migration of networks and servers to the cloud also means that intrusion techniques are now aimed at cloud-based servers. Attacks can even originate inside the environment, when a virtual machine running on one of its VLANs is used to probe, capture data from, or attack servers instantiated in the cloud. Combined with the difficulty of administering such a complex infrastructure, this makes security a critical point. The purpose of this work is to use an autonomic framework together with a deception methodology to propose an autonomic security model for computing clouds that helps protect servers and instances against attacks originating from other instances.
14

Malvaut-Martiarena, Willy. "Vers une architecture pair-à-pair pour l'informatique dans le nuage." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM044/document.

Abstract:
With the emergence of cloud computing, a new trend is to externalize computing tasks in order to decrease costs and increase flexibility. Current cloud infrastructures rely on large-scale centralized data centers for computing resource provisioning. In this thesis, we study the possibility of providing a peer-to-peer based cloud infrastructure, which is totally decentralized and can be deployed on any federation of computing nodes. We focus on the node allocation problem and present Salute, a node allocation service that organizes nodes in unstructured overlay networks and relies on mechanisms to predict node availability in order to ensure, with high probability, that allocation requests will be satisfied over time despite churn. Salute's implementation relies on the collaboration of several peer-to-peer protocols belonging to the category of epidemic protocols. To validate our claims, we evaluate Salute using traces sampled from several reference peer-to-peer systems.
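A toy rendering of availability-aware allocation (the prediction rule, data, and redundancy policy are fabricated, not Salute's actual mechanisms): estimate each node's probability of staying online for the request duration from its uptime history, then over-allocate so the expected number of survivors meets the request.

```python
# Sketch: allocate nodes whose predicted availability covers a request,
# with redundancy to compensate for churn; all numbers are invented.

def predicted_survival(uptime_history, duration):
    # Naive estimator: fraction of past sessions at least `duration` long.
    long_enough = sum(1 for u in uptime_history if u >= duration)
    return long_enough / len(uptime_history) if uptime_history else 0.0

def allocate(nodes, wanted, duration):
    scored = sorted(nodes.items(),
                    key=lambda kv: predicted_survival(kv[1], duration),
                    reverse=True)
    chosen, expected = [], 0.0
    for name, history in scored:
        if expected >= wanted:
            break
        chosen.append(name)
        expected += predicted_survival(history, duration)
    return chosen

nodes = {"n1": [5, 9, 12], "n2": [1, 2, 1], "n3": [10, 11, 8], "n4": [3, 7, 9]}
print(allocate(nodes, wanted=2, duration=8))  # redundant set, most stable first
```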
15

Avouac, Pierre-Alain. "Plateforme autonomique dirigée par les modèles pour la construction d'interfaces multimodales dans les environnements pervasifs." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM084/document.

Abstract:
Building human-machine interfaces on top of complex applications raises important problems and requires substantial, sustained research effort. Increasingly diverse and complex technologies must be addressed in order to build modular, evolvable interfaces that take advantage of recent progress in programming and middleware. It is also necessary to allow non-computer-scientists, such as ergonomics specialists, to define and set up appropriate interfaces. Service-oriented computing (SOC) is a recent advance in software engineering. This approach promotes modular and dynamic solutions that allow interfaces to evolve, possibly at runtime. The service-oriented approach is very promising, and numerous research projects are under way in the fields of enterprise integration, mobile devices, and pervasive computing. It nevertheless remains complex and demands a high level of expertise: it is hardly accessible to untrained computer scientists and completely out of reach for engineers from other fields, ergonomists for example. The approach proposed in this thesis is to build a workbench that manipulates abstract HCI services. These abstract services describe their functionality and dependencies at a high level of abstraction, so they can be composed more easily by engineers who are not SOC experts. The role of the workbench is then to identify concrete services implementing the abstract ones, to compose them by generating the necessary glue code, and to deploy them on an execution platform. A second point concerns the specialization of the workbench: it is important to offer a service composition language close to the domain concepts manipulated by experts, notably ergonomists. Such a language is based on domain concepts and integrates the composition constraints specific to the domain. The current approach uses meta-models, expressing domain knowledge, to specialize the workbench.
In pervasive environments, with the proliferation of communicating devices (e.g., remote controllers, gamepads, mobile phones, augmented objects), users express their needs or desires to an enormous variety of services through a multitude of available interaction modalities, expecting the environment and its equipment to react accordingly. Addressing the challenge of dynamic management of multimodal interaction at runtime in pervasive environments, our contribution is dedicated to the software engineering of dynamic multimodal interfaces, providing: a specification language for multimodal interaction, an autonomic manager, and an integration platform. The autonomic manager uses models to generate and maintain a multimodal interaction adapted to the current conditions of the environment. The multimodal interaction data flow from input devices to a service is then effectively realized by the integration platform. Our conceptual solution is implemented by our DynaMo platform, which is fully operational and stable. DynaMo is based on iPOJO, a dynamic service-oriented component framework built on top of OSGi, and on Cilia, a component-based mediation framework.
16

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Abstract:
Convolutional artificial neural networks can be applied for image-based object classification to inform automated actions, such as handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial variety dropout regularization for high-resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data with statistics different from those of the training dataset, results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. I suggest, for future development, updated architectures, automated hyperparameter search, and leveraging the bountiful unlabeled data potentially available from production lines.
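The consolidation of an ensemble into a unified model via distillation trains the student against the ensemble's temperature-softened output distribution. A self-contained sketch of a softened cross-entropy objective of this kind (the temperature, logits, and exact loss variant are illustrative, not the thesis's configuration):

```python
# Sketch of a distillation objective: the student mimics the teacher
# ensemble's temperature-softened class probabilities.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    p = softmax(teacher_logits, temperature)  # soft targets from the ensemble
    q = softmax(student_logits, temperature)
    # Cross-entropy of the student's soft predictions against soft targets.
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.2, 1.1, 0.3]  # e.g. averaged ensemble logits for one image
student = [3.0, 1.5, 0.2]
print(distillation_loss(teacher, student))
```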
17

Brousseau, Scott A. "A model for touchpoint simulation of grid services." Thesis, 2010. http://hdl.handle.net/1828/2456.

Abstract:
Advances in technologies have made an unprecedented range and variety of computing resources available. A number of fields have sought to take maximum advantage of these resources, with grid computing being one of the more successful. However, the increasing complexity of these heterogeneous, distributed systems has compromised users’ ability to manage them effectively. Autonomic computing, which seeks to hide the complexity of systems by making them self-managing, offers a potential solution. In order to produce autonomic managers for grid systems, realistic input is required for development and testing. This thesis proposes a model that can be used to provide simulated input, utilizing existing system logs. The simulator adheres to the standards and specifications recognized in both autonomic and grid services, and provides the detailed, accurate information that is required by developers.
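The core idea, deriving simulated touchpoint input from existing system logs, can be pictured as a replayer that turns timestamped log records into monitoring events with their original relative timing. The record format, fields, and speedup knob below are invented for illustration and do not reflect the thesis's actual model or grid-service interfaces:

```python
# Sketch: replay parsed log records as simulated touchpoint sensor events,
# preserving inter-event delays (scaled for faster-than-real-time tests).
import time

LOG = [  # (seconds_offset, metric, value) -- a stand-in for parsed grid logs
    (0.0, "cpu_load", 0.41),
    (1.5, "cpu_load", 0.87),
    (2.0, "job_queue", 12),
]

def replay(records, speedup=10.0, emit=print):
    previous = records[0][0]
    for offset, metric, value in records:
        time.sleep((offset - previous) / speedup)
        previous = offset
        emit({"metric": metric, "value": value})  # fed to the manager under test

replay(LOG)
```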
18

Zhu, Qin. "Adaptive root cause analysis and diagnosis." Thesis, 2010. http://hdl.handle.net/1828/3153.

Abstract:
In this dissertation we describe the event processing autonomic computing reference architecture (EPACRA), an innovative reference architecture that solves many important problems related to adaptive root cause analysis and diagnosis (RCAD). Along with the research progress for defining EPACRA, we also identified a set of autonomic computing architecture patterns and proposed a new information seeking model called net-casting model. EPACRA is important because today, root cause analysis and diagnosis (RCAD) in enterprise systems is still largely performed manually by experienced system administrators. The goal of this research is to characterize, simplify, improve, and automate RCAD processes to ease selected tasks for system administrators and end-users. Research on RCAD processes involves three domains: (1) autonomic computing architecture patterns, (2) information seeking models, and (3) complex event processing (CEP) technologies. These domains as well as existing technologies and standards contribute to the synthesized knowledge of this dissertation. To minimize human involvement in RCAD, we investigated architecture patterns to be utilized in RCAD processes. We identified a set of autonomic computing architecture patterns and analyzed the interactions among the feedback loops in these individual architecture patterns and how the autonomic elements interact with each other. By illustrating the architecture patterns, we recognized ambiguity in the aggregator-escalator-peer pattern. This problem has been solved by adding a new architecture pattern, namely the chain-of-monitors pattern, to the lattice of autonomic computing architecture patterns. To facilitate the autonomic information seeking process, we developed the net-casting information seeking model. After identifying the commonalities among three traditional information seeking models, we defined the net-casting model as a five stage process and then tailored it to describe our automated RCAD process. One of the main contributions of this dissertation is an innovative autonomic computing reference architecture called event processing autonomic computing reference architecture (EPACRA). This reference architecture is based on (1) complex event processing (CEP) concepts, (2) autonomic computing architecture patterns, (3) real use-case workflows, and (4) our net-casting information seeking model. This reference architecture can be leveraged to relieve the system administrator’s burden of routinely performing RCAD tasks in a heterogeneous environment. EPACRA can be viewed as a variant of the IBM ACRA model—extended with CEP to deal with large event clouds in real-time environments. In the middle layer of the reference model, EPACRA introduces an innovative design referred to as use-case-unit—a use case is the scenario of an RCAD process initiated by a symptom—event processing network (EPN) for RCAD. Each use-case-unit EPN reflects our automation approach, including identification of events from the use cases and classifying those events into event types. Apart from defining individual event processing agents (EPAs) to process the different types of events, dynamically constructing use-case unit EPNs is also an innovative approach which may lead to fully autonomic RCAD systems in the future. Finally, this dissertation presents a case study for EPACRA. As a case study we use a prototype of a Web application intrusion detection tool to demonstrate the autonomic mechanisms of our RCAD process. 
Specifically, this tool recognizes two types of malicious attacks on web application systems and then takes actions to prevent intrusion attempts. This case study validates both our chain-of-monitors autonomic architecture pattern and our net-casting model. It also validates our use-case-unit EPN approach as an innovative approach to realizing RCAD workflows. Hopefully, this research platform will be beneficial for other RCAD projects and researchers with similar interests and goals.
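A bare-bones illustration of the chain-of-monitors idea described above, in which each event processing agent diagnoses the events it can explain and escalates the rest upward (all classes, event types, and rules here are invented, not EPACRA code):

```python
# Sketch: chain-of-monitors pattern, where each event processing agent
# filters or diagnoses events and escalates the remainder to its parent.

class Monitor:
    def __init__(self, name, can_diagnose, parent=None):
        self.name, self.can_diagnose, self.parent = name, can_diagnose, parent

    def handle(self, event):
        if self.can_diagnose(event):
            return f"{self.name}: root cause identified for {event['type']}"
        if self.parent:                      # escalate along the chain
            return self.parent.handle(event)
        return f"unresolved: {event['type']} (operator attention needed)"

top = Monitor("system-monitor", lambda e: e["type"] == "disk_full")
web = Monitor("web-monitor", lambda e: e["type"] == "http_500", parent=top)

print(web.handle({"type": "http_500"}))   # resolved locally
print(web.handle({"type": "disk_full"}))  # escalated to the parent monitor
print(web.handle({"type": "unknown"}))    # escalated, still unresolved
```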