
Dissertations / Theses on the topic 'Fog systems'

Consult the top 50 dissertations / theses for your research on the topic 'Fog systems.'


1

Bozios, Athanasios. "Fog Computing : Architecture and Security aspects." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-80178.

Full text
Abstract:
As the number of Internet of Things (IoT) devices in daily use increases, the inadequacy of cloud computing to provide necessary IoT-related features, such as low latency, geographic distribution and location awareness, is becoming more evident. Fog computing is introduced as a new computing paradigm to solve this problem by extending the cloud's storage and computing resources to the network edge. However, the introduction of this new paradigm is also confronted by various security threats and challenges, since the security practices implemented in cloud computing cannot be applied directly to the new architecture. To this end, various papers have been published on fog computing security in an effort to establish best security practices towards the standardization of fog computing. In this thesis, we perform a systematic literature review of current research in order to provide a classification of the various security threats and challenges in fog computing. Furthermore, we present the solutions that have been proposed so far and the security challenges they address. Finally, we attempt to distinguish common aspects among the various proposals, evaluate current research on the subject and suggest directions for future research.
APA, Harvard, Vancouver, ISO, and other styles
2

Struhar, Vaclav. "Improving Soft Real-time Performance of Fog Computing." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55679.

Full text
Abstract:
Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, which reduces the timing unpredictability of cloud computing that stems from (i) computation in shared multi-tenant remote data centers, and (ii) long-distance data transfers between the source of the data and the data centers. Computation in fog computing thus provides fast response times and enables latency-sensitive applications. However, industrial systems require time-bounded response times, also denoted as real-time (RT) behavior: the correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing depend on two main aspects: computation and communication. In this thesis, we explore both aspects, targeting soft RT applications in fog computing, in which the usefulness of the produced computational results degrades as real-time requirements are violated. With regard to computation, we provide a systematic literature survey on lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for dynamic bandwidth distribution in virtualized networks.
3

Butterfield, Ellis H. "Fog Computing with Go: A Comparative Study." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1348.

Full text
Abstract:
The Internet of Things is a recent computing paradigm, defined by networks of highly connected things – sensors, actuators and smart objects – communicating across networks of homes, buildings, vehicles, and even people. The Internet of Things brings with it a host of new problems, from managing security on constrained devices to processing never-before-seen amounts of data. While cloud computing might be able to keep up with current data processing and computational demands, it is unclear whether it can be extended to the requirements brought forth by the Internet of Things. Fog computing provides an architectural solution to some of these problems by providing a layer of intermediary nodes within what is called an edge network, separating the local object networks and the Cloud. These edge nodes provide interoperability, real-time interaction, routing, and, if necessary, computational delegation to the Cloud. This paper evaluates Go, a distributed-systems language developed by Google, against the requirements set forth by Fog computing. Methodologies from previous literature are simulated and benchmarked against in order to assess the viability of Go in the edge nodes of a Fog computing architecture.
4

Ismahil, Dlovan. "Investigating Fog- and Cloud-based Control Loops for Future Smart Factories." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-36705.

Full text
Abstract:
In recent years, Internet connectivity has multiplied vastly, and more and more computation and information storage have moved to the cloud. Like other types of networks, industrial systems also see an increase in the number of communicating devices. Introducing wireless communication into industrial systems in place of today's wired networks will allow interconnection of all kinds of stationary and mobile machinery, robots and sensors, and thereby bring multiple benefits. Moreover, recent developments in cloud and fog computing open many new opportunities in the control, analysis and maintenance of industrial systems. Wireless systems are easy to install and maintain, and relocating data analysis and control services from local controllers to the cloud can make resource-intensive computations possible and improve collaboration between different parts of a plant, or between several plants, since cloud servers can store information accessible to all of them. However, even though wireless communication and cloud services bring many benefits, new challenges arise in fulfilling industrial requirements: packet delivery rates might be affected by disturbances in wireless channels, data storage on distant servers might introduce timing and security issues, and resource allocation and reservation for controllers supervising multiple processes must be considered to provide real-time services. The main goal of this thesis is to consider design possibilities for a factory including local and cloud controllers, i.e., to look at how the work of the factory should be organized and where control decisions should be made, and to analyze the pros and cons of making the decisions at local (fog) and cloud servers.
To narrow down the problem, an example factory with two independent wireless networks (each consisting of one sensor, one actuator and one local control node) and a cloud controller controlling both of them is considered. The selected structure allows all the questions of interest to be considered, while its prototype can be built using the equipment available for this thesis work.
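The fog-versus-cloud placement question above can be sketched with a back-of-the-envelope delay model. All figures (link RTTs, compute times, loss rates) and the function `loop_delay` are invented for illustration; they are not taken from the thesis.

```python
# Toy model: expected sensor -> controller -> actuator delay for a
# control loop, with one retransmission on packet loss.
def loop_delay(link_rtt_ms, compute_ms, loss_rate, retry_ms):
    """Expected end-to-end loop delay in milliseconds."""
    expected_rtt = link_rtt_ms + loss_rate * retry_ms  # loss adds a retry
    return expected_rtt + compute_ms

# Local (fog) controller: short link, modest compute resources.
fog = loop_delay(link_rtt_ms=2.0, compute_ms=5.0, loss_rate=0.01, retry_ms=2.0)
# Cloud controller: fast compute, but a long and lossy path dominates.
cloud = loop_delay(link_rtt_ms=40.0, compute_ms=1.0, loss_rate=0.01, retry_ms=40.0)
print(fog, cloud)  # the fog loop meets a 10 ms deadline; the cloud loop does not
```

Even this crude model shows the pattern the thesis investigates: the network path, not the computation, usually decides whether a cloud controller can satisfy a tight control deadline.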
5

Rahafrouz, Amir. "Distributed Orchestration Framework for Fog Computing." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-77118.

Full text
Abstract:
The rise of IoT-based systems is making an impact on our daily lives and environment. Fog computing is a paradigm for processing IoT data at the first hop of the access network instead of in distant clouds, and it promises many new applications; however, a mature framework for fog computing is still lacking. In this study, we propose an approach for monitoring fog nodes in a distributed system using the FogFlow framework. We extend the functionality of FogFlow by adding monitoring of Docker containers using cAdvisor, and we use Prometheus to collect and aggregate the distributed data. The monitoring data of the entire distributed system of fog nodes is accessed via the Prometheus API. Furthermore, the monitoring data is used to rank fog nodes in order to choose where to place serverless functions (fog functions). The ranking mechanism uses the Analytic Hierarchy Process (AHP) to place a fog function according to the resource utilization and saturation of the fog nodes' hardware. Finally, an experimental test-bed is set up with an image-processing application that detects faces, and the effect of our ranking approach on Quality of Service is measured and compared to standard FogFlow.
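The AHP-based node ranking described in this abstract can be sketched in a few lines. The criteria, pairwise-comparison values, and node names below are illustrative assumptions, not figures from the thesis; the priority vector is approximated by row geometric means, a common AHP shortcut.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate the AHP priority vector via row geometric means."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Pairwise importance of three criteria (CPU, memory, network load);
# e.g. CPU is judged 3x as important as memory. Values are invented.
weights = ahp_weights([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

# Candidate nodes with [cpu, mem, net] utilization; lower utilization is
# better, so a node scores by its weighted free capacity.
nodes = {
    "fog-node-a": [0.8, 0.6, 0.3],
    "fog-node-b": [0.4, 0.5, 0.7],
    "fog-node-c": [0.2, 0.3, 0.4],
}
scores = {name: sum(w * (1.0 - u) for w, u in zip(weights, util))
          for name, util in nodes.items()}
best = max(scores, key=scores.get)  # "fog-node-c": the least-loaded node
```

A fog function would then be placed on `best`; a full AHP treatment would also check the consistency ratio of the comparison matrix, which this sketch omits.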
6

Bhal, Siddharth. "Fog computing for robotics system with adaptive task allocation." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78723.

Full text
Abstract:
The evolution of cloud computing has finally started to affect robotics. Indeed, several real-time cloud applications have made their way into robotics of late. Inherent benefits of cloud robotics include virtually infinite computational power and collaboration among a multitude of connected devices. However, its drawbacks include higher latency and higher overall energy consumption; moreover, local devices in proximity incur higher latency when communicating among themselves via the cloud, and the cloud is a single point of failure in the network. Fog computing is an extension of the cloud computing paradigm that provides data, compute, storage and application services to end-users on a so-called edge layer; its distinguishing characteristics are support for mobility and dense geographical distribution. We propose to study the implications of applying fog computing concepts in robotics by developing a middleware solution, a Robotic Fog Computing Cluster, enabling adaptive distributed computation in heterogeneous multi-robot systems interacting with the Internet of Things (IoT). The developed middleware has a modular plug-in architecture based on micro-services and facilitates communication between IoT devices and the multi-robot systems. In addition, it supports different load-balancing and task-allocation algorithms. In particular, we establish that we can enhance the performance of the distributed system, decreasing overall system latency, by combining established multi-criteria decision-making algorithms such as TOPSIS and TODIM with naive Q-learning and with neural-network-based Q-learning.
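TOPSIS, one of the multi-criteria decision-making algorithms named above, ranks alternatives by closeness to an ideal solution. The sketch below is a generic TOPSIS implementation with an invented decision matrix (free CPU share and link latency per candidate executor); the thesis's actual criteria and weights are not reproduced.

```python
from math import sqrt

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: closeness to the ideal solution."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply weights.
    norms = [sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    # Ideal (best) and anti-ideal (worst) value per criterion.
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = sqrt(sum((x - p) ** 2 for x, p in zip(row, ideal)))
        d_neg = sqrt(sum((x - p) ** 2 for x, p in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # 1.0 = ideal alternative
    return scores

# Candidate executors judged on [free CPU share, link latency in ms];
# latency is a cost criterion (lower is better). Numbers are invented.
candidates = [[0.8, 40.0], [0.6, 5.0], [0.3, 50.0]]
scores = topsis(candidates, weights=[0.5, 0.5], benefit=[True, False])
chosen = scores.index(max(scores))  # index 1: decent CPU, low latency
```

In an adaptive allocator like the one the thesis describes, the weights themselves could be tuned online, e.g. by a Q-learning agent observing resulting latencies.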
Master of Science
7

Bakhshi, Valojerdi Zeinab. "Persistent Fault-Tolerant Storage at the Fog Layer." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55680.

Full text
Abstract:
Clouds are powerful computing centers that provide computing and storage facilities that can be accessed remotely. The flexibility and cost-efficiency offered by clouds have made them very popular for business and web applications, and their use is now being extended to safety-critical applications such as factories. However, cloud services do not provide time predictability, which poses a problem for such time-sensitive applications. Moreover, delays in the data communication between clouds and the devices they control are unpredictable. Therefore, to increase predictability, an intermediate layer between devices and the cloud is introduced. This layer, the fog layer, aims to provide computational resources closer to the edge of the network. However, the fog computing paradigm relies on resource-constrained nodes, creating new challenges in resource management, scalability, and reliability. Solutions such as lightweight virtualization technologies can be leveraged to resolve the tension between performance and reliability in fog computing. In this context, container-based virtualization is a key technology providing lightweight virtualization for cloud computing that can be applied in fog computing as well; such technologies provide fault-tolerance mechanisms that improve the reliability and availability of application execution. Through the study of a robotic use case, we realized that persistent data storage for stateful applications at the fog layer is particularly important. In addition, we identified the need to enhance current container orchestration solutions to fit fog applications executing in container-based architectures. In this thesis, we identify open challenges in achieving dependable fog platforms. Among these, we focus particularly on scalable, lightweight virtualization, auto-recovery, and re-integration solutions after failures in fog applications and nodes.
We implement a testbed to deploy our use-case on a container-based fog platform and investigate the fulfillment of key dependability requirements. We enhance the architecture and identify the lack of persistent storage for stateful applications as an important impediment for the execution of control applications. We propose a solution for persistent fault-tolerant storage at the fog layer, which dissociates storage from applications to reduce application load and separates the concern of distributed storage. Our solution includes a replicated data structure supported by a consensus protocol that ensures distributed data consistency and fault tolerance in case of node failures. Finally, we use the UPPAAL verification tool to model and verify the fault tolerance and consistency of our solution.
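The thesis's replicated, consensus-backed storage is not specified in the abstract; the sketch below illustrates only the general majority-quorum idea behind such designs: with 2f+1 replicas, a write acknowledged by a majority survives up to f crashed nodes. Class and method names are invented for the example.

```python
class Replica:
    """One fog storage node; `alive` simulates crash failures."""
    def __init__(self):
        self.store, self.alive = {}, True

    def write(self, key, value):
        if not self.alive:
            raise ConnectionError("replica down")
        self.store[key] = value

class ReplicatedStore:
    """2f+1 replicas tolerate f crashed nodes per durable write."""
    def __init__(self, n=3):
        self.replicas = [Replica() for _ in range(n)]

    def put(self, key, value):
        acks = 0
        for r in self.replicas:
            try:
                r.write(key, value)
                acks += 1
            except ConnectionError:
                pass  # crashed replica: skip, count no ack
        if acks <= len(self.replicas) // 2:
            raise RuntimeError("no majority: write not durable")
        return acks

    def get(self, key):
        # Majority-written data survives f failures, so some live
        # replica always holds it.
        for r in self.replicas:
            if r.alive and key in r.store:
                return r.store[key]
        raise KeyError(key)

store = ReplicatedStore(n=3)
store.put("robot/pose", (1.0, 2.0))
store.replicas[0].alive = False        # one node fails
assert store.get("robot/pose") == (1.0, 2.0)
store.put("robot/pose", (3.0, 4.0))    # still succeeds: 2 of 3 ack
```

A real consensus protocol additionally orders concurrent writes and reconciles diverging replicas on recovery, which is exactly what the thesis verifies with UPPAAL.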
8

Nan, Yucen. "Cost-effective Offloading Strategy for Delay-sensitive Applications in Cloud of Things Systems." Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/16789.

Full text
Abstract:
The steep rise of Internet of Things (IoT) applications, along with the limitations of cloud computing in addressing all IoT requirements, has led to a new distributed computing paradigm called fog computing, which aims to process data at the edge of the network. With the help of fog computing, some of the uncertainties in the communication among different tiers, such as transmission latency, monetary cost and application loss, can be effectively reduced compared to cloud computing alone. However, as the processing capacity of fog nodes is more limited than that of cloud platforms, running all applications indiscriminately on these nodes can cause QoS requirements to be violated. Therefore, we adopt a three-tier Cloud of Things system (comprising a things tier, a fog tier and a cloud tier), a simple yet powerful model of the IoT, and offload applications between the fog tier and the cloud tier to guarantee processing requirements. An important decision is where to execute each application in order to produce a cost-effective solution that fully meets application requirements. This thesis makes the original contribution of a dynamic optimization of application offloading: a study of online decision making based on queueing theory. Additionally, we clarify the relationship between processing time and QoS requirements (such as monetary cost) to provide a cost-effective application processing service. In particular, we are interested in the tradeoff among average response time, average cost and average number of application losses. In this thesis, we present an online algorithm, called Unit-slot Optimization, based on the technique of Lyapunov optimization. Unit-slot Optimization is a quantified, near-optimal online solution that balances the two-way tradeoff between average response time and average cost.
Moreover, the proposed optimization also handles the three-way tradeoff among average response time, average cost and average number of application losses. We then evaluate the performance of the Unit-slot Optimization algorithm in a number of experiments. The experimental results not only match the theoretical analyses but also demonstrate that our proposed algorithm can provide cost-effective processing while guaranteeing average response time in a three-tier Cloud of Things system.
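Lyapunov optimization of the kind named above typically makes a per-slot "drift-plus-penalty" choice: minimize V times the cost plus the queue backlog times the expected queue growth, where V tunes the cost/delay tradeoff. The sketch below is a generic illustration of that recipe with invented arrival rates, service rates and prices; it is not the thesis's Unit-slot Optimization algorithm.

```python
import random

V = 10.0                            # larger V favors low cost over low delay
FOG = {"rate": 2, "cost": 1.0}      # fog tier: cheap but slow
CLOUD = {"rate": 5, "cost": 4.0}    # cloud tier: fast but expensive

def choose(queue_len):
    """Drift-plus-penalty: minimize V*cost - Q(t)*service_rate."""
    return min((FOG, CLOUD), key=lambda s: V * s["cost"] - queue_len * s["rate"])

random.seed(0)
q, total_cost = 0, 0.0
for _ in range(1000):
    server = choose(q)
    total_cost += server["cost"]
    arrivals = random.randint(0, 4)            # jobs arriving this slot
    q = max(q + arrivals - server["rate"], 0)  # queue backlog update
print(q, total_cost / 1000)  # backlog stays bounded; cost reflects the mix
```

With these numbers the policy uses the cheap fog tier while the backlog is short and pays for the cloud only when the queue (i.e., response time) grows, which is the two-way tradeoff the abstract describes.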
9

Ahlcrona, Felix. "Sakernas Internet : En studie om vehicular fog computing påverkan i trafiken." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15713.

Full text
Abstract:
Future vehicles will be very different from today's vehicles, and much of the change will come through the IoT. The world will be highly connected: sensors will collect data most of us did not even know existed. More data also means more problems. Enormous amounts of data will be generated and distributed by future IoT devices, and this data needs to be analyzed and stored efficiently using Big Data principles. Fog computing is a development of cloud technology suggested as a solution to many of the problems IoT suffers from. Are traditional storage and analysis tools sufficient for the huge volume of data that will be produced, or are new techniques needed to support this development? This study tries to answer the question: "What problems and opportunities does the development of fog computing in passenger cars bring for consumers?" The question is answered through a systematic literature review, whose objective is to identify and interpret previous literature and research; the material was analyzed using open coding to sort and categorize the data. The results show that IoT, Big Data and fog computing are deeply intertwined. Future vehicles will contain many IoT devices producing huge amounts of data, and fog computing will be an effective, low-latency way of managing the data from these devices. The opportunities are new applications and systems that help improve traffic safety, the environment, and information about the car's state and condition. Several risks and problems need to be resolved before a full-scale version can be used, such as data authentication, user privacy, and deciding on the most efficient mobility model.
10

Awad, Hiba. "Quality of service assurance before deployment of fog systems with model-based engineering and DevOps." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2025. http://www.theses.fr/2025IMTA0468.

Full text
Abstract:
Fog computing decentralizes the cloud by bringing computation, storage, and network services closer to the edge. This reduces latency and bandwidth usage while improving real-time processing. However, the complexity and heterogeneity of fog systems, often comprising diverse entities, make lifecycle management challenging and costly. Runtime error handling frequently requires revisiting earlier phases, which is both time-consuming and expensive. Ensuring reliability through pre-deployment verification is therefore essential. Fog systems are deployed in domains such as healthcare, automotive, and smart cities, further complicating verification and deployment processes. To address these challenges, we propose a generic and customizable approach based on a two-step verification process. This approach focuses on the design-time and pre-deployment phases, automating key verification and deployment activities. Our solution features a customizable fog modeling language, design-time verification of non-functional properties (e.g., security, energy), preparation of pre-deployment configurations, and integration with industrial DevOps tools and Quality of Service (QoS) solutions. By combining model-based engineering and DevOps practices, our approach ensures QoS, reduces deployment costs, and enhances automation to tackle the complexity of fog systems. We validated this approach using three literature-based use cases: a smart campus, a smart parking facility, and a smart hospital. The results demonstrate its effectiveness in QoS verification, deployment automation, and reduction of complexity and costs, highlighting its relevance to state-of-the-art engineering and DevOps practices.
11

Seydoux, Nicolas. "Towards interoperable IOT systems with a constraint-aware semantic web of things." Thesis, Toulouse, INSA, 2018. http://www.theses.fr/2018ISAT0035.

Full text
Abstract:
This thesis is situated in the Semantic Web of Things (SWoT) domain, at the interface between the Internet of Things (IoT) and the Semantic Web (SW). Integrating SW approaches into the IoT addresses the significant heterogeneity of resources, technologies and applications in the IoT, which creates interoperability issues impeding the deployment of IoT systems. A first scientific challenge arises from the resource consumption of SW technologies, which is ill suited to the limited computation and communication capabilities of IoT devices. Moreover, IoT networks are deployed at a large scale, while SW technologies have scalability issues. This thesis addresses this double challenge with two contributions. The first is the identification of quality criteria for IoT ontologies, leading to the proposal of IoT-O, a modular IoT ontology. IoT-O is deployed to enrich data from a smart building and to drive semIoTics, our autonomic computing application. The second contribution is EDR (Emergent Distributed Reasoning), a generic approach to dynamically distributing rule-based reasoning. Rules are propagated peer-to-peer, guided by descriptions exchanged among nodes. EDR is evaluated in two use cases, using a server and constrained nodes to simulate the deployment.
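The peer-to-peer rule propagation sketched in the abstract can be illustrated with a tiny flood-and-filter example: a rule travels hop by hop from a gateway and stays on nodes whose self-descriptions say they produce the data the rule consumes. The topology, node names, and description format below are invented for the example and only capture the spirit of EDR, not its actual protocol.

```python
def propagate(rule_inputs, start, neighbours, produces):
    """Flood a rule from `start`; keep it on nodes whose exchanged
    description says they produce one of the rule's input properties."""
    placed, seen, frontier = set(), {start}, [start]
    while frontier:
        node = frontier.pop()
        if produces[node] & rule_inputs:
            placed.add(node)  # node can evaluate (part of) the rule locally
        for n in neighbours[node]:
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return placed

# A gateway with two sensing nodes; only room1 produces temperature data.
neighbours = {"gateway": ["room1", "room2"], "room1": [], "room2": []}
produces = {"gateway": set(),
            "room1": {"temperature"},
            "room2": {"luminosity"}}
placed = propagate({"temperature"}, "gateway", neighbours, produces)
print(placed)  # {'room1'}
```

The benefit of such placement is that raw sensor readings are filtered where they originate, so only rule conclusions cross the constrained network.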
12

Terneborg, Martin. "Enabling container failover by extending current container migration techniques." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85380.

Full text
Abstract:
Historically, virtual machines have been the backbone of the cloud industry, allowing cloud providers to offer virtualized multi-tenant solutions. A key aspect of the cloud is its flexibility and abstraction of the underlying hardware. Virtual machines enhance this aspect by supporting live migration and failover. Live migration is the process of moving a running virtual machine from one host to another, and failover ensures that a failed virtual machine is automatically restarted (possibly on another host). Today, as containers continue to increase in popularity and make up a larger portion of the cloud, often replacing virtual machines, it becomes increasingly important for these mechanisms to be available to containers as well. However, little support for container live migration and failover exists, and what does exist remains largely experimental; furthermore, no solution seems to exist that offers both live migration and failover for containers in a unified manner. This thesis presents a proof-of-concept implementation and description of a system that supports both live migration and failover for containers by extending current container migration techniques. It can offer this to any OCI-compliant container and could therefore be integrated into current container and container-orchestration frameworks. In addition, measurements of the proof-of-concept implementation are provided and used to compare it to a current container migration technique. Furthermore, the thesis presents an overview of the history and implementation of containers and of current migration techniques, and introduces metrics for measuring different migration techniques.
The thesis concludes that current container migration techniques can be extended to support both live migration and failover, and that in doing so one might expect a downtime equal to, and a total migration time lower than, that of pre-copy migration. Supporting both live migration and failover, however, comes at the cost of an increased amount of data transferred between the hosts.
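Pre-copy migration, the baseline mentioned above, copies all memory while the workload keeps running, then repeatedly re-copies the pages dirtied during each round until the remaining dirty set is small enough for a short stop-and-copy pause. The toy model below illustrates that tradeoff (total data transferred versus data copied during downtime); all page counts and rates are invented.

```python
def pre_copy(total_pages, dirty_rate, bandwidth, threshold, max_rounds=30):
    """Return (total pages transferred, pages copied during the pause)."""
    transferred, to_copy = 0, total_pages
    for _ in range(max_rounds):
        transferred += to_copy
        round_time = to_copy / bandwidth          # seconds for this round
        to_copy = min(int(dirty_rate * round_time), total_pages)
        if to_copy <= threshold:                  # dirty set small enough
            break
    # Final stop-and-copy round: container paused, remaining pages sent.
    transferred += to_copy
    return transferred, to_copy

total, downtime_pages = pre_copy(
    total_pages=100_000,
    dirty_rate=5_000,     # pages dirtied per second while running
    bandwidth=50_000,     # pages transferred per second
    threshold=100)
print(total, downtime_pages)  # 111100 pages sent in total, 100 during the pause
```

The model makes the thesis's conclusion concrete: downtime shrinks with each pre-copy round, but only at the price of re-sending dirtied pages, i.e., more total data on the wire.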
13

Wiss, Thomas. "Evaluation of Internet of Things Communication Protocols Adapted for Secure Transmission in Fog Computing Environments." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-35298.

Full text
Abstract:
A current challenge in the Internet of Things is the search for conceptual structures to connect the presumably billions of devices of innumerable forms and capabilities. An emerging architectural concept, fog computing, moves the seemingly unlimited computational power of the distant cloud to the edge of the network, closer to the potentially computationally limited things, effectively diminishing the experienced latency. To let computationally constrained devices partake in the network, they have to be relieved of the burden of constant availability and extensive computational execution. Establishing a publish/subscribe communication pattern on top of the popular Internet of Things application-layer protocol, the Constrained Application Protocol (CoAP), is one approach to overcoming this issue. In this project, a Java-based library establishing a publish/subscribe communication pattern for the Constrained Application Protocol was developed. Furthermore, prototypes of several publish/subscribe application-layer protocols, executed over common as well as secured versions of standard and non-standard transport-layer protocols, were built and assessed in order to exercise, evaluate, and compare the developed library. The results indicate that the standard protocol stacks are solid candidates, yet one non-standard protocol stack is the prime candidate, maintaining a low response time while not adding a significant amount of communication overhead.
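The thesis's library is Java and CoAP-specific; the sketch below instead illustrates, in plain Python with invented class names, the publish/subscribe pattern itself and why it relieves constrained devices: a broker retains the last value per topic, so a sleepy sensor can publish and go offline while subscribers are still served.

```python
from collections import defaultdict

class Broker:
    """Minimal topic broker: decouples constrained publishers from
    subscribers and retains the last payload per topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> callbacks
        self.retained = {}                    # topic -> last payload

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        if topic in self.retained:            # deliver the retained value
            callback(self.retained[topic])

    def publish(self, topic, payload):
        self.retained[topic] = payload
        for cb in self.subscribers[topic]:    # push to current subscribers
            cb(payload)

broker = Broker()
broker.publish("sensors/temp", 21.5)          # publisher may now sleep
readings = []
broker.subscribe("sensors/temp", readings.append)  # gets 21.5 immediately
broker.publish("sensors/temp", 22.0)
print(readings)  # [21.5, 22.0]
```

In the CoAP setting, the broker role is typically played by an always-on node, with subscriptions realized via CoAP's Observe mechanism rather than in-process callbacks.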
APA, Harvard, Vancouver, ISO, and other styles
14

Ozeer, Umar Ibn Zaid. "Autonomic resilience of distributed IoT applications in the Fog." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM054.

Full text
Abstract:
Les dernières tendances de l'informatique distribuée préconisent le Fog computing, qui étend les capacités du Cloud en bordure du réseau, à proximité des objets terminaux et des utilisateurs finaux localisés dans le monde physique. Le Fog est un catalyseur clé des applications de l'Internet des Objets (IoT), car il résout certains des besoins que le Cloud ne parvient pas à satisfaire, tels que les faibles latences, la confidentialité des données sensibles, la qualité de service ainsi que les contraintes géographiques. Pour cette raison, le Fog devient de plus en plus populaire et trouve des cas d'utilisation dans de nombreux domaines tels que la domotique, l'agriculture, la e-santé, les voitures autonomes, etc. Le Fog, cependant, est instable car il est constitué de milliards d'objets hétérogènes au sein d'un écosystème dynamique. Les objets de l'IoT tombent en panne régulièrement parce qu'ils sont produits en masse à des coûts très bas. De plus, l'écosystème Fog-IoT est cyber-physique et les objets IoT sont donc affectés par les conditions météorologiques du monde physique. Ceci accroît la probabilité et la fréquence des défaillances. Dans un tel écosystème, les défaillances produisent des comportements incohérents qui peuvent provoquer des situations dangereuses et coûteuses dans le monde physique. La gestion de la résilience dans un tel écosystème est donc primordiale. Cette thèse propose une approche autonome de gestion de la résilience des applications IoT déployées en environnement Fog. L'approche proposée comprend quatre tâches fonctionnelles : (i) sauvegarde d'état, (ii) surveillance, (iii) notification des défaillances, et (iv) reprise sur panne. Chaque tâche est un regroupement de rôles similaires et est mise en œuvre en tenant compte des spécificités de l'écosystème (e.g., hétérogénéité, ressources limitées).
La sauvegarde d'état vise à sauvegarder les informations sur l'état de l'application. Ces informations sont constituées des données d'exécution et de la mémoire volatile, ainsi que des messages échangés et fonctions exécutées par l'application. La surveillance vise à observer et à communiquer des informations sur le cycle de vie de l'application. Elle est particulièrement utile pour la détection des défaillances. Lors d'une défaillance, des notifications sont propagées à la partie de l'application affectée par cette défaillance. La propagation des notifications vise à limiter la portée de l'impact de la défaillance et à fournir un service partiel ou dégradé. Pour établir une reprise sur panne, l'application est reconfigurée et les données enregistrées lors de la tâche de sauvegarde d'état sont utilisées afin de restaurer un état cohérent de l'application par rapport au monde physique. Cette réconciliation entre l'état de l'application et celui du monde physique est appelée cohérence cyber-physique. La procédure de reprise sur panne assurant la cohérence cyber-physique évite les impacts dangereux et coûteux de la défaillance sur le monde physique. L'approche proposée a été validée à l'aide de techniques de vérification par modèle afin de vérifier que certaines propriétés importantes sont satisfaites. Cette approche de résilience a été mise en œuvre sous la forme d'une boîte à outils, F3ARIoT, destinée aux développeurs. F3ARIoT a été évalué sur une application domotique. Les résultats montrent la faisabilité de son utilisation sur des déploiements réels d'applications Fog-IoT, ainsi que des performances satisfaisantes du point de vue des utilisateurs.
Recent computing trends have been advocating for more distributed paradigms, namely Fog computing, which extends the capacities of the Cloud at the edge of the network, that is, close to end devices and end users in the physical world. The Fog is a key enabler of Internet of Things (IoT) applications as it resolves some of the needs that the Cloud fails to provide such as low network latencies, privacy, QoS, and geographical requirements. For this reason, the Fog has become increasingly popular and finds application in many fields such as smart homes and cities, agriculture, healthcare, transportation, etc. The Fog, however, is unstable because it is constituted of billions of heterogeneous devices in a dynamic ecosystem. IoT devices may regularly fail because of bulk production and cheap design. Moreover, the Fog-IoT ecosystem is cyber-physical and thus devices are subjected to external physical world conditions which increase the occurrence of failures. When failures occur in such an ecosystem, the resulting inconsistencies in the application affect the physical world by inducing hazardous and costly situations. In this Thesis, we propose an end-to-end autonomic failure management approach for IoT applications deployed in the Fog. The approach manages IoT applications and is composed of four functional steps: (i) state saving, (ii) monitoring, (iii) failure notification, and (iv) recovery. Each step is a collection of similar roles and is implemented, taking into account the specificities of the ecosystem (e.g., heterogeneity, resource limitations). State saving aims at saving data concerning the state of the managed application. These include runtime parameters and the data in the volatile memory, as well as messages exchanged and functions executed by the application. Monitoring aims at observing and reporting information on the lifecycle of the application.
When a failure is detected, failure notifications are propagated to the part of the application which is affected by that failure. The propagation of failure notifications aims at limiting the impact of the failure and providing a partial service. In order to recover from a failure, the application is reconfigured and the data saved during the state saving step are used to restore a cyber-physical consistent state of the application. Cyber-physical consistency aims at maintaining a consistent behaviour of the application with respect to the physical world, as well as avoiding dangerous and costly circumstances. The approach was validated using model checking techniques to verify important correctness properties. It was then implemented as a framework called F3ARIoT. This framework was evaluated on a smart home application. The results showed the feasibility of deploying F3ARIoT on real Fog-IoT applications as well as its good performances in regards to end user experience.
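The state saving and recovery steps described in this abstract can be pictured with a small checkpoint/restore sketch. This is not F3ARIoT's actual implementation (which is not shown in the abstract); it is a toy Python illustration, with all names invented, of why a recovered replica resumes from the last checkpoint and why reconciliation with the physical world is then needed:

```python
import copy

class ManagedDevice:
    """Toy state saving and recovery for a Fog-managed application:
    checkpoints volatile state so a restarted instance can resume
    from the last consistent snapshot."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.state = {}        # volatile runtime state
        self._checkpoint = {}  # last saved snapshot

    def update(self, key: str, value: str) -> None:
        self.state[key] = value

    def save_state(self) -> None:      # step (i): state saving
        self._checkpoint = copy.deepcopy(self.state)

    def fail(self) -> None:            # simulated crash: volatile state lost
        self.state = {}

    def recover(self) -> None:         # step (iv): recovery from checkpoint
        self.state = copy.deepcopy(self._checkpoint)

lamp = ManagedDevice("lamp")
lamp.update("power", "on")
lamp.save_state()
lamp.update("power", "off")  # change made after the last checkpoint
lamp.fail()
lamp.recover()
```

After recovery, the device holds the checkpointed state ("on"), not the last physical-world state ("off"); closing that gap is exactly the cyber-physical consistency problem the thesis addresses.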
APA, Harvard, Vancouver, ISO, and other styles
15

TURKI, Meriem. "TOWARD ELASTIC PARTITIONING OF MULTI-TENANT COMPUTING SYSTEMS AT THE EDGE." Doctoral thesis, Università degli studi di Ferrara, 2021. http://hdl.handle.net/11392/2488083.

Full text
Abstract:
Fog computing is gaining momentum to extend Cloud resources in close proximity to data sources, end users or both. Among the explored Fog deployment models, the Public Fog offers compute and memory resources for open use to IoT service providers, and is emerging as a fundamental component for an Edge-Fog-Cloud complete compute continuum along which IoT services can be flexibly instantiated. The multi-tenant nature of public Fog nodes represents a major design and management challenge at the intersection of different yet related research disciplines, ranging from dynamic mapping of manycore architectures to resource management for Cloud and Fog resources, and from computing acceleration to software virtualization. The fundamental challenge is to efficiently share the limited pool of Fog resources among multiple consolidated IoT services sharing the same hardware platform. This thesis revolves around the key intuition that multi-tenancy could be reconciled with limited resource capacity through an elastic provisioning of Fog resources. As a result, the thesis proposes a holistic support for elastic Fog computing, following a bottom-up methodology. The support is fundamentally rooted in the capability of the on-chip interconnection network (network-on-chip, NoC) to spatially and temporally isolate communication flows originated by different IoT services. The isolation in space enables to partition the Fog node architecture into spatially-isolated execution environments that provide enhanced security with respect to software-only isolation, and the strictest notion of service composability. At this level, the thesis proposes pLBDR, a lightweight routing mechanism that prevents functional and non-functional interference of intra-partition communication flows with one another. 
Above all, it combines low complexity with fast dynamic reconfigurability of the partitioning pattern, thus delivering a NoC-supported elastic partitioning in space that is out of reach for current NoC technology. In space-multiplexed parallel computing architectures, some communications unavoidably break the spatial locality, especially those associated with memory controller and system configuration traffic. For these flows, this thesis provides efficient time-multiplexing while meeting the distinctive requirements of an elastic Fog environment: low-latency communication scheduling in time, and runtime reconfigurability of the number of time slots. This new set of requirements makes the proposed time-multiplexed NoC a unique design point in the open literature. In compliance with its bottom-up approach, the thesis finally tackles the resource management challenge to master the elasticity properties of the underlying compute and memory partitions. In line with mainstream approaches to resource management for manycore systems, the thesis assumes a hierarchical framework where virtual resource reassignments are dynamically changed into the actual reallocation of physical resources. At this level, the shape, size and location of space partitions have to be adjusted in a non-overlapping way to fulfil the variations. The thesis proposes an Integer Linear Programming shape-based model that strives to deliver prioritized latency guarantees to IoT services while perturbing the system state as little as possible. The modest running times enable the deployment of the proposed Partition Manager for online use, in combination with a "prior provisioning prompt allocation" scheme for resource utilization in Fog computing.
Overall, this thesis is a highly interdisciplinary piece of work that provides an integrated hardware/software support for elastic Fog computing, and paves the way for a dynamically-orchestrated Edge-Fog-Cloud continuum serving as a seamless hosting environment for the next generation of smart IoT services.
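The time-multiplexing requirement mentioned above, a slot table whose number of slots can be changed at runtime, can be sketched very loosely in software. The following Python fragment is only an illustration of the idea (the thesis's NoC hardware scheduler is not described in the abstract, and the flow names here are invented):

```python
from typing import Dict, List

def tdm_schedule(flows: List[str], num_slots: int) -> Dict[int, List[str]]:
    """Round-robin assignment of communication flows to TDM slots.
    Runtime reconfigurability is modeled by simply rebuilding the
    table with a different slot count."""
    table: Dict[int, List[str]] = {s: [] for s in range(num_slots)}
    for i, flow in enumerate(flows):
        table[i % num_slots].append(flow)
    return table

flows = ["config", "memA", "memB", "memC"]
slots2 = tdm_schedule(flows, num_slots=2)  # initial table
slots3 = tdm_schedule(flows, num_slots=3)  # "elastic" reconfiguration
```

In the actual hardware setting, rebuilding the table must of course happen without disrupting in-flight traffic, which is precisely what makes the thesis's design point non-trivial.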
APA, Harvard, Vancouver, ISO, and other styles
16

Badokhon, Alaa. "An Adaptable, Fog-Computing Machine-to-Machine Internet of Things Communication Framework." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1492450137643915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Azimi, Seyyed, Osvaldo Simeone, and Ravi Tandon. "Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information-Theoretic Analysis." MDPI AG, 2017. http://hdl.handle.net/10150/625476.

Full text
Abstract:
The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.
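The offline/online distinction in this abstract can be illustrated, very loosely and entirely outside the paper's information-theoretic DTB model, with a toy hit-rate comparison: a static cache chosen in advance versus a reactive LRU cache, under a request stream whose popularity drifts over time. All file names and numbers below are illustrative assumptions:

```python
from collections import OrderedDict

def offline_hits(requests, static_cache):
    """Offline caching: contents fixed in advance for a static popular set."""
    return sum(r in static_cache for r in requests)

def reactive_hits(requests, capacity):
    """Online reactive caching: fetch on miss, evict least recently used."""
    cache = OrderedDict()
    hits = 0
    for r in requests:
        if r in cache:
            hits += 1
            cache.move_to_end(r)          # mark as most recently used
        else:
            cache[r] = True               # cache after a miss
            if len(cache) > capacity:
                cache.popitem(last=False) # evict least recently used
    return hits

# Popularity drifts: "a" is popular early, "c" later.
requests = ["a", "a", "b", "a", "c", "c", "c", "b", "c", "c"]
static = offline_hits(requests, {"a", "b"})  # cache chosen for early popularity
online = reactive_hits(requests, capacity=2)
```

Under this drifting workload the reactive cache ends up with more hits than the static one, which mirrors, at a cartoon level, why the paper studies online policies separately from the offline setting.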
APA, Harvard, Vancouver, ISO, and other styles
18

Lillethun, David. "ssIoTa: A system software framework for the internet of things." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53531.

Full text
Abstract:
Sensors are widely deployed in our environment, and their number is increasing rapidly. In the near future, billions of devices will all be connected to each other, creating an Internet of Things. Furthermore, computational intelligence is needed to make applications involving these devices truly exciting. In IoT, however, the vast amounts of data will not be statically prepared for batch processing, but rather continually produced and streamed live to data consumers and intelligent algorithms. We refer to applications that perform live analysis on live data streams, bringing intelligence to IoT, as the Analysis of Things. However, the Analysis of Things also comes with a new set of challenges. The data sources are not collected in a single, centralized location, but rather distributed widely across the environment. AoT applications need to be able to access (consume, produce, and share with each other) this data in a way that is natural considering its live streaming nature. The data transport mechanism must also allow easy access to sensors, actuators, and analysis results. Furthermore, analysis applications require computational resources on which to run. We claim that system support for AoT can reduce the complexity of developing and executing such applications. 
To address this, we make the following contributions:
- A framework for systems support of Live Streaming Analysis in the Internet of Things, which we refer to as the Analysis of Things (AoT), including a set of requirements for system design
- A system implementation that validates the framework by supporting Analysis of Things applications at a local scale, and a design for a federated system that supports AoT on a wide geographical scale
- An empirical system evaluation that validates the system design and implementation, including simulation experiments across a wide-area distributed system
We present five broad requirements for the Analysis of Things and discuss one set of specific system support features that can satisfy these requirements. We have implemented a system, called ssIoTa, that implements these features and supports AoT applications running on local resources. The programming model for the system allows applications to be specified simply as operator graphs, by connecting operator inputs to operator outputs and sensor streams. Operators are code components that run arbitrary continuous analysis algorithms on streaming data. By conforming to a provided interface, operators may be developed that can be composed into operator graphs and executed by the system. The system consists of an Execution Environment, in which a Resource Manager manages the available computational resources and the applications running on them, a Stream Registry, in which available data streams can be registered so that they may be discovered and used by applications, and an Operator Store, which serves as a repository for operator code so that components can be shared and reused. Experimental results for the system implementation validate its performance. Many applications are also widely distributed across a geographic area.
To support such applications, ssIoTa must be able to run them on infrastructure resources that are also distributed widely. We have designed a system that does so by federating each of the three system components: Operator Store, Stream Registry, and Resource Manager. The Operator Store is distributed using a distributed hash table (DHT); however, since temporal locality can be expected and data churn is low, caching may be employed to further improve performance. Since sensors exist at particular locations in physical space, queries on the Stream Registry will be based on location. We also introduce the concept of geographical locality. Therefore, range queries in two dimensions must be supported by the federated Stream Registry, while taking advantage of geographical locality for improved average-case performance. To accomplish these goals, we present a design sketch for SkipCAN, a modification of the SkipNet and Content Addressable Network DHTs. Finally, the fundamental issue in the federated Resource Manager is how to distribute the operators of multiple applications across the geographically distributed sites where computational resources can execute them. To address this, we introduce DistAl, a fully distributed algorithm that assigns operators to sites. DistAl also respects the system resource constraints and application preferences for performance and quality of results (QoR), using application-specific utility functions to allow applications to express their preferences. DistAl is validated by simulation results.
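The operator-graph programming model described in this abstract (operators connected input-to-output, processing a live stream) can be sketched compactly. This is not ssIoTa's actual API, which is not given in the abstract; it is an illustrative Python reduction with invented names:

```python
from typing import Callable, List

class Operator:
    """A stream operator: applies a function to each incoming item
    and forwards the result to every connected downstream operator."""
    def __init__(self, fn: Callable):
        self.fn = fn
        self.downstream: List["Operator"] = []
        self.results: List = []

    def connect(self, other: "Operator") -> "Operator":
        self.downstream.append(other)
        return other

    def push(self, item) -> None:
        out = self.fn(item)
        if out is None:
            return  # operator filtered the item out of the stream
        self.results.append(out)
        for op in self.downstream:
            op.push(out)

# A tiny graph: sensor stream -> Fahrenheit conversion -> high-temp filter.
to_f = Operator(lambda c: c * 9 / 5 + 32)
alert = Operator(lambda f: f if f > 80 else None)
to_f.connect(alert)

for reading in [20, 30, 25]:  # Celsius readings from a sensor stream
    to_f.push(reading)
```

A system like the one described would additionally schedule such operators onto distributed resources and register the streams, which the sketch leaves out.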
APA, Harvard, Vancouver, ISO, and other styles
19

Wheeler, Nathan. "On the Effectiveness of an IOT - FOG - CLOUD Architecture for a real-world application." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/855.

Full text
Abstract:
Fog Computing is an emerging computing paradigm that shifts certain processing closer to the Edge of a network, generally within one network hop, where latency is minimized, and results can be obtained the quickest. However, not a lot of research has been done on the effectiveness of Fog in real-world applications. The aim of this research is to show the effectiveness of the Fog Computing paradigm as the middle layer in a 3-tier architecture between the Internet of Things (IoT) and the Cloud. Two applications were developed: one utilizing Fog in a 3-tier architecture and another application using IoT and Cloud with no Fog. A quantitative and qualitative analysis followed the application development, with studies focused on application response time and walkthroughs for AWS Greengrass and Amazon Machine Learning. Furthermore, the application itself demonstrates an architecture which is of both business and research value, providing a real-life coffee shop use-case and utilizing a newly available Fog offering from Amazon known as Greengrass. At the Cloud level, the newly available Amazon Machine Learning API was used to perform predictive analytics on the data provided by the IoT devices. Results suggest that Fog-enabled applications have a much lower range of response times as well as lower response times overall. These results suggest Fog-enabled solutions are suitable for applications which require network stability and reliably lower latency.
APA, Harvard, Vancouver, ISO, and other styles
20

Lusito, E. "A Network-based Approach to Breast Cancer Systems Medicine." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/265572.

Full text
Abstract:
Breast cancer is the most commonly diagnosed cancer and the second leading cause of cancer death in women. Although recent improvements in the prevention, early detection, and treatment of breast cancer have led to a significant decrease in the mortality rate, the identification of an optimal therapeutic strategy for each patient remains a difficult task because of the heterogeneous nature of the disease. Clinical heterogeneity of breast cancer is in part explained by the vast genetic and molecular heterogeneity of this disease, which is now emerging from large-scale screening studies using “-omics” technologies (e.g. microarray gene expression profiling, next-generation sequencing). This genetic and molecular heterogeneity likely contributes significantly to therapy response and clinical outcome. The recent advances in our understanding of the molecular nature of breast cancer due, in particular, to the explosion of high-throughput technologies, is driving a shift away from the “one-dose-fits-all” paradigm in healthcare, to the new “Personalized Cancer Care” paradigm. The aim of “Personalized Cancer Care” is to select the optimal course of clinical intervention for individual patients, maximizing the likelihood of effective treatment and reducing the probability of adverse drug reactions, according to the molecular features of the patient. In light to this medical scenario, the aim of this project is to identify novel molecular mechanisms that are altered in breast cancer through the development of a computational pipeline, in order to propose putative biomarkers and druggable target genes for the personalized management of patients. Through the application of a Systems Biology approach to reverse engineer Gene Regulatory Networks (GRNs) from gene expression data, we built GRNs around “hub” genes transcriptionally correlating with clinical-pathological features associated with breast tumor expression profiles. 
The relevance of the GRNs as putative cancer-related mechanisms was reinforced by the occurrence of mutational events related to breast cancer in the “hub” genes, as well as in the neighbor genes. Moreover, for some networks, we observed mutually exclusive mutational patterns in the neighbor genes, thus supporting their predicted role as oncogenic mechanisms. Strikingly, a substantial fraction of GRNs were overexpressed in Triple Negative Breast Cancer patients who acquired resistance to therapy, suggesting the involvement of these networks in mechanisms of chemoresistance. In conclusion, our approach allowed us to identify cancer molecular mechanisms frequently altered in breast cancer and in chemorefractory tumors, which may suggest novel cancer biomarkers and potential drug targets for the development of more effective therapeutic strategies in metastatic breast cancer patients.
APA, Harvard, Vancouver, ISO, and other styles
21

Pradilla, Ceron Juan Vicente. "SOSLite: Soporte para Sistemas Ciber-Físicos y Computación en la Nube." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/76808.

Full text
Abstract:
Cyber-Physical Systems (CPS) have become one of the greatest research topics today; because they pose a new complex discipline, which addresses big existing and future systems as the Internet, the Internet of Things, sensors networks and smart grids. As a recent discipline, there are many possibilities to improve the state of the art, interoperability being one of the most relevant. Thus, this thesis has been created within the framework of interoperability for CPS, by using the SOS (Sensor Observation Service) standard, which belongs to the SWE (Sensor Web Enablement) framework of the OGC (Open Geospatial Consortium). It has been developed to give rise to a new line of research within the Distributed Real-Time Systems and Applications group (SATRD for its acronym in Spanish) from the Communications Department of the Polytechnic University of Valencia (UPV for its acronym in Valencian). The approach, with which the interoperability in the CPS has been addressed, is of synthetic type (from parts to whole), starting from a verifiable and workable solution for interoperability in sensor networks, one of the most significant CPSs because it is integrated in many other CPSs, next adapting and testing the solution in more complex CPS, such as the Internet of Things. In this way, an interoperability solution in sensor networks is proposed based on the SOS, but adapted to some requirements that makes of this mechanism a lighter version of the standard, which facilitates the deployment of future implementations due to the possibility of using limited devices for this purpose. This theoretical solution is brought to a first implementation, called SOSLite, which is tested to determine its characteristic behavior and to verify the fulfillment of its purpose. Analogously, and starting from the same theoretical solution, a second implementation is projected called SOSFul, which proposes an update to the SOS standard so that it is lighter, more efficient and easier to use. 
The SOSFul, has a more ambitious projection by addressing the Internet of Things, a more complex CPS than sensors networks. As in the case of the SOSLite, tests are performed and validation is made through a use case. So, both the SOSLite and the SOSFul are projected as interoperability solutions in the CPS. Both implementations are based on the theoretical proposal of a light SOS and are available for free and under open source licensing so that it can be used by the research community to continue its development and increase its use.
Los Sistemas Ciber-Físicos (CPS) se han convertido en uno de los temas de investigación con mayor proyección en la actualidad; debido a que plantean una nueva disciplina compleja, que aborda sistemas existentes y futuros de gran auge como: la Internet, la Internet de las Cosas, las redes de sensores y las redes eléctricas inteligentes. Como disciplina en gestación, existen muchas posibilidades para aportar al estado del arte, siendo la interoperabilidad uno de los más relevantes. Así, esta tesis se ha creado en el marco de la interoperabilidad para los CPS, mediante la utilización del estándar SOS (Sensor Observation Service) perteneciente al marco de trabajo SWE (Sensor Web Enablement) del OGC (Open Geospatial Consortium). Se ha desarrollado para dar surgimiento a una nueva línea de investigación dentro del grupo SATRD (Sistemas y Aplicaciones de Tiempo Real Distribuidos) del Departamento de Comunicaciones de la UPV (Universitat Politècnica de València). La aproximación con la cual se ha abordado la interoperabilidad en los CPS es de tipo sintética (pasar de las partes al todo), iniciando desde una solución, verificable y realizable, para la interoperabilidad en las redes de sensores, uno de los CPS más significativos debido a que se integra en muchos otros CPS, y pasando a adaptar y comprobar dicha solución en CPS de mayor complejidad, como la Internet de las Cosas. De esta forma, se propone una solución de interoperabilidad en las redes de sensores fundamentada en el SOS, pero adaptada a unos requerimientos que hacen de este mecanismo una versión más ligera del estándar, con lo que se facilita el despliegue de futuras implementaciones debido a la posibilidad de emplear dispositivos limitados para tal fin. Dicha solución teórica, se lleva a una primera implementación, denominada SOSLite, la cual se prueba para determinar su comportamiento característico y verificar el cumplimiento de su propósito. 
De forma análoga y partiendo de la misma solución teórica, se proyecta una segunda implementación, llamada SOSFul, la cual propone una actualización del estándar SOS de forma que sea más ligero, eficiente y fácil de emplear. El SOSFul, tiene una proyección más ambiciosa al abordar la Internet de las Cosas, un CPS más complejo que las redes de sensores. Como en el caso del SOSLite, se realizan pruebas y se valida mediante un caso de uso. Así, tanto el SOSLite como el SOSFul se proyectan como soluciones de interoperabilidad en los CPS. Ambas implementaciones parten de la propuesta teórica de SOS ligero y se encuentran disponibles de forma gratuita y bajo código libre, para ser empleados por la comunidad investigativa para continuar su desarrollo y aumentar su uso.
Els sistemes ciberfísics (CPS, Cyber-Physical Systems) s'han convertit en un dels temes de recerca amb major projecció en l'actualitat, a causa del fet que plantegen una nova disciplina complexa que aborda sistemes existents i futurs de gran auge, com ara: la Internet, la Internet de les Coses, les xarxes de sensors i les xarxes elèctriques intel·ligents. Com a disciplina en gestació, hi ha moltes possibilitats per a aportar a l'estat de la qüestió, sent la interoperabilitat una de les més rellevants. Així, aquesta tesi s'ha creat en el marc de la interoperabilitat per als CPS, mitjançant la utilització de l'estàndard SOS (Sensor Observation Service) pertanyent al marc de treball SWE (Sensor Web Enablement) de l'OGC (Open Geospatial Consortium). S'ha desenvolupat per a iniciar una nova línia de recerca dins del Grup de SATRD (Sistemes i Aplicacions de Temps Real Distribuïts) del Departament de Comunicacions de la UPV (Universitat Politècnica de València). L'aproximació amb la qual s'ha abordat la interoperabilitat en els CPS és de tipus sintètic (passar de les parts al tot), iniciant des d'una solució, verificable i realitzable, per a la interoperabilitat en les xarxes de sensors, un dels CPS més significatius pel fet que s'integra en molts altres CPS, i passant a adaptar i comprovar aquesta solució en CPS de major complexitat, com la Internet de les Coses. D'aquesta forma, es proposa una solució d'interoperabilitat en les xarxes de sensors fonamentada en el SOS, però adaptada a uns requeriments que fan d'aquest mecanisme una versió més lleugera de l'estàndard, amb la qual cosa es facilita el desplegament de futures implementacions per la possibilitat d'emprar dispositius limitats a aquest fi. Aquesta solució teòrica es porta a una primera implementació, denominada SOSLite, que es prova per a determinar el seu comportament característic i verificar el compliment del seu propòsit. 
De forma anàloga i partint de la mateixa solució teòrica, es projecta una segona implementació, anomenada SOSFul, que proposa una actualització de l'estàndard SOS de manera que siga més lleuger, eficient i fàcil d'emprar. El SOSFul té una projecció més ambiciosa quan aborda la Internet de les Coses, un CPS més complex que les xarxes de sensors. Com en el cas del SOSLite, es realitzen proves i es valida mitjançant un cas d'ús. Així, tant el SOSLite com el SOSFul, es projecten com a solucions d'interoperabilitat en els CPS. Ambdues implementacions parteixen de la proposta teòrica de SOS lleuger, i es troben disponibles de forma gratuïta i en codi lliure per a ser emprades per la comunitat investigadora a fi de continuar el seu desenvolupament i augmentar-ne l'ús.
Pradilla Ceron, JV. (2016). SOSLite: Soporte para Sistemas Ciber-Físicos y Computación en la Nube [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/76808
APA, Harvard, Vancouver, ISO, and other styles
22

Venkatesh, Saligrama Ramaswamy. "System-identification for complex-systems." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Nordberg, Jörgen. "Signal enhancement in wireless communication systems /." Ronneby : Department of Telecommunications and Signal Processing, Blekinge Institute of Technology, 2002. http://www.bth.se/fou.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Li, Siying. "Context-aware recommender system for system of information systems." Thesis, Compiègne, 2021. http://www.theses.fr/2021COMP2602.

Full text
Abstract:
Travailler en collaboration n’est plus une question mais une réalité, la question qui se pose aujourd’hui concerne la mise en œuvre de la collaboration de façon à ce qu’elle soit la plus réussie possible. Cependant, une collaboration réussie n’est pas facile et est conditionnée par différents facteurs qui peuvent l’influencer. Il est donc nécessaire de considérer ces facteurs au sein du contexte de collaboration pour favoriser l’efficacité de collaboration. Parmi ces facteurs, le collaborateur est un facteur principal, qui est étroitement associé à l’efficacité et à la réussite des collaborations. Le choix des collaborateurs et/ou la recommandation de ces derniers en tenant compte du contexte de la collaboration peut grandement influencer la réussite de cette dernière. En même temps, grâce au développement des technologies de l’information, de nombreux outils numériques de collaboration sont mis à disposition, tels que les outils de mail et de chat en temps réel. Ces outils numériques peuvent eux-mêmes être intégrés dans un environnement de travail collaboratif basé sur le web. De tels environnements permettent aux utilisateurs de collaborer au-delà de la limite des distances géographiques. Ces derniers laissent ainsi des traces d’activités qu’il devient possible d’exploiter. Cette exploitation sera d’autant plus précise que le contexte sera décrit et donc les traces enregistrées riches en description. Il devient donc intéressant de développer les environnements de travail collaboratif basés sur le web en tenant compte d’une modélisation du contexte de la collaboration. L’exploitation des traces enregistrées pourra alors prendre la forme de recommandation contextuelle de collaborateurs pouvant renforcer la collaboration.
Afin de générer des recommandations de collaborateurs dans des environnements de travail collaboratifs basés sur le web, cette thèse se concentre sur la génération des recommandations contextuelles de collaborateurs en définissant, modélisant et traitant le contexte de collaboration. Pour cela, nous proposons d’abord une définition du contexte de collaboration et choisissons de créer une ontologie du contexte de collaboration compte tenu des avantages de l’approche de modélisation en l’ontologie. Ensuite, une similarité sémantique basée sur l’ontologie est développée et appliquée dans trois algorithmes différents (i.e., PreF1, PoF1 et PoF2) afin de générer des recommandations contextuelles des collaborateurs. Par ailleurs, nous déployons l’ontologie de contexte de collaboration dans des environnements de travail collaboratif basés sur le web en considérant une architecture de système des systèmes d’informations du point de vue des environnements de travail collaboratif basés sur le web. À partir de cette architecture, un prototype correspondant d’environnement de travail collaboratif basé sur le web est alors construit. Enfin, un ensemble de données de collaborations scientifiques est utilisé pour tester et évaluer les performances des trois algorithmes de recommandation contextuelle des collaborateurs
Working collaboratively is no longer an issue but a reality; what matters today is how to implement collaboration so that it is as successful as possible. However, successful collaboration is not easy and is conditioned by different factors that can influence it. It is therefore necessary to take these impacting factors into account within the context of collaboration to promote the effectiveness of collaboration. Among these factors, the collaborator is a main one, closely associated with the effectiveness and success of collaborations. The selection and/or recommendation of collaborators, taking into account the context of collaboration, can greatly influence the success of collaboration. Meanwhile, thanks to the development of information technology, many collaborative tools are available, such as e-mail and real-time chat tools. These tools can be integrated into a web-based collaborative work environment. Such environments allow users to collaborate beyond the limit of geographical distances. During collaboration, users can utilize multiple integrated tools, perform various activities, and thus leave traces of activities that can be exploited. This exploitation will be more precise when the context of collaboration is described. It is therefore worth developing web-based collaborative work environments with a model of the collaboration context. Processing the recorded traces can then lead to context-aware collaborator recommendations that can reinforce the collaboration. To generate collaborator recommendations in web-based Collaborative Working Environments, this thesis focuses on producing context-aware collaborator recommendations by defining, modeling, and processing the collaboration context. To achieve this, we first propose a definition of the collaboration context and choose to build a collaboration context ontology given the advantages of the ontology-based modeling approach. 
Next, an ontology-based semantic similarity is developed and applied in three different algorithms (i.e., PreF1, PoF1, and PoF2) to generate context-aware collaborator recommendations. Furthermore, we deploy the collaboration context ontology into web-based Collaborative Working Environments by considering an architecture of System of Information Systems from the viewpoint of web-based Collaborative Working Environments. Based on this architecture, a corresponding prototype of web-based Collaborative Working Environment is then constructed. Finally, a dataset of scientific collaborations is employed to test and evaluate the performances of the three context-aware collaborator recommendation algorithms.
APA, Harvard, Vancouver, ISO, and other styles
25

Sylvan, Andreas. "Internet of Things in Surface Mount TechnologyElectronics Assembly." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209243.

Full text
Abstract:
Currently, manufacturers in the European Surface Mount Technology (SMT) industry see production changeover, machine downtime and process optimization as their biggest challenges. They also see a need for collecting data and sharing information between machines, people and systems involved in the manufacturing process. Internet of Things (IoT) technology provides an opportunity to make this happen. This research project gives answers to the question of what the potentials and challenges of IoT implementation are in European SMT manufacturing. First, key IoT concepts are introduced. Then, through interviews with experts working in SMT manufacturing, the current standpoint of the SMT industry is defined. The study pinpoints obstacles in SMT IoT implementation and proposes a solution. Firstly, local data collection and sharing needs to be achieved through the use of standardized IoT protocols and APIs. Secondly, because SMT manufacturers do not trust that sensitive data will remain secure in the Cloud, a separation of proprietary data and statistical data is needed in order to take a step further and collect Big Data in a Cloud service. This will allow for new services to be offered by equipment manufacturers.
APA, Harvard, Vancouver, ISO, and other styles
26

Martin, Philippe J. F. "Large scale C3 systems : experiment design and system improvement." Thesis, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 1986. http://hdl.handle.net/1721.1/15061.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986.
Includes bibliographical references (p. 105-106).
Research supported by the Joint Directors of Laboratories through the Office of Naval Research. N00014-85-K-0782
Philippe J. F. Martin.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
27

Iacobucci, Joseph Vincent. "Rapid Architecture Alternative Modeling (RAAM): a framework for capability-based analysis of system of systems architectures." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43697.

Full text
Abstract:
The current national security environment and fiscal tightening make it necessary for the Department of Defense to transition away from a threat based acquisition mindset towards a capability based approach to acquire portfolios of systems. This requires that groups of interdependent systems must regularly interact and work together as systems of systems to deliver desired capabilities. Technological advances, especially in the areas of electronics, computing, and communications also means that these systems of systems are tightly integrated and more complex to acquire, operate, and manage. In response to this, the Department of Defense has turned to system architecting principles along with capability based analysis. However, because of the diversity of the systems, technologies, and organizations involved in creating a system of systems, the design space of architecture alternatives is discrete and highly non-linear. The design space is also very large due to the hundreds of systems that can be used, the numerous variations in the way systems can be employed and operated, and also the thousands of tasks that are often required to fulfill a capability. This makes it very difficult to fully explore the design space. As a result, capability based analysis of system of systems architectures often only considers a small number of alternatives. This places a severe limitation on the development of capabilities that are necessary to address the needs of the war fighter. The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. 
RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models. A domain specific language is a small, usually declarative language that offers expressive power focused on a particular problem domain by establishing an effective means to communicate the semantics from the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. 
Using RAAM with the canonical example it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of systems architecture studies of different sizes. This was necessary since systems of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. The SEAD capability model outputs prompted the discovery of a new issue: the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes, the use of portfolio views and top `n' analysis. This proved the usefulness of the RAAM framework and methodology during Pre-Milestone A capability based analysis.
APA, Harvard, Vancouver, ISO, and other styles
28

Darweesh, Turki H. "Capacity and performance analysis of a multi-user, mixed traffic GSM network." Ottawa, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Tuo. "Analytical modeling of HSUPA-enabled UMTS networks for capacity planning." Connect to full text, 2008. http://ses.library.usyd.edu.au/handle/2123/4055.

Full text
Abstract:
Thesis (Ph. D.)--University of Sydney, 2009.
Title from title screen (viewed February 20, 2009). Includes graphs and tables. Includes list of publications co-authored with others. Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Information Technologies, Faculty of Engineering and Information Technologies. Degree awarded 2009; thesis submitted 2008. Includes bibliographical references. Also available in print form.
APA, Harvard, Vancouver, ISO, and other styles
30

Erturk, Alper. "An expert system for reward systems design." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA383532.

Full text
Abstract:
Thesis (M.S. in Information Technology Management and M.S. in Systems Management) Naval Postgraduate School, Sept. 2000.
Thesis advisor(s): Jansen, Erik; Nissen, Mark E. Includes bibliographical references (p. 93-94). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
31

Tosun, Suleyman. "Reliability-centric system design for embedded systems." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2005. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Haidar, Ghayath. "Reasoning system for real time reactive systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ47844.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Papalexopoulos, Alexis D. "Modeling techniques for power system grounding systems." Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/13529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Freeman, Isaac. "A modular system for constructing dynamical systems." Thesis, University of Canterbury. Mathematics, 1998. http://hdl.handle.net/10092/8888.

Full text
Abstract:
This thesis discusses a method based on the dual principle of Rössler, and developed by Deng, for systematically constructing robust dynamical systems from lower dimensional subsystems. Systems built using this method may be modified easily, and are suitable for mathematical modelling. Extensions are made to this scheme, which allow one to describe a wider range of dynamical behaviour. These extensions allow the creation of systems that reproduce qualitative features of the Lorenz Attractor (including bifurcation properties) and of Chua's circuit, but which are easily extensible.
APA, Harvard, Vancouver, ISO, and other styles
35

Kristofersson, Filip, and Sara Elfberg. "Maximizing Solar Energy Production for Västra Stenhagenskolan : Designing an Optimal PV System." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-384723.

Full text
Abstract:
Skolfastigheter is a municipality owned real estate company that manages most of the buildings used for lower education in Uppsala. The company is working in line with the environmental goals of the municipality by installing photovoltaic systems in schools and other educational buildings. Skolfastigheter is planning to install a photovoltaic system in a school in Stenhagen. The purpose of this study is to optimally design the proposed system. The system will be maximized, which in this study entails that the modules will be placed on every part of the roof where the insolation is sufficient. The system will also be grid connected. The design process includes finding an optimal placement of the modules, matching them with a suitable inverter bank and evaluating the potential of a battery storage. Economic aspects such as taxes, subsidies and electricity prices are taken into account when the system is simulated and analyzed. A sensitivity analysis is carried out to evaluate how the capacity of a battery bank affects the self-consumption, self-sufficiency and cost of the system. It is concluded that the optimal system has a total peak power of almost 600 kW and a net present value of 826 TSEK, meaning that it would be a profitable investment. A battery bank is excluded from the optimal design, since increasing the capacity of the bank steadily decreased the net present value and only marginally increased the self-consumption and self-sufficiency of the system.
APA, Harvard, Vancouver, ISO, and other styles
36

Sharma, Vivek. "Colloidal gold nanorods, iridescent beetles and breath figure templated assembly of ordered array of pores in polymer films." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/37168.

Full text
Abstract:
Water drops that nucleate and grow over an evaporating polymer solution exposed to a current of moist air remain noncoalescent and self-assemble into close packed arrays. The hexagonally close packed, nearly monodisperse drops, eventually evaporate away, leaving a polymer film, with ordered array of pores. Meanwhile, typical breath figures or dew that form when moist air contacts cold surfaces involve coalescence-assisted growth of highly polydisperse, disordered array of water drops. This dissertation provides the first quantitative attempt aimed at the elucidation of the mechanism of the breath figure templated assembly of the ordered arrays of pores in polymer films. The creation and evolution of a population of close packed drops occur in response to the heat and mass fluxes involved in water droplet condensation and solvent evaporation. The dynamics of drop nucleation, growth, noncoalescence and self-assembly are modeled by accounting for various transport and thermodynamic processes. The theoretical results for the rate and extent of evaporative cooling and growth are compared with experiments. Further, the dissertation describes a rich array of experimental observations about water droplet growth, noncoalescence, assembly and drying that have not been reported in the published literature so far. The theoretical framework developed in this study allows one to rationalize and predict the structure and size of pores formed in different polymer-solvent systems under given air flow conditions. While the ordered arrays of water drops present an example of dynamics, growth and assembly of spherical particles, the study on colloidal gold nanorods focuses on the behavior of rodlike particles. A comprehensive set of theoretical arguments based on the shape dependent hydrodynamics of rods were developed and used for centrifugation-assisted separation of rodlike particles from nanospheres that are typical byproducts of seed mediated growth of nanorods. 
Since the efficiency of shape separation is assessed using UV-Vis-NIR spectroscopy and transmission electron microscopy (TEM), the present dissertation elucidates the shape dependent parameters that affect the optical response and phase behavior of colloidal gold nanorods. The drying of a drop of colloidal gold nanorods on glass slides creates coffee ring like deposits near the contact line, which is preceded by the formation of a liquid crystalline phase. The assemblies of rods on TEM grids are shown to be the result of equilibrium and non-equilibrium processes, and the ordered phases are compared with two dimensional liquid crystals. The methodology of pattern characterization developed in this dissertation is then used to analyze the structure of the exocuticle of the iridescent beetle Chrysina gloriosa. The patterns were characterized using Voronoi analysis, and the effect of curvature on the fraction of tiles with hexagonal order was determined. Further, these patterns were found to be analogous to the focal conic domains formed spontaneously on the free surface of a cholesteric liquid crystal. In summary, the dissertation provides the crucial understanding required for the widespread use of breath figure templated assembly as a method for manufacturing porous films, which requires only a drop of (dilute) polymer solution and a whiff of breath! Further, the dissertation establishes the physical basis and methodology for separating and characterizing colloidal gold nanorods. The dissertation also suggests the basis for the formation and structure of the tiles that decorate the exoskeleton of the iridescent beetle Chrysina gloriosa.
APA, Harvard, Vancouver, ISO, and other styles
37

CRESTO, ALEINA SARA. "Design methodologies for space systems in a System of Systems (SoS) architecture." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2790162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Darrous, Jad. "Scalable and Efficient Data Management in Distributed Clouds : Service Provisioning and Data Processing." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN077.

Full text
Abstract:
This thesis focuses on scalable data management solutions to accelerate service provisioning and enable efficient execution of data-intensive applications in large-scale distributed clouds. Data-intensive applications are increasingly running on distributed infrastructures (multiple clusters). The main two reasons for such a trend are 1) moving computation to data sources can eliminate the latency of data transmission, and 2) storing data on one site may not be feasible given the continuous increase of data size. On the one hand, most applications run on virtual clusters to provide isolated services, and require virtual machine images (VMIs) or container images to provision such services. Hence, it is important to enable fast provisioning of virtualization services to reduce the waiting time of new running services or applications. Different from previous work, during the first part of this thesis, we worked on optimizing data retrieval and placement considering challenging issues including the continuous increase of the number and size of VMIs and container images, and the limited bandwidth and heterogeneity of the wide area network (WAN) connections. On the other hand, data-intensive applications rely on replication to provide dependable and fast services, but it has become expensive and even infeasible with the unprecedented growth of data size. The second part of this thesis provides one of the first studies on understanding and improving the performance of data-intensive applications when replacing replication with the storage-efficient erasure coding (EC) technique.
APA, Harvard, Vancouver, ISO, and other styles
39

Wernstedt, Fredrik. "Multi-agent systems for distributed control of district heating systems /." Karlskrona : Blekinge Institute of Technology, 2005. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/51e3dfb98bb6ba6bc1257107002f6d29?OpenDocument.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Scott, Wesley Dane. "A flexible control system for flexible manufacturing systems." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/158.

Full text
Abstract:
A flexible workcell controller has been developed using a three level control hierarchy (workcell, workstation, equipment). The cell controller is automatically generated from a model input by the user. The model consists of three sets of graphs. One set of graphs describes the process plans of the parts produced by the manufacturing system, one set describes movements into, out of and within workstations, and the third set describes movements of parts/transporters between workstations. The controller uses an event driven Petri net to maintain state information and to communicate with lower level controllers. The control logic is contained in an artificial neural network. The Petri net state information is used as the input to the neural net and messages that are Petri net events are output from the neural net. A genetic algorithm was used to search over alternative operation choices to find a "good" solution. The system was fully implemented and several test cases are described.
APA, Harvard, Vancouver, ISO, and other styles
41

Chadha, Sanjay. "A real-time system for multi-transputer systems." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29465.

Full text
Abstract:
Two important problems, namely a versatile, efficient communication system and the allocation of processors to processes, are analysed. An efficient communication system has been developed, in which a central controller, the bus-master, dynamically configures the point-to-point network formed by the links of the transputers. An identical kernel resides on each of the nodes. This kernel is responsible for all communications on behalf of the user processes. It makes ConnectLink and ReleaseLink requests to the central controller and, when the connections are made, it sends the messages through the connected link to the destination node. If a direct connection to the destination node cannot be made, the message is sent to an intermediate node; the message hops through intermediate nodes until it reaches the destination node. The communication system developed provides a low-latency communication facility, and the system can easily be expanded to include a large number of transputers without greatly increasing interprocess communication overhead. Another problem, namely the Module Assignment Problem (MAP), is an important issue at the time of development of distributed systems. MAPs are computationally intractable, i.e. the computational requirement grows with a power of the number of tasks to be assigned. The load of a distributed system depends on both module execution times and intermodule communication cost (IMC). If assignment is not done with due consideration, a module assignment can cause computer saturation. Therefore a good assignment should balance the processing load among the processors and generate minimum inter-processor communication (IPC) (communication between modules not residing on the same processor). Since meeting the deadline constraint is the most important performance measure for RTDPS, meeting the response time is the most important criterion for module assignment. 
Understanding this, we have devised a scheme which assigns processes to processors such that both response time constraints and periodicity constraints are met. If such an assignment is not possible, the assignment fails and an error is generated. Our assignment algorithm does not take into consideration factors such as load balancing. We believe that the most important factor for RTDPS is meeting the deadline constraints, and that is what our algorithm accomplishes.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
42

Pinnix, Justin Everett. "Operating System Kernel for All Real Time Systems." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010310-181302.

Full text
Abstract:

PINNIX, JUSTIN EVERETT. Operating System Kernel for All Real Time Systems. (Under the direction of Robert J. Fornaro and Vicki E. Jones.)

This document describes the requirements, design, and implementation of OSKAR, a hard real time operating system for Intel Pentium compatible personal computers. OSKAR provides rate monotonic scheduling, fixed and dynamic priority scheduling, semaphores, message passing, priority ceiling protocols, TCP/IP networking, and global time synchronization using the Global Positioning System (GPS). It is intended to provide researchers a test bed for real time projects that is inexpensive, simple to understand, and easy to extend.

The design of the system is described with special emphasis on design tradeoffs made to improve real time requirements compliance. The implementation is covered in detail at the source code level. Experiments to qualify functionality and obtain performance profiles are included and the results explained.

APA, Harvard, Vancouver, ISO, and other styles
43

Mahajan, Nikhil Ravindra. "A System Simulator For Shipboard Electrical Distribution Systems." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010911-103858.

Full text
Abstract:

The development of a distribution system simulator that can model new power electronic devices as well as novel distribution schemes, such as DC distribution, is described here. The simulator adopts the Electro-Magnetic Transient Programs (EMTP) platform to facilitate the simulation. Basic power electronic building blocks have been developed to extend the capabilities of the EMTP. These blocks include a rectifier module, a DC buck converter module, a 3-phase inverter module and a single-phase inverter module. The paper shows simulation of a new distribution scheme for naval ships to illustrate that such a simulator facilitates the study of new distribution system designs, especially the protection and control issues associated with new designs.

APA, Harvard, Vancouver, ISO, and other styles
44

Kim, Minyoung. "Surveillance System for Biological Agents in Water Systems." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1308%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Khoo, N. K. "An integrated system for reconfigurable cellular manufacturing systems." Thesis, University of Liverpool, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Thulnoon, A. A. T. "Efficient runtime security system for decentralised distributed systems." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/9043/.

Full text
Abstract:
Distributed systems can be defined as systems that are scattered over geographical distances and provide different activities through communication, processing, data transfer and so on, thereby increasing cooperation, efficiency, and reliability in dealing jointly with users and data resources. For this reason, distributed systems have been shown to be a promising infrastructure for most applications in the digital world. Despite their advantages, keeping these systems secure is a complex task because of the unconventional nature of distributed systems, which can produce many security problems such as phishing, denial of service or eavesdropping. Therefore, adopting security and privacy policies in distributed systems will increase the trustworthiness between users and these systems. However, adding or updating security is considered one of the most challenging concerns, owing to the various security vulnerabilities that exist in distributed systems. The most significant is inserting or modifying a new security concern, or even removing one, according to the security status that may arise at runtime. Moreover, these problems are exacerbated when the system adopts the multi-hop concept as a way of transmitting and processing information. This can pose many significant security challenges, especially in decentralised distributed systems where security must be furnished end-to-end. Unfortunately, existing solutions are insufficient to deal with these problems: CORBA, for instance, considers only a one-to-one relationship, while DSAW deals with end-to-end security but without taking into account the possibility of information sensitivity changing during runtime. This thesis proposes a mechanism for enforcing security policies and addressing distributed systems' security weaknesses from a software perspective.
The proposed solution utilises Aspect-Oriented Programming (AOP) to address security concerns at both compile time and runtime. It is based on a decentralised distributed system that adopts the multi-hop concept to deal with different requested tasks. The proposed system focuses on how to achieve high accuracy, data integrity and high efficiency in the distributed system in real time. This is done by modularising the most efficient security solutions, access control and cryptography, using an Aspect-Oriented Programming language. The experimental results show that the proposed solution overcomes the shortcomings of existing solutions by integrating fully with the decentralised distributed system to achieve dynamic, highly cooperative, high-performance and end-to-end holistic security.
APA, Harvard, Vancouver, ISO, and other styles
47

Rixner, Scott. "Memory system architecture for real-time multitasking systems." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36599.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bischoff, Alexander S. "User-experience-aware system optimisation for mobile systems." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/386570/.

Full text
Abstract:
This thesis considers the concept of Quality of Experience (QoE) in the context of mobile electronic consumer devices, such as smartphones. The modern smartphone is expected to deliver a high level of user experience across a wide variety of tasks, whilst remaining as power efficient as possible. Commonly, mobile devices undergo runtime optimisation to achieve the required level of performance, with energy consumption being a secondary concern. In this thesis, we stress that it is vital not to focus on the raw performance of the device, but instead to concentrate on the needs and desires of the end user. This approach ensures that the end user is satisfied at all times, and that the power consumption for a given level of user experience is minimised. Hence, we advocate user-experience-aware system optimisation. We introduce the concept of Quality of Experience, which has traditionally been used only in the telecommunications industry, to mobile system optimisation. We develop user experience models in the form of utility functions, and use these to translate low-level metrics into the delivered user experience. Upon these models we build simple, yet effective, QoE-aware Central Processing Unit (CPU) and Graphics Processing Unit (GPU) governing algorithms which adjust the performance and power consumption at runtime to meet user experience requirements. When creating our algorithms, we first analyse and characterise the operation of both CPU and GPU workloads. Specifically, we investigate how the level of compute-boundedness or memory-boundedness of CPU workloads affects frequency scalability, as well as determining how the available bandwidth and core count of a GPU affect rendering performance. We combine gem5-based simulation-driven analysis with hardware-based verification in order to validate our QoE-aware governing algorithms. Additionally, we validate the operation of our algorithms using a variety of common mobile workloads.
As part of this work, we have also extended the gem5 simulator to allow us to investigate the potential for fine-grained Dynamic Voltage and Frequency Scaling (DVFS) adjustment, and we use this as a platform to investigate the operation of the Linux CPUFreq governors used on modern mobile platforms.
APA, Harvard, Vancouver, ISO, and other styles
49

Huff, Alison. "A Hydrostatic Pressure Perfusion System for Biological Systems." Miami University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=miami1343970397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Caffall, Dale Scott. "Developing dependable software for a system-of-systems." Diss., Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FCaffall.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles