
Dissertations on the topic "Distributed computing and systems software"



Consult the top 50 dissertations for your research on the topic "Distributed computing and systems software".

Next to every entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the record's metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Mellor, Paul Vincent. "An adaptation of Modula-2 for distributed computing systems." Thesis, University of Hull, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327802.

2

Tarafdar, Ashis. "Software fault tolerance in distributed systems using controlled re-execution /." Digital version accessible at:, 2000. http://wwwlib.umi.com/cr/utexas/main.

3

Tilevich, Eli. "Software Tools for Separating Distribution Concerns." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7518.

Abstract:
With the advent of the Internet, distributed programming has become a necessity for the majority of application domains. Nevertheless, programming distributed systems remains a delicate and complex task. This dissertation explores separating distribution concerns, the process of transforming a centralized monolithic program into a distributed one. This research develops algorithms, techniques, and tools for separating distribution concerns and evaluates the applicability of the developed artifacts by identifying the distribution concerns that they separate and the common architectural characteristics of the centralized programs that they transform successfully. The thesis of this research is that software tools working with standard mainstream languages, systems software, and virtual machines can effectively and efficiently separate distribution concerns from application logic for object-oriented programs that use multiple distinct sets of resources. Among the specific technical contributions of this dissertation are (1) a general algorithm for call-by-copy-restore semantics in remote procedure calls for linked data structures, (2) an analysis heuristic that determines which application objects get passed to which parts of native (i.e., platform-specific) code in the language runtime system for platform-independent binary code applications, (3) a technique for injecting code in such applications that will convert objects to the right representation so that they can be accessed correctly inside both application and native code, (4) an approach to maintaining the Java centralized concurrency and synchronization semantics over remote procedure calls efficiently, and (5) an approach to enabling the execution of legacy Java code remotely from a web browser. The technical contributions of this dissertation have been realized in three software tools for separating distribution concerns: NRMI, middleware with copy-restore semantics; GOTECH, a program generator for distribution; and J-Orchestra, an automatic partitioning system. This dissertation presents several case studies of successfully applying the developed tools to third-party programs.
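As an editorial illustration of the copy-restore semantics described above, the following minimal Python sketch shows the general idea behind call-by-copy-restore for a mutable argument: the callee works on a copy, and the caller's original structure is updated from that copy afterwards. The names and the in-process "remote" call are hypothetical stand-ins, not NRMI's actual API.

```python
import copy

def call_by_copy_restore(proc, arg):
    """Run proc on a deep copy of arg (the 'copy' sent to the server),
    then restore the mutated copy's state into the caller's original
    object, so server-side mutations become visible to the caller."""
    snapshot = copy.deepcopy(arg)     # copy phase
    result = proc(snapshot)          # 'remote' execution mutates the copy
    if isinstance(arg, list):        # restore phase
        arg[:] = snapshot
    elif isinstance(arg, dict):
        arg.clear()
        arg.update(snapshot)
    return result

def append_total(items):
    """A stand-in for a remote procedure that mutates its argument."""
    items.append(sum(items))

data = [1, 2, 3]
call_by_copy_restore(append_total, data)
print(data)  # [1, 2, 3, 6] -- the caller observes the remote mutation
```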
4

Singh, Neeta S. "An automatic code generation tool for partitioned software in distributed computing." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001129.

5

Wang, Koping. "Spider II: A component-based distributed computing system." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1874.

Abstract:
The Spider II system is the second implementation of the Spider project and the first distributed-computation research project in the Department of Computer Science at CSUSB. Spider II is a distributed virtual machine on top of the UNIX or Linux operating system. Spider II features multitasking, load balancing and fault tolerance, which optimize the performance and stability of the system.
6

Okonoboh, Matthias Aifuobhokhan, and Sudhakar Tekkali. "Real-Time Software Vulnerabilities in Cloud Computing : Challenges and Mitigation Techniques." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2645.

Abstract:
Context: Cloud computing is rapidly emerging in the area of distributed computing. At the same time, many organizations consider the technology to be associated with several business risks that are yet to be resolved. These challenges include lack of adequate security, privacy and legal issues, resource allocation, control over data, system integrity, risk assessment, software vulnerabilities and so on, all of which can have a compromising effect in a cloud environment. Organizations worry about how to develop adequate mitigation strategies for effective control measures and how to balance expectations between cloud providers and cloud users. However, much research tends to focus on cloud computing adoption and implementation, with less attention to vulnerabilities and attacks in cloud computing. This paper gives an overview of common challenges and mitigation techniques or practices, describes general security issues and identifies future requirements for security research in cloud computing, given current trends and industrial practices. Objectives: We identified common challenges and linked them with compromised attributes in the cloud, as well as mitigation techniques and their impact on cloud practices. We also identified frameworks we consider relevant for identifying threats due to vulnerabilities, based on information from the reviewed literature and findings. Methods: We conducted a systematic literature review (SLR) specifically to identify empirical studies focused on challenges and mitigation techniques, and to identify mitigation practices for addressing software vulnerabilities and attacks in cloud computing. Studies were selected based on the inclusion/exclusion criteria we defined in the SLR process. We searched four databases: IEEE Xplore, ACM Digital Library, SpringerLink and ScienceDirect. We limited our search to papers published from 2001 to 2010. In addition, we used the data and knowledge collected from the SLR findings to design a questionnaire, which was used to conduct an industrial survey that also identified cloud computing challenges and mitigation practices persistent in industry settings. Results: Based on the SLR, a total of 27 challenges and 20 mitigation techniques were identified. We further identified 7 frameworks we considered relevant for mitigating prevalent real-time software vulnerabilities and attacks in the cloud. The identified challenges and mitigation practices were linked to the compromised cloud attributes and to the ways mitigation practices affect cloud computing, respectively. Furthermore, 5 additional challenges and 3 additional suggested mitigation practices were identified in the survey. Conclusion: This study has identified common challenges and mitigation techniques, as well as frameworks and practices relevant for mitigating real-time software vulnerabilities and attacks in cloud computing. We cannot claim exhaustive identification of the challenges and mitigation practices associated with cloud computing. We acknowledge that our findings might not be sufficient to generalize the effect of the different service models, which include SaaS, IaaS and PaaS, nor of the different deployment models such as private, public, community and hybrid clouds. However, this study may assist both cloud providers and cloud customers with security, privacy, integrity and other related issues, and is useful for identifying further research areas that can help enhance security, privacy and resource allocation and maintain integrity in the cloud environment.
7

Darling, James Campbell Charles. "The application of distributed and mobile computing techniques to advanced simulation and virtual reality systems." Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/843917/.

Abstract:
Current technologies for creating distributed simulations or virtual environments are too limited in terms of scalability and flexibility, particularly in the areas of network saturation, distribution of VR scenes, and co-ordination of large systems of active objects. This thesis proposes the use of mobile and distributed computing technology to alleviate some of these limitations. A study of contemporary technologies for distributed simulation and networked virtual environments has been made, examining the benefits and drawbacks of different techniques. The main theory that has been investigated is that processing of a global simulation space should be spread over a network of computers, the principle of locality cutting the network bandwidth required. Using a prototype language for distributed graph processing, which fully supports mobile programming, experimental systems have been developed to demonstrate the use of distributed processing in creating large-scale virtual environments. The working examples created show that the ideas proposed for distribution of interactive virtual environments are valid, and that mobile programming techniques provide a new direction of development for the field of simulation. A more detailed summary of the work is given in Appendix D. Five publications to date (shown overleaf) have resulted from my involvement in the work, and a number of others have resulted from the overall project.
8

Lillethun, David. "ssIoTa: A system software framework for the internet of things." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53531.

Abstract:
Sensors are widely deployed in our environment, and their number is increasing rapidly. In the near future, billions of devices will all be connected to each other, creating an Internet of Things. Furthermore, computational intelligence is needed to make applications involving these devices truly exciting. In IoT, however, the vast amounts of data will not be statically prepared for batch processing, but rather continually produced and streamed live to data consumers and intelligent algorithms. We refer to applications that perform live analysis on live data streams, bringing intelligence to IoT, as the Analysis of Things. However, the Analysis of Things also comes with a new set of challenges. The data sources are not collected in a single, centralized location, but rather distributed widely across the environment. AoT applications need to be able to access (consume, produce, and share with each other) this data in a way that is natural considering its live streaming nature. The data transport mechanism must also allow easy access to sensors, actuators, and analysis results. Furthermore, analysis applications require computational resources on which to run. We claim that system support for AoT can reduce the complexity of developing and executing such applications. To address this, we make the following contributions: - A framework for systems support of Live Streaming Analysis in the Internet of Things, which we refer to as the Analysis of Things (AoT), including a set of requirements for system design - A system implementation that validates the framework by supporting Analysis of Things applications at a local scale, and a design for a federated system that supports AoT on a wide geographical scale - An empirical system evaluation that validates the system design and implementation, including simulation experiments across a wide-area distributed system We present five broad requirements for the Analysis of Things and discuss one set of specific system support features that can satisfy these requirements. We have implemented a system, called ssIoTa, that implements these features and supports AoT applications running on local resources. The programming model for the system allows applications to be specified simply as operator graphs, by connecting operator inputs to operator outputs and sensor streams. Operators are code components that run arbitrary continuous analysis algorithms on streaming data. By conforming to a provided interface, operators may be developed that can be composed into operator graphs and executed by the system. The system consists of an Execution Environment, in which a Resource Manager manages the available computational resources and the applications running on them, a Stream Registry, in which available data streams can be registered so that they may be discovered and used by applications, and an Operator Store, which serves as a repository for operator code so that components can be shared and reused. Experimental results for the system implementation validate its performance. Many applications are also widely distributed across a geographic area. To support such applications, ssIoTa must be able to run them on infrastructure resources that are also distributed widely. We have designed a system that does so by federating each of the three system components: Operator Store, Stream Registry, and Resource Manager.
The Operator Store is distributed using a distributed hash table (DHT); however, since temporal locality can be expected and data churn is low, caching may be employed to further improve performance. Since sensors exist at particular locations in physical space, queries on the Stream Registry will be based on location. We also introduce the concept of geographical locality. Therefore, range queries in two dimensions must be supported by the federated Stream Registry, while taking advantage of geographical locality for improved average-case performance. To accomplish these goals, we present a design sketch for SkipCAN, a modification of the SkipNet and Content Addressable Network DHTs. Finally, the fundamental issue in the federated Resource Manager is how to distribute the operators of multiple applications across the geographically distributed sites where computational resources can execute them. To address this, we introduce DistAl, a fully distributed algorithm that assigns operators to sites. DistAl also respects the system resource constraints and application preferences for performance and quality of results (QoR), using application-specific utility functions to allow applications to express their preferences. DistAl is validated by simulation results.
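The operator-graph programming model described in this abstract can be suggested with a small, self-contained Python sketch. All names here are hypothetical stand-ins, not ssIoTa's interface: operators wrap continuous per-item analysis functions, and connecting outputs to inputs forms the graph.

```python
class Operator:
    """A code component that runs one continuous analysis step per item."""
    def __init__(self, fn):
        self.fn = fn
        self.downstream = []

    def connect(self, op):
        """Wire this operator's output to another operator's input."""
        self.downstream.append(op)
        return op

    def push(self, item):
        out = self.fn(item)
        if out is not None:            # None means 'emit nothing'
            for op in self.downstream:
                op.push(out)

# Toy graph: temperature sensor -> Celsius-to-Fahrenheit -> threshold alert
to_f = Operator(lambda c: c * 9 / 5 + 32)
alert = Operator(lambda f: print(f"ALERT: {f:.1f} F") if f > 100 else None)
to_f.connect(alert)

for reading in [20.0, 41.0, 37.0]:     # a simulated live sensor stream
    to_f.push(reading)                 # prints one alert, for 41 C
```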
9

Eise, Justin. "A Secure Architecture for Distributed Control of Turbine Engine Systems." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1552556049435026.

10

Li, Fangxing. "A Software Framework for Advanced Power System Analysis: Case Studies in Networks, Distributed Generation, and Distributed Computation." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/28124.

Abstract:
This work presents a software framework for power system analysis, PowerFrame. It is composed of four layers. This four-layer architecture is designed for extensibility and reusability so that more complex power system problems can be tackled within the architecture. In the context of PowerFrame, this work explores complex power system problems. Included in these problems are parallel-placed cables with multiple conductors, and distributed resources operating in unbalanced power distribution systems. Mathematical models are derived. Errors between more exact models and conventional approaches are presented. PowerFrame is also designed to handle distributed computation for intensive power system calculations on multiple, networked computers. Distributed power flow algorithms are presented. Tests on Ethernet LANs show the feasibility of distributed computation under current computer network bandwidth.
Ph. D.
11

Dolci, Alessandro. "Traffic Management in Reti Spontanee basato su Software-Defined Networking." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15240/.

Abstract:
The growing diffusion of mobile devices equipped with heterogeneous network interfaces has generated considerable interest in Mobile Ad-hoc Networks. However, one aspect so far undervalued in the design of such networks is the Quality of Service guarantees provided for ongoing communications. This work presents a solution for managing the quality-of-service level of active interactions within spontaneous networks, based on the Software-Defined Networking architectural paradigm. The solution involved the design and implementation of an extension of the RAMP middleware, previously developed at the Università di Bologna, in order to introduce a centralized network management mode. It organizes the traffic of different applications into dedicated flows and relies on a controller able to communicate with all active nodes, so as to maintain an overall view of the network topology and state and to enforce its decisions.
12

Ghafoor, Sheikh Khaled. "Modeling of an adaptive parallel system with malleable applications in a distributed computing environment." Diss., Mississippi State : Mississippi State University, 2007. http://sun.library.msstate.edu/ETD-db/theses/available/etd-11092007-145420.

13

Lacks, Daniel Jonathan. "MODELING, DESIGN AND EVALUATION OF NETWORKING SYSTEMS AND PROTOCOLS THROUGH SIMULATION." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3792.

Abstract:
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits in many different domains: it reduces the cost of building prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the modeling of physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer has to spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms and the desire for a common infrastructure to model them. One simulation, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic load than the currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize a maximum of 45% power savings and a maximum 25% reduction in queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that, in the worst case, 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions including creating, joining, and resigning from a federation, time management, and event publication and subscription.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering PhD
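The HLA-style core that the abstract above describes (federates joining a federation, publishing and subscribing to events, time-ordered delivery) can be hinted at with a toy sketch like the one below. It is an editorial illustration under simplifying assumptions, not the HLA API or the dissertation's engine.

```python
import heapq

class Federation:
    """Toy event core: federates join with topic subscriptions, publish
    timestamped events, and receive matching events in time order."""
    def __init__(self):
        self.queue = []              # min-heap ordered by timestamp
        self.subs = {}               # federate name -> set of topics

    def join(self, name, topics):
        self.subs[name] = set(topics)

    def publish(self, time, topic, payload):
        heapq.heappush(self.queue, (time, topic, payload))

    def run(self):
        while self.queue:            # time management: lowest time first
            time, topic, payload = heapq.heappop(self.queue)
            for fed, topics in self.subs.items():
                if topic in topics:
                    print(f"t={time}: {fed} <- {topic}: {payload}")

sim = Federation()
sim.join("router", ["packet"])
sim.publish(2.0, "packet", "B->A")
sim.publish(1.0, "packet", "A->B")
sim.run()                            # delivers A->B before B->A
```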
14

Mihailescu, Patrik 1977. "MAE : a mobile agent environment for resource limited devices." Monash University, School of Network Computing, 2003. http://arrow.monash.edu.au/hdl/1959.1/5805.

15

Benda, Klara. "Designing the Sakai Open Academic Environment: A distributed cognition account of the design of a large scale software system." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52233.

Abstract:
Social accounts of technological change make the flexibility and openness of interpretations the starting point of an argument against technological determinism. They suggest that technological change unfolds in the semantic domain, but they focus on the social processes around the interpretations of new technologies, and do not address the conceptual processes of change in interpretations. The dissertation presents an empirically grounded case study of the design process of an open-source online software platform based on the framework of distributed cognition to argue that the cognitive perspective is needed for understanding innovation in software, because it allows us to describe the reflexive and expansive contribution of conceptual processes to new software and the significance of professional epistemic practices in framing the direction of innovation. The framework of distributed cognition brings the social and cognitive perspectives together on account of its understanding of conceptual processes as distributed over time, among people, and between humans and artifacts. The dissertation argues that an evolving open-source software landscape became translated into the open-ended local design space of a new software project in a process of infrastructural implosion, and the design space prompted participants to outline and pursue epistemic strategies of sense-making and learning about the contexts of use. The result was a process of conceptual modeling, which resulted in a conceptually novel user interface. Prototyping professional practices of user-centered design lent directionality to this conceptual process in terms of a focus on individual activities with the user interface. Social approaches to software design under the broad umbrella of human-centered computing have been seeking to inform the design on the basis of empirical contributions about a social context. The analysis has shown that empirical engagement with the contexts of use followed from conceptual modeling, and concern about real world contexts was aligned with the user-centered direction that design was taking. I also point out a social-technical gap in the design process in connection with the repeated performance challenges that the platform was facing, and describe the possibility of a social-technical imagination.
16

Gadea, Cristian. "Architectures and Algorithms for Real-Time Web-Based Collaboration." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41944.

Abstract:
Originating in the theory of distributed computing, the optimistic consistency control method known as Operational Transformation (OT) has been studied by researchers since the late 1980s. Algorithms were devised for managing the concurrent nature of user actions and for maintaining the consistency of replicated data as changes are introduced by multiple geographically-distributed users in real-time. Web-Based Collaborative Platforms are now essential components of modern organizations, with real-time protocols and standards such as WebSocket enabling the development of online collaboration tools to facilitate information sharing, content creation, document management, audio and video streaming, and communication among team members. Products such as Google Docs have shown that centralized web-based co-editing is now possible in a reliable way, with benefits in user productivity and efficiency. However, as the demand for effective real-time collaboration between team members continues to increase, web applications require new synchronization algorithms and architectures to resolve the editing conflicts that may appear when multiple individuals are modifying the same data at the same time. In addition, collaborative applications need to be supported by scalable distributed backend services, as can be achieved with "serverless" technologies. While much existing research has addressed problems of optimistic consistency maintenance, previous approaches have not focused on capturing the dynamic client-server interactions of OT systems by modeling them as real-time systems using Finite State Machine (FSM) theory. This thesis includes an exploration of how the principles of control theory and hierarchical FSMs can be applied to model the distributed system behavior when processing and transforming HTML DOM changes initiated by multiple concurrent users. The FSM-based OT implementation is simulated, including with random inputs, and the approach is shown to be invaluable for organizing the algorithms required for synchronizing complex data structures. The real-time feedback control mechanism is used to develop a Web-Based Collaborative Platform based on a new OT integration algorithm and architecture that brings "Virtual DOM" concepts together with state-of-the-art OT principles to enable the next generation of collaborative web-based experiences, as shown with implementations of a rich-text editor and a 3D virtual environment.
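The following short Python sketch illustrates the classic Operational Transformation step the abstract builds on: transforming one user's insertion against a concurrent insertion so that both replicas converge. It is a textbook-style illustration, not the thesis's FSM-based algorithm.

```python
def transform_insert(op_a, op_b):
    """Transform insert op_a against a concurrent insert op_b that has
    already been applied; ops are (position, text) pairs."""
    pos_a, text_a = op_a
    pos_b, text_b = op_b
    if pos_a >= pos_b:               # op_b shifted op_a's target position
        return (pos_a + len(text_b), text_a)
    return op_a

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "Hello world"
a = (5, ",")    # user A inserts "," after "Hello"
b = (11, "!")   # user B concurrently appends "!"

# Site 1 applies b first, then a transformed against b.
site1 = apply_insert(apply_insert(doc, b), transform_insert(a, b))
# Site 2 applies a first, then b transformed against a.
site2 = apply_insert(apply_insert(doc, a), transform_insert(b, a))
print(site1, "|", site2)   # both sites converge on "Hello, world!"
```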
17

Zhu, Julie. "A peer-to-peer software framework for cooperative robotic system." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16210/1/Julie_Zhu_Thesis.pdf.

Abstract:
Recent developments in embedded systems give robots access to the Internet and make them more flexible and capable of performing more complex applications. However, these robots are still limited in terms of size, CPU power, storage resources and memory. Consequently, they have only been manufactured for certain specific applications and cannot be re-used for others. This presents us with a challenge: to design a software framework, Robot Colony, that enables robots to take on a wide range of applications not originally provided by their manufacturers, achieving greater functionality, flexibility and utility. This research outlines the architecture and functionality of the Robot Colony in supporting collaboration between devices in the P2P community, and also analyses the JXTA platform, which was the framework originally proposed. Lastly, we present a customized P2P architecture that specifically addresses the interaction between software components across the network. We further discuss the following technologies applied in the framework: * an XML-based Directory Service Provider * HTTP-based publish/describe control commands * Remote Process Invoke. To complete the project, a thorough evaluation of the framework based on either the JXTA platform or the customized P2P channel has been conducted. This evaluation provides basic statistical data for the proposed framework design and implementation. Furthermore, we have presented a real-time demo at the Smart Device lab of the Queensland University of Technology.
18

Zhu, Julie. "A peer-to-peer software framework for cooperative robotic system." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16210/.

Abstract:
Recent developments in embedded systems give robots access to the Internet and make them more flexible and capable of performing more complex applications. However, these robots are still limited in terms of size, CPU power, storage resources and memory. Consequently, they have only been manufactured for certain specific applications and cannot be re-used for others. This presents us with a challenge: to design a software framework, Robot Colony, that enables robots to take on a wide range of applications not originally provided by their manufacturers, achieving greater functionality, flexibility and utility. This research outlines the architecture and functionality of the Robot Colony in supporting collaboration between devices in the P2P community, and also analyses the JXTA platform, which was the framework originally proposed. Lastly, we present a customized P2P architecture that specifically addresses the interaction between software components across the network. We further discuss the following technologies applied in the framework: * an XML-based Directory Service Provider * HTTP-based publish/describe control commands * Remote Process Invoke. To complete the project, a thorough evaluation of the framework based on either the JXTA platform or the customized P2P channel has been conducted. This evaluation provides basic statistical data for the proposed framework design and implementation. Furthermore, we have presented a real-time demo at the Smart Device lab of the Queensland University of Technology.
19

Prado, Pedro Felipe do. "Um processo de desenvolvimento de software focado em sistemas distribuídos autonômicos." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13092017-110656/.

Abstract:
Distributed Systems (DSs) have shown increasing management complexity, and they must also guarantee Quality of Service (QoS) to their users. Autonomic Computing (AC) emerges as a way of transforming DSs into Autonomic Distributed Systems (ADSs) with self-management capabilities. However, no software development process focused on the creation of ADSs was found. In the vast majority of related work, a DS is simply presented together with the aspect of AC to be implemented, the technique used and the results obtained. This is only one part of developing an ADS, and it does not cover the steps from requirements definition through software maintenance. More importantly, it does not show how such requirements can be formalized and later satisfied through the self-management provided by AC. This thesis proposes a software development process aimed at ADSs. To this end, different areas of knowledge were integrated: the Unified Software Development Process (UP), DSs, AC, Operations Research (OR) and performance evaluation of computer systems. The proof of concept was carried out through three case studies, all focusing on NP-hard problems: (i) off-line optimization (the multiple-choice knapsack problem), (ii) online optimization (the multiple-choice knapsack problem), and (iii) the creation of the planner module of an autonomic manager for scheduling requests (the generalized assignment problem). The results of the first case study show that it is possible to use OR and performance evaluation to define a base architecture for the ADS in question, as well as to reduce the size of the search space when the ADS is running. The second proves that it is possible to guarantee the QoS of the ADS during its execution, using the formalization provided by OR and its respective solution. The third proves that it is possible to use OR to formalize the self-management problem, as well as performance evaluation to compare different algorithms or architectural models for the ADS.
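For readers unfamiliar with the multiple-choice knapsack problem used in the first two case studies, here is a compact dynamic-programming sketch in Python. It is an editorial illustration of the problem itself, not the thesis's formulation or solver; the example data are hypothetical.

```python
def mckp(classes, capacity):
    """Multiple-choice knapsack: pick exactly one (weight, value) option
    from each class, maximizing total value within the capacity.
    dp[w] holds the best value reachable at exactly weight w, or None."""
    dp = [0] + [None] * capacity
    for options in classes:
        nxt = [None] * (capacity + 1)
        for w, best in enumerate(dp):
            if best is None:
                continue
            for weight, value in options:
                if w + weight <= capacity:
                    cand = best + value
                    if nxt[w + weight] is None or cand > nxt[w + weight]:
                        nxt[w + weight] = cand
        dp = nxt
    return max(v for v in dp if v is not None)

# Each class could model one service's candidate configurations (cost, utility).
classes = [[(2, 3), (3, 5)],
           [(1, 1), (4, 6)]]
print(mckp(classes, capacity=6))   # -> 9, choosing (2, 3) and (4, 6)
```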
20

Qhobosheane, Sehlabaka. "Implementation of a proton therapy supervisory system for iThemba Labs." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71676.

21

Cancela, Paulo Filipe Neves Bento. "Orchestration of heterogeneous middleware services and its application to a comand and control platform." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/1970.

Abstract:
MSc Dissertation in Computer Engineering
Distributed objects were, until recently, the leading technology for the design and implementation of component-based architectures such as those based on services, better known as Service-Oriented Architectures (SOA). Although established in the market for more than a decade, and therefore mature, these technologies have failed to carry the SOA concept over to the Web. Web services are a recent technology that has been growing in the last few years. Their acceptance by enterprises and organizations has increased, as they seem to overcome the Web- and interoperability-related problems of distributed-objects technology. Web services provide interoperability between systems, and that is undoubtedly a strength of this technology, since interoperability is a crucial aspect of today's business. Moreover, the widespread use of services led to the recent introduction of the service composition concept which, although technology-independent, is closely related to Web services, and there is no tool support for other technologies. Nonetheless, distributed objects still play an important role in the development of distributed systems, namely due to performance issues that matter when it comes to the internals of a platform. However, the use of service composition in these distributed-object-based platforms requires exposing their constituent services as Web services. The main objective of this master's thesis is to improve the state of the art in support for the composition of services originating from distributed-object-based platforms. Bearing in mind that such platforms are composed of several services, the idea is to present a platform as a set of Web services in order to be able to orchestrate them.
22

Mupparaju, Naveen. "Performance Evaluation and Comparison of Distributed Messaging Using Message Oriented Middleware." UNF Digital Commons, 2013. http://digitalcommons.unf.edu/etd/456.

Abstract:
Message Oriented Middleware (MOM) is an enabling technology for modern event-driven applications that are typically based on publish/subscribe communication [Eugster03]. Enterprises typically contain hundreds of applications operating in environments with diverse databases and operating systems. Integration of these applications is required to coordinate the business process. Unfortunately, this is no easy task. Enterprise Integration, according to Brosey et al. (2001), "aims to connect and combines people, processes, systems, and technologies to ensure that the right people and the right processes have the right information and the right resources at the right time" [Brosey01]. Communication between different applications can be achieved using synchronous and asynchronous communication tools. In synchronous communication, both parties involved must be online (for example, a telephone call), whereas in asynchronous communication, only one member needs to be online (email). Middleware is software that helps two applications communicate with one another. Remote Procedure Calls (RPC) and Object Request Brokers (ORB) are two types of synchronous middleware: when they send a request, they must wait for an immediate reply, which can decrease an application's performance when there is no need for synchronous communication. Even though asynchronous distributed messaging using message-oriented middleware is widely used in industry, little work has been done on evaluating the performance of the various open-source MOMs. The objective of this work was to benchmark and evaluate the performance of three different open-source MOMs in the publish/subscribe and point-to-point domains, along with a functional comparison and a qualitative study from the developer's perspective.
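The two messaging domains being benchmarked, publish/subscribe and point-to-point, can be illustrated with a toy in-process broker in Python. This is an editorial sketch of the semantics only; a real MOM runs as a networked server with persistence and delivery guarantees.

```python
import queue

class Broker:
    """Toy MOM core: topics fan a copy of each message out to every
    subscriber (publish/subscribe); named queues hand each message to
    exactly one receiver (point-to-point)."""
    def __init__(self):
        self.topics = {}   # topic name -> list of subscriber queues
        self.queues = {}   # queue name -> single shared queue

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, msg):
        for q in self.topics.get(topic, []):
            q.put(msg)                   # every subscriber gets a copy

    def send(self, qname, msg):
        self.queues.setdefault(qname, queue.Queue()).put(msg)

    def receive(self, qname):
        return self.queues[qname].get()  # exactly one consumer takes it

broker = Broker()
a, b = broker.subscribe("orders"), broker.subscribe("orders")
broker.publish("orders", "order-42")
print(a.get(), b.get())                  # both subscribers receive it
broker.send("billing", "invoice-42")
print(broker.receive("billing"))         # a single consumer receives it
```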
23

Lago, Nelson Posse. ""Processamento distribuído de áudio em tempo real"." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-05102004-154239/.

Abstract:
Computer systems for real-time multimedia processing require high processing power. Problems that depend on high processing power are usually solved by using parallel or distributed computing techniques; however, the combination of the difficulties of both real-time and parallel programming has led the development of applications for real-time multimedia processing for general purpose computer systems to be based on centralized and single-processor systems. In several systems for multimedia processing, there is a need for low latency during the interaction with the user, which reinforces the tendency towards single-processor development. In this work, we implemented a mechanism for synchronous and distributed audio processing with low latency on a local area network which makes the use of a low cost distributed system for this kind of processing possible. The main goal is to allow the use of distributed systems for recording and editing of musical material in home and small studios, bypassing the need for high-cost equipment. The system we implemented is made of two parts: the first, generic, implemented as a middleware for synchronous and distributed processing of continuous media with low latency; and the second, based on the first, geared towards audio processing and compatible with legacy applications based on the standard LADSPA interface. We expect that future research and applications that share the needs of the system developed here make use of the middleware we developed, both for other kinds of audio processing as well as for the processing of other media forms, such as video.
24

Shafabakhsh, Benyamin. "Research on Interprocess Communication in Microservices Architecture." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277940.

Abstract:
With the substantial growth of cloud computing over the past decade, microservices has gained significant popularity in the industry as a new architectural pattern. It promises a cloud-native architecture that breaks large applications into a collection of small, independent, and distributed packages. Since microservices-based applications are distributed, one of the key challenges when designing an application is the choice of mechanism by which services communicate with each other. There are several approaches for implementing interprocess communication (IPC) in microservices, and each comes with different advantages and trade-offs. While theoretical and informal comparisons exist between them, this thesis has taken an experimental approach to compare and contrast common forms of IPC communication. In this thesis, IPC methods have been categorized into synchronous and asynchronous categories. The synchronous type consists of the REST API and Google gRPC, while the asynchronous type uses a message broker known as RabbitMQ. Further, a collection of microservices for an e-commerce scenario has been designed and developed using all three IPC methods. A load test has been executed against each model to obtain quantitative data related to the performance efficiency and availability of every method. Developing the same set of functionalities using different IPC methods has offered qualitative data related to the scalability and complexity of each IPC model. The evaluation of the experiment indicates that, although there is no universal IPC solution that can be applied in all cases, asynchronous IPC patterns should be the preferred option when designing the system. Nevertheless, the findings of this work also suggest there exist scenarios where synchronous patterns can be more suitable.
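The synchronous/asynchronous distinction studied in this thesis can be suggested with a minimal Python sketch: a blocking call stands in for REST/gRPC-style request-response, and a queue hand-off stands in for broker-based messaging. The timings and names are illustrative assumptions, not the thesis's benchmark.

```python
import queue
import threading
import time

def handler(payload):
    time.sleep(0.05)                  # simulated downstream processing
    return payload.upper()

def sync_call(payload):
    """Synchronous IPC: the caller blocks until the reply arrives."""
    return handler(payload)

work = queue.Queue()
threading.Thread(target=lambda: handler(work.get()), daemon=True).start()

def async_send(payload):
    """Asynchronous IPC: the caller enqueues the message and moves on."""
    work.put(payload)

t0 = time.perf_counter()
sync_call("ping")                     # waits ~50 ms for the response
t1 = time.perf_counter()
async_send("ping")                    # returns almost immediately
t2 = time.perf_counter()
print(f"sync: {t1 - t0:.3f} s, async: {t2 - t1:.6f} s")
```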
25

Glaab, Markus. "A distributed service delivery platform for automotive environments : enhancing communication capabilities of an M2M service platform for automotive application." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/11249.

Abstract:
The automotive domain is changing. On the way to more convenient, safe, and efficient vehicles, the role of electronic controllers and particularly software has increased significantly for many years, and vehicles have become software-intensive systems. Furthermore, vehicles are connected to the Internet to enable Advanced Driver Assistance Systems and enhanced In-Vehicle Infotainment functionalities. This widens the automotive software and system landscape beyond the physical vehicle boundaries to also include external backend servers in the cloud. Moreover, the connectivity facilitates new kinds of distributed functionalities, making the vehicle a part of an Intelligent Transportation System (ITS) and thus an important example of a future Internet of Things (IoT). Manufacturers, however, are confronted with the challenging task of integrating this ever-increasing range of functionalities, with heterogeneous or even contradictory requirements, into a homogeneous overall system. This requires new software platforms and architectural approaches. In this regard, the connectivity to fixed-side backend systems not only introduces additional challenges, but also enables new approaches for addressing them. The vehicle-to-backend approaches currently emerging are dominated by proprietary solutions, in clear contradiction to the requirements of ITS scenarios, which call for interoperability across the broad range of vehicles and manufacturers. Therefore, this research aims at the development and propagation of a new concept of a universal distributed Automotive Service Delivery Platform (ASDP) as an enabler for future automotive functionalities, not limited to ITS applications. Since Machine-to-Machine (M2M) communication is considered a primary building block for the IoT, emergent standards such as the oneM2M service platform are selected as the initial architectural hypothesis for the realisation of an ASDP. Accordingly, this project describes a oneM2M-based ASDP as a reference configuration of the oneM2M service platform for automotive environments. In the research, the general applicability of the oneM2M service platform for the proposed ASDP is shown. However, the research also identifies shortcomings of the current oneM2M platform with respect to the capabilities needed for efficient communication and data exchange policies. It is pointed out that, for example, distributed traffic-efficiency or vehicle-maintenance functionalities are not efficiently treated by the standard. This may also have negative privacy impacts. Following this analysis, this research proposes novel enhancements to the oneM2M service platform, such as application-data-dependent criteria for data exchange and policy aggregation. The feasibility and advantages of the newly proposed approach are evaluated by means of a proof-of-concept implementation and experiments with selected automotive scenarios. The results show the benefits of the proposed enhancements for a oneM2M-based ASDP, without neglecting to indicate their advantages for other domains of the oneM2M landscape where they could be applied as well.
26

Li, Guangxing. "Supporting distributed realtime computing." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309077.

27

Freeh, Vincent William 1959. "Software support for distributed and parallel computing." Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/290588.

Abstract:
This dissertation addresses creating portable and efficient parallel programs for scientific computing. Both of these aspects are important. Portability means the program can execute on any parallel machine. Efficiency means there is little or no penalty for using our solution instead of hand-coded, architecture-specific programs. Although parallel programming is necessarily more difficult than sequential programming, it is currently more complicated than it has to be. The Filaments package provides fine-grain parallelism and a shared memory programming model. It can be viewed as a "least common denominator" for parallel scientific computing. Fine-grain parallelism supports any number (even thousands) of threads, and shared memory provides a natural programming model. Consequently, the combination allows the programmer to concentrate on the application and not the architecture of the target machine. The Filaments package makes extensive use of run-time decision making. Run-time decision making has several advantages. First, it is often possible to make a better decision because more information is available at run time. Second, run-time decision making can obviate the need for complex, often intractable, static analysis. Moreover, run-time decision making leads to much of the package's efficiency.
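The fine-grain, shared-memory style the abstract describes can be hinted at with a small Python sketch: one tiny task per data point over a shared array. Filaments itself is a C package with its own threading machinery; this stand-in only conveys the programming model.

```python
from concurrent.futures import ThreadPoolExecutor

def relax(grid, out, i):
    """One fine-grain task: a single-point averaging stencil."""
    out[i] = (grid[i - 1] + grid[i + 1]) / 2.0

grid = [0.0] + [100.0] * 8 + [0.0]    # shared-memory input array
out = grid[:]                         # shared-memory output array
with ThreadPoolExecutor() as pool:
    # Filaments would spawn one lightweight thread per point; a thread
    # pool over tiny tasks stands in for that here.
    list(pool.map(lambda i: relax(grid, out, i), range(1, len(grid) - 1)))
print(out)
```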
28

Wulf, Lars. "Interaction and security in distributed computing." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362116.

29

Chevalier, Arthur. "Optimisation du placement des licences logicielles dans le Cloud pour un déploiement économique et efficient." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN071.

Abstract:
This thesis takes place in the field of Software Asset Management (SAM): license management, usage rights, and compliance with contractual rules. When talking about proprietary software, these rules are often misinterpreted or totally misunderstood. In exchange for the freedom to license our usage as we see fit, in compliance with the contract, publishers have the right to audit. They can check that the rules are being followed and, when they are not respected, impose penalties, often financial ones. This can lead to disastrous situations, such as the lawsuit between AbInBev and SAP in which the latter claimed a USD 600 million penalty. The emergence of the Cloud has greatly amplified the problem, because software usage rights were not originally designed for this type of architecture. After an academic and industrial history of Software Asset Management, from its roots to the most recent work on the Cloud and software identification, we look at the licensing methods of major publishers such as Oracle, IBM and SAP before introducing the various problems inherent in SAM. The lack of standardization in metrics and specific usage rights, and the difference in paradigm brought about by the Cloud and, soon, the virtualized network, make the situation more complicated than it already was. Our research is oriented towards modeling these licenses and metrics in order to abstract away the legal and ambiguous side of contracts. This abstraction allows us to develop software placement algorithms that ensure that contractual rules are respected at all times. This licensing model also allows us to introduce a deployment heuristic that optimizes several criteria at the time of software placement, such as performance, energy and license cost. We then introduce the problems associated with deploying several software products simultaneously while optimizing these same criteria, and we prove the NP-completeness of the associated decision problem. To meet these criteria, we present a placement algorithm that approaches the optimum and uses the above heuristic. In parallel, we have developed a SAM tool that uses this research to offer automated and fully generic software management in a Cloud architecture. All this work has been conducted in collaboration with Orange and tested in various proofs of concept before being fully integrated into the SAM tooling.
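The multi-criteria placement heuristic mentioned in the abstract can be sketched as a simple weighted scoring of candidate hosts. The function and its inputs below are hypothetical illustrations, not the thesis's algorithm.

```python
def place(hosts, license_cost, perf, energy, weights=(1.0, 1.0, 1.0)):
    """Pick the host with the lowest weighted score over license cost,
    performance (lower is better) and energy. A hypothetical stand-in
    for a multi-criteria, compliance-aware placement heuristic."""
    wc, wp, we = weights
    best, best_score = None, float("inf")
    for h in hosts:
        score = wc * license_cost[h] + wp * perf[h] + we * energy[h]
        if score < best_score:
            best, best_score = h, score
    return best

print(place(["h1", "h2"],
            license_cost={"h1": 4, "h2": 1},   # e.g. fewer cores to license
            perf={"h1": 1, "h2": 2},
            energy={"h1": 2, "h2": 2}))        # -> h2 (score 5 vs 7)
```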
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Kim, Song Hun. "Distributed Reconfigurable Simulation for Communication Systems." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/29700.

Повний текст джерела
Анотація:
The simulation of physical-layer communication systems often requires long execution times, due to the nature of Monte Carlo simulation: to obtain a valid result by producing enough errors, the number of bits or symbols being simulated must significantly exceed the inverse of the bit error rate of interest. This often results in hours or even days of execution on a personal computer or workstation. Reconfigurable devices can perform certain functions faster than general-purpose processors, and they are more flexible than Application Specific Integrated Circuit (ASIC) devices. This fast yet flexible property of reconfigurable devices can be used for the simulation of communication systems. However, although reconfigurable devices are more flexible than ASIC devices, they are often not compatible with each other. Programs are usually written in hardware description languages such as the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), and a program written for one device often cannot be used for another because these devices all have different architectures and programs are architecture-specific. Distributed computing, which is not a new concept, refers to interconnecting a number of computing elements, often heterogeneous, to perform a given task. By applying distributed computing, reconfigurable devices and digital signal processors can be connected to form a distributed reconfigurable simulator. This work shows that using reconfigurable devices can greatly increase the speed of simulation. A simple physical-layer communication system model was created using a WildForce board, a reconfigurable device, and its performance compared to a traditional software simulation of the same system; using the reconfigurable device, performance increased by approximately one hundred times. This demonstrates the possibility of using reconfigurable devices for the simulation of physical-layer communication systems. Also, a middleware architecture for distributed reconfigurable simulation is proposed and implemented. Using the middleware, reconfigurable devices and various computing elements can be integrated. The proposed middleware has several components: the master works as the server for the system; an object is any device that has computing capability; a resource is an algorithm or function implemented for a certain object; and an object and its resources are connected to the system through an agent. This middleware system is tested with three different objects and six resources, and the performance is analyzed. The results show that it is possible to interconnect various objects to perform a distributed simulation using reconfigurable devices. Possible future research to enhance the architecture is also discussed.
Ph. D.
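The scale of the problem this abstract describes follows from a standard rule of thumb: to observe on the order of 100 errors, a Monte Carlo run must simulate roughly 100/BER bits. A back-of-envelope sketch (the 1 Mbit/s software simulation rate is an assumption for illustration):

    def required_bits(ber, target_errors=100):
        # Bits needed to expect `target_errors` errors at the given BER.
        return int(target_errors / ber)

    for ber in (1e-3, 1e-6, 1e-9):
        bits = required_bits(ber)
        hours = bits / 1e6 / 3600  # at an assumed 1 Mbit/s simulation rate
        print(f"BER {ber:.0e}: ~{bits:.1e} bits, ~{hours:.1f} h")

At a BER of 1e-9 this already means about 28 hours of simulated traffic, which is why hardware acceleration pays off.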
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Toor, Salman. "Managing applications and data in distributed computing infrastructures." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-121099.

Повний текст джерела
Анотація:
Over the last few decades, the needs of collaborative, distributed scientific communities for computational power and data storage have grown very rapidly. Distributed computing infrastructures such as computing and storage grids provide means to connect geographically distributed resources and help address the needs of these communities. Much progress has been made in developing and operating grids, but several issues still need further attention. This thesis discusses three different aspects of managing large-scale scientific applications in grids:
• Using large-scale scientific applications is often in itself a complex task, and setting them up and running experiments in a distributed environment adds another level of complexity. It is important to design general-purpose and application-specific frameworks that enhance the overall productivity of the scientists. The thesis presents further development of a general-purpose framework in which existing portal technology is combined with tools for robust and middleware-independent job management. A pilot implementation of a domain-specific problem-solving environment based on a grid-enabled R solution is also presented.
• Many current and future applications will need large-scale storage systems. Centralized systems are ultimately not scalable enough to handle huge data volumes and can also have problems with security and availability. An alternative is a reliable and efficient distributed storage system. The thesis describes the architecture of Chelonia, a self-healing, grid-aware distributed storage cloud, and presents performance results for a pilot implementation.
• In a distributed computing infrastructure it is very important to manage and utilize the available resources efficiently. The thesis presents a review of different resource brokering techniques and how they are implemented in different production-level middlewares. A modified resource allocation model for the Advanced Resource Connector (ARC) middleware is also described, and performance experiments are presented.
eSSENCE
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Faye, Maurice-Djibril. "Déploiement auto-adaptatif d'intergiciel sur plate-forme élastique." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1036/document.

Повний текст джерела
Анотація:
This thesis studies how to make the deployment of a middleware self-adaptive. The middleware considered here is hierarchical and distributed and can be modeled by a graph: a vertex models a process that can be deployed on a physical or virtual machine of a grid/cloud infrastructure, and an edge models a communication link between two processes. The middleware provides high-performance computing services to its clients. Since grid/cloud infrastructures are elastic (nodes join and leave), a static deployment is not an ideal solution: after a failure, the whole deployment process may have to be redone, which is costly. We therefore propose a rules-based self-stabilizing algorithm so that, when faced with certain kinds of faults, the middleware eventually recovers a stable state in finite time without external intervention. The faults considered are transient faults, simulated by the loss of nodes, the addition of new nodes, and the loss of links between two nodes, all of which may modify the deployment topology. To evaluate the algorithm we designed an ad hoc discrete-event simulator. The simulation results show that a deployment subjected to transient faults adapts itself. Before designing the simulator, we first proposed a model of distributed infrastructures (describing grid/cloud-type environments), a model for describing certain kinds of hierarchical middleware, and finally a model describing a running deployment, that is, the mapping between the middleware processes and the hardware they run on.
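The guarded-rule style of self-stabilization described above can be sketched in a few lines; the single "orphaned node" rule below is an illustrative assumption, not the thesis's actual rule set:

    # Each node repeatedly evaluates guards against its local view and fires
    # the matching correction until no guard is enabled (a stable state).

    def rule_orphaned(node, alive):
        return node["parent"] not in alive          # guard: parent is gone

    def fix_orphaned(node, alive):
        node["parent"] = min(alive)                 # action: re-attach

    RULES = [(rule_orphaned, fix_orphaned)]

    def stabilize(nodes, alive):
        changed = True
        while changed:                              # converges in finite time
            changed = False
            for n in nodes:
                for guard, action in RULES:
                    if guard(n, alive):
                        action(n, alive)
                        changed = True

    nodes = [{"id": 2, "parent": 9}, {"id": 3, "parent": 1}]
    stabilize(nodes, alive={1, 2, 3})
    print(nodes)   # node 2 has been re-attached to node 1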
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Bridges, Christopher P. "Agent computing platform for distributed satellite systems." Thesis, University of Surrey, 2009. http://epubs.surrey.ac.uk/770399/.

Повний текст джерела
Анотація:
Space and satellite systems are considered the most extreme environment to design for and are fraught with engineering difficulty. Performance metrics such as fault tolerance, reliability, pre-determinism and heritage remain high on the list of requirements for all satellite missions. The advent of modern-day electronics and miniaturisation, state-of-the-art computing and networking technologies has enabled research into 'distributed satellite systems', where multiple spacecraft work collaboratively to perform a mission using intersatellite connectivity. A satellite can be considered one of many nodes in an autonomous and decentralised system, analogous to a mobile ad-hoc network, enabling opportunities in multiple-point sensing, greater communications capabilities, and spacecraft redundancy. Existing satellite constellations can implement distributed satellite system scenarios but provide unpredictable relative ranges and rates due to various space perturbations. This creates a disconnected environment, making it difficult to perform distributed mission operations. Without orbit maintenance, limited onboard resources in power and mass could mean lower processing and networking capabilities, which need to rise dramatically to meet the requirements of these new missions. This thesis investigates the use of an Agent-based distributed computing platform to enable ad-hoc satellite networking. A review of Agents for real-time systems and their applications concludes that, despite being utilised in complex control systems, most Agent middleware is unsuited to mission-critical, real-time, networked, embedded systems. Two constellation scenarios are simulated for distributed satellite missions, highlighting orbital issues such as relative distance and mission lifetime. Computing requirements for such distributed computing opportunities, using intersatellite connectivity and Agent technologies, have led to a novel system-on-a-chip design that includes a general-purpose processor core and a dedicated Java co-processing core to enable hard real-time Agent functionalities and software Agent applications at minimal overhead. Common Agent middleware platforms are compared and a software configuration is chosen with relevant Agent services. A distributed image compression case study is presented, and a picosatellite testbed is designed to provide realistic computing and power constraints.
Стилі APA, Harvard, Vancouver, ISO та ін.
34

張立新 and Lap-sun Cheung. "Load balancing in distributed object computing systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31224179.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
35

PINA, FELIPE FREIXO. "UTILIZATION OF DHT IN DISTRIBUTED COMPUTING SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19132@1.

Повний текст джерела
Анотація:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
P2P architectures stand out for their decentralization and for encouraging cooperation among nodes. These characteristics allow systems based on this architecture to tolerate faults and to distribute resources among the nodes (via replication), and systems built on P2P networks using the DHT technique are scalable. In contrast to the technique's more common use in content distribution systems, this work investigates applications of DHTs in distributed computing systems, where the shared resource is each node's processing capacity. Four message routing protocols were analyzed to identify those most appropriate for distributed computing systems, and the concept of node groups was applied in order to increase fault tolerance and distribute tasks among the nodes of the network.
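As a reminder of the mechanism underlying the DHT technique, here is a minimal consistent-hashing sketch; the node names and the choice of SHA-1 are illustrative:

    import bisect, hashlib

    def h(value):
        # Map an arbitrary string onto the hash ring.
        return int(hashlib.sha1(value.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, nodes):
            self.ring = sorted((h(n), n) for n in nodes)
            self.keys = [p for p, _ in self.ring]

        def lookup(self, key):
            # First node clockwise from the key's position (wraps around).
            i = bisect.bisect_right(self.keys, h(key)) % len(self.ring)
            return self.ring[i][1]

    ring = Ring(["node-a", "node-b", "node-c"])
    print(ring.lookup("task-42"))   # node responsible for this task

Because keys and nodes share one ring, a node joining or leaving only remaps the keys in its arc, which is what makes DHT-based systems scale.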
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Stößer, Jochen. "Market-based scheduling in distributed computing systems." [S.l. : s.n.], 2009. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000010437.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Cheung, Lap-sun. "Load balancing in distributed object computing systems." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2329428.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Ajmani, Sameer 1976. "Automatic software upgrades for distributed systems." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28717.

Повний текст джерела
Анотація:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 156-164).
Upgrading the software of long-lived, highly-available distributed systems is difficult. It is not possible to upgrade all the nodes in a system at once, since some nodes may be unavailable and halting the system for an upgrade is unacceptable. Instead, upgrades may happen gradually, and there may be long periods of time when different nodes are running different software versions and need to communicate using incompatible protocols. We present a methodology and infrastructure that address these challenges and make it possible to upgrade distributed systems automatically while limiting service disruption. Our methodology defines how to enable nodes to interoperate across versions, how to preserve the state of a system across upgrades, and how to schedule an upgrade so as to limit service disruption. The approach is modular: defining an upgrade requires understanding only the new software and the version it replaces. The upgrade infrastructure is a generic platform for distributing and installing software while enabling nodes to interoperate across versions. The infrastructure requires no access to the system source code and is transparent: node software is unaware that different versions even exist. We have implemented a prototype of the infrastructure called Upstart that intercepts socket communication using a dynamically-linked C++ library. Experiments show that Upstart has low overhead and works well for both local-area and Internet systems.
by Sameer Ajmani.
Ph.D.
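The interoperation idea, nodes exchanging messages while running different versions, can be sketched as translation functions applied at the communication boundary; the message formats below are hypothetical, and this is not Upstart's actual mechanism:

    # A transparent interceptor converts between protocol versions so the
    # node software only ever sees its own format.

    def v1_to_v2(msg):
        return {"op": msg["cmd"], "args": msg.get("params", [])}

    def v2_to_v1(msg):
        return {"cmd": msg["op"], "params": msg["args"]}

    def deliver(msg, sender_version, receiver_version):
        if sender_version == receiver_version:
            return msg
        table = {(1, 2): v1_to_v2, (2, 1): v2_to_v1}
        return table[(sender_version, receiver_version)](msg)

    print(deliver({"cmd": "put", "params": ["k", "v"]}, 1, 2))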
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Chow, Ka-po. "Load-balancing in distributed multi-agent computing /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk:8888/cgi-bin/hkuto%5Ftoc%5Fpdf?B2295644x.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Simons, Christof. "Context aware applications in mobile distributed systems /." Aachen : Shaker, 2008. http://d-nb.info/987900757/04.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Olson, Chandra. "Jini an investigation in distributed computing /." [Florida] : State University System of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/ank7122/chandra.PDF.

Повний текст джерела
Анотація:
Thesis (M.E.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains viii, 71 p.; also contains graphics. Vita. Includes bibliographical references (p. 69-70).
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Obrovac, Marko. "Chemical Computing for Distributed Systems: Algorithms and Implementation." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00925257.

Повний текст джерела
Анотація:
With the emergence of highly heterogeneous, dynamic, large-scale distributed platforms, the need for a way to program and manage them efficiently has arisen. The concept of autonomic computing proposes to build self-managed systems, that is, systems that are aware of their components and environment and can configure, optimize, heal and protect themselves. In the quest to build such systems, declarative programming, whose goal is to ease the programmer's task by separating control from the logic of the computation, has regained much interest recently. In particular, rule-based programming is regarded as a promising model in this search for adequate programming abstractions for these platforms. However, while these models are gaining attention, they create a demand for generic tools able to execute them at large scale. The chemical programming model, designed following the chemical metaphor, is a higher-order rule-based programming model with non-deterministic execution, in which rules are applied concurrently on a multiset of data. In this thesis we propose the design, development and experimental evaluation of a distributed middleware for the execution of chemical programs on large-scale, generic platforms. The proposed architecture combines a peer-to-peer communication layer with a protocol for the atomic capture of the objects on which rules are to be applied, and an efficient termination-detection scheme. We describe the middleware prototype implementing this architecture. Based on its deployment on a large-scale experimental platform, we present performance results that confirm the analytical complexities obtained, and we show experimentally the viability of such a programming model.
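To give a flavor of the chemical programming model, here is a tiny Gamma-style sketch (an assumed textbook example, not the middleware's code): a reaction rule is applied to non-deterministically chosen pairs of molecules in a multiset until the solution is inert:

    import random

    def chemical_max(multiset):
        sol = list(multiset)
        while len(sol) > 1:
            a, b = random.sample(range(len(sol)), 2)   # pick two molecules
            x, y = sol[a], sol[b]
            # reaction rule: x, y -> max(x, y); order of reactions is irrelevant
            sol = [m for i, m in enumerate(sol) if i not in (a, b)]
            sol.append(max(x, y))
        return sol[0]

    print(chemical_max([4, 8, 15, 16, 23, 42]))   # 42

Because the rule can fire on any pair in any order, reactions can run concurrently on disjoint molecules, which is exactly what the atomic-capture protocol has to guarantee at scale.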
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Gerami, Majid. "Coding, Computing, and Communication in Distributed Storage Systems." Doctoral thesis, KTH, Kommunikationsteori, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193887.

Повний текст джерела
Анотація:
Conventional studies in communication networks mostly focus on securely and reliably transmitting data from one or more source nodes to multiple destinations. A more general problem appears when the destination nodes are interested in obtaining functions of the data available in distributed source nodes. To obtain such a function, transmitting all the data to a destination node and then computing the function might be inefficient; to exploit network resources efficiently, the general problem combines distributed computing with coding and communication. This problem has applications in distributed systems, e.g., wireless sensor networks, distributed storage systems, and distributed computing systems. Following this general problem formulation, we study the optimal and secure recovery of lost data in storage nodes and the reconstruction of a version of a file in distributed storage systems. The significance of this study lies in the fact that new trends in communications, including big data, the Internet of Things, and low-latency, high-reliability communication, challenge existing centralized data storage systems. Distributed storage systems can rectify these issues by distributing thousands of storage nodes (possibly around the globe) and benefiting users by bringing data close to them. Yet distributing the storage nodes brings new challenges. In these distributed systems, where storage nodes are connected through links and servers, communication plays a major role in performance. In addition, part of the network may fail, or multiple versions of a file may exist because of communication failures or delays. Moreover, an intruder can overhear the communication between storage nodes and obtain some information about the stored data. There are therefore challenges in reliability, security, availability, and consistency. To increase reliability, systems need to store redundant data on storage nodes and employ error control codes. To maintain reliability in a dynamic environment where storage nodes can fail, the system should have an autonomous repair process: it should regenerate failed nodes with the help of the other storage nodes. The repair process demands bandwidth and energy, or, in general, transmission costs. We propose novel techniques to reduce the repair cost in distributed storage systems. First, we propose surviving-node cooperation in repair, meaning that surviving nodes can combine their received data with their own stored data and then transmit toward the new node. We also study the repair problem in multi-hop networks and consider the cost of transmitting data between storage nodes. While the classical repair model assumes direct links between the new node and the surviving nodes, such links may not be available, either because of failures or because of their costs. We formulate an optimization problem to minimize the repair cost and compare two systems, with and without surviving-node cooperation. Second, we study the repair problem where the links between storage nodes are lossy, e.g., due to server congestion, load balancing, or an unreliable physical layer (wireless links). We model the lossy links by packet erasure channels and derive the fundamental bandwidth-storage tradeoff in packet erasure networks. In addition, we propose dedicated-for-repair storage nodes to reduce the repair bandwidth.
Third, we generalize the repair model by proposing the concept of partial repair, in which storage nodes may lose parts of their stored data. In partial repair, the lost data is recovered by exchanging data between storage nodes, using the data still available in the storage nodes as side information. For efficient partial repair, we propose two-layer coding in distributed storage systems and derive the optimal bandwidth for partial repair. Fourth, we study security in distributed storage systems, and in particular security in partial repair: we propose codes that make partial repair secure in the sense of both strong and weak information-theoretic security definitions. Finally, we study consistency in distributed storage systems. Consistency means that distinct users obtain the latest version of a file in a system that stores multiple versions of it. Given the probability that a storage node receives a version and a constraint on node storage space, we aim to find the optimal encoding of the multiple versions of a file that maximizes the probability that a read client connecting to a number of storage nodes obtains the latest version, or a version close to the latest.
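For context, the bandwidth-storage tradeoff mentioned above builds on the classical cut-set bound for regenerating codes (Dimakis et al.): a file of size M split over k pieces, with alpha units stored per node, is repairable from d helper nodes sending beta units each only if M <= sum_{i=0}^{k-1} min(alpha, (d-i)*beta). A small feasibility check, with illustrative parameter values:

    def repair_feasible(M, k, d, alpha, beta):
        # Cut-set bound for regenerating codes: the flow to a data collector
        # must carry the whole file.
        return M <= sum(min(alpha, (d - i) * beta) for i in range(k))

    # Minimum-storage point (alpha = M/k) for M=12, k=3, d=4:
    M, k, d = 12, 3, 4
    alpha = M / k
    for beta in (1, 2, 4):
        print(beta, repair_feasible(M, k, d, alpha, beta))
        # beta=2 is the smallest feasible repair bandwidth here,
        # matching beta = M / (k * (d - k + 1))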
Pages 153-168 are removed due to copyright reasons.
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Worek, William J. "Matching Genetic Sequences in Distributed Adaptive Computing Systems." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/34374.

Повний текст джерела
Анотація:
Distributed adaptive computing systems (ACS) allow developers to design applications using multiple programmable devices. The ACS API, an API created for distributed adaptive computing, gives developers the ability to design scalable ACS systems in a cluster networking environment for large applications. One such application, found in the field of bioinformatics, is the DNA sequence alignment problem. This thesis presents a runtime-reconfigurable FPGA implementation of the Smith-Waterman similarity comparison algorithm. Additionally, it presents tools designed for the ACS API that assist developers creating applications in a heterogeneous distributed adaptive computing environment.
Master of Science
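For reference, the recurrence that such an FPGA design implements is the standard Smith-Waterman dynamic program; a compact software version follows, with illustrative scoring parameters:

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        # Local alignment score with a linear gap penalty: each cell is the
        # best of a diagonal match/mismatch, a gap, or restarting at zero.
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman("ACACACTA", "AGCACACA"))

The cell dependencies (left, up, diagonal) let hardware compute each anti-diagonal in parallel, which is why the algorithm maps well onto FPGAs.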
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Chow, Ka-po, and 周嘉寶. "Load-balancing in distributed multi-agent computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B3122426X.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Simons, Christof. "Context aware applications in mobile distributed systems." Aachen Shaker, 2007. http://d-nb.info/987900757/04.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Hui, S. C. "Software development of real-time distributed systems." Thesis, University of Sussex, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375841.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Durrett, John Randall. "Distributed information systems design through software teams /." Digital version, 1999. http://wwwlib.umi.com/cr/utexas/fullcit?p9959479.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Lin, Chia-en. "Performance Engineering of Software Web Services and Distributed Software Systems." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc500103/.

Повний текст джерела
Анотація:
The promise of service-oriented computing and the availability of Web services promote the delivery and creation of new services based on existing services, in order to meet new demands and new markets. As Web and Internet based services move into Clouds, the inter-dependency of services and their complexity will increase substantially. There are standards and frameworks for specifying and composing Web services based on functional properties, but mechanisms to individually address non-functional properties of services and their compositions have not been well established. Furthermore, the Cloud ontology depicts service layers from a high level, such as Application and Software, to a low level, such as Infrastructure and Platform, and each component that resides in one layer can be useful to another layer as a service. This hints at the amount of complexity resulting from not only horizontal but also vertical integration in building and deploying a composite service. To meet these requirements and facilitate the use of Web services, we first propose a WSDL extension that permits the specification of non-functional or Quality of Service (QoS) properties. On top of this foundation, a QoS-aware framework is established that adapts publicly available tools for Web services, augmented by ontology management tools, along with tools for performance modeling, to exemplify how non-functional properties such as response time, throughput, or utilization of services can be addressed in the service acquisition and composition process. To support Web service composition standards, we extended the framework with additional qualitative information in the service descriptions using the Business Process Execution Language (BPEL); engineers can use BPEL to explore design options and have the QoS properties analyzed for the composite service. The main issue in our research is performance evaluation in software systems and engineering: the first half of this dissertation addresses Web service computation, and the second half addresses performance antipattern detection and elimination. Performance analysis of software systems is complex due to the large number of components and the interactions among them; without the knowledge of experienced experts, it is difficult to diagnose performance anomalies and pinpoint the root causes of problems. Software performance antipatterns are similar to design patterns in that they document what to avoid and how to fix performance problems when they appear. Although the idea of applying antipatterns is promising, there are gaps in matching the symptoms and generating feedback solutions for redesign. In this work, we analyze performance antipatterns to extract detectable features, influential factors, and resource involvements, so that we can lay the foundation to detect their presence. We propose a system abstraction layering model and suggestive profiling methods for performance antipattern detection and elimination. The proposed solutions can be used during the refactoring phase and can be included in the software development life cycle. The proposed tools and utilities are implemented, and their use is demonstrated with the RUBiS benchmark.
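As an example of the kind of performance-model reasoning such a QoS framework relies on, here is a toy M/M/1 estimate of service utilization and mean response time; the model choice and the numbers are assumptions for illustration, not the dissertation's tooling:

    def mm1(arrival_rate, service_rate):
        # Classical M/M/1 results: utilization rho = lambda/mu and
        # mean response time 1/(mu - lambda), valid only while lambda < mu.
        assert arrival_rate < service_rate, "queue is unstable"
        rho = arrival_rate / service_rate
        response = 1.0 / (service_rate - arrival_rate)
        return rho, response

    rho, r = mm1(arrival_rate=80.0, service_rate=100.0)  # requests/second
    print(f"utilization={rho:.0%}, mean response={r*1000:.0f} ms")  # 80%, 50 ms

A composition engine can evaluate such a model per candidate service and reject compositions whose predicted response time violates the declared QoS.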
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Cederström, Andreas. "On using Desktop Grid Computing in software industry." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5800.

Повний текст джерела
Анотація:
Context. When dealing with large data sets and heavy calculations, the common solution is clusters, supercomputers, or Grids of these two. However, large computational power can also be gained by utilizing the unused cycles of regular home or office computers; these are referred to as Desktop Grids.
Objectives. In this study we review the current field of open source Desktop Grid computing solutions capable of dealing with a heterogeneous set of clients and a dynamically sized Desktop Grid. We investigate current use, interest of use, and the priority of key attributes of Desktop Grids. Finally, we want to show how time-effective Desktop Grids are compared to execution on a single machine, and in the process show the effort needed to set up a Desktop Grid and start computing. The overall purpose of this study is to provide a path for industry organizations taking their first step into Desktop Grid computing.
Methods. We use a systematic review to collect information on existing open source Desktop Grid solutions; studies are selected based on inclusion criteria and a quality assessment. A survey questionnaire is used to assess industry usage, interest, and prioritization of Desktop Grid attributes. We conduct an experiment to show execution speedup as well as setup effort.
Results. We found ten open source Desktop Grids fulfilling our requirements. The survey shows that Desktop Grids are used to a very small extent within industry, while a majority of the participants state that there is interest in Desktop Grids. As a result of the experiment, we can say that we achieved very high speedup and that the effort needed to set up a Desktop Grid is about 40 hours for one person with no prior experience of the selected Desktop Grid system.
Conclusions. We conclude that industry organizations have a possible need for Desktop Grids, but in order to be more successful, Desktop Grid developers must put more effort into areas such as automated testing and code compilation.
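The experiment's headline metric reduces to a simple computation; the figures below are made up for illustration, not the study's measurements:

    def speedup(t_single, t_grid, workers):
        # speedup = single-machine time / grid time;
        # efficiency normalizes the speedup by the number of workers.
        s = t_single / t_grid
        return s, s / workers

    s, e = speedup(t_single=3600.0, t_grid=150.0, workers=30)
    print(f"speedup {s:.1f}x, efficiency {e:.0%}")  # 24.0x, 80%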
Стилі APA, Harvard, Vancouver, ISO та ін.