To see the other types of publications on this topic, follow the link: 4606 Distributed computing and systems software.

Dissertations on the topic "4606 Distributed computing and systems software"



Consult the top 28 dissertations for your research on the topic "4606 Distributed computing and systems software."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever these are available in the publication's metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Mellor, Paul Vincent. "An adaptation of Modula-2 for distributed computing systems." Thesis, University of Hull, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327802.

2

Tarafdar, Ashis. "Software fault tolerance in distributed systems using controlled re-execution." Digital version accessible at http://wwwlib.umi.com/cr/utexas/main, 2000.

3

Tilevich, Eli. "Software Tools for Separating Distribution Concerns." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7518.

Abstract:
With the advent of the Internet, distributed programming has become a necessity for the majority of application domains. Nevertheless, programming distributed systems remains a delicate and complex task. This dissertation explores separating distribution concerns, the process of transforming a centralized monolithic program into a distributed one. This research develops algorithms, techniques, and tools for separating distribution concerns and evaluates the applicability of the developed artifacts by identifying the distribution concerns that they separate and the common architectural characteristics of the centralized programs that they transform successfully. The thesis of this research is that software tools working with standard mainstream languages, systems software, and virtual machines can effectively and efficiently separate distribution concerns from application logic for object-oriented programs that use multiple distinct sets of resources. Among the specific technical contributions of this dissertation are (1) a general algorithm for call-by-copy-restore semantics in remote procedure calls for linked data structures, (2) an analysis heuristic that determines which application objects get passed to which parts of native (i.e., platform-specific) code in the language runtime system for platform-independent binary code applications, (3) a technique for injecting code in such applications that will convert objects to the right representation so that they can be accessed correctly inside both application and native code, (4) an approach to maintaining the Java centralized concurrency and synchronization semantics over remote procedure calls efficiently, and (5) an approach to enabling the execution of legacy Java code remotely from a web browser. The technical contributions of this dissertation have been realized in three software tools for separating distribution concerns: NRMI, middleware with copy-restore semantics; GOTECH, a program generator for distribution; and J-Orchestra, an automatic partitioning system. This dissertation presents several case studies of successfully applying the developed tools to third-party programs.
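The call-by-copy-restore idea in contribution (1) can be shown in miniature. The sketch below is our illustration, not NRMI code: the caller deep-copies a linked structure, the "remote" procedure mutates the copy, and the mutations are written back into the caller's original nodes so existing aliases stay consistent. The real algorithm handles arbitrary object graphs via an original-to-copy map; this toy version simply walks a list in lockstep.

```python
import copy

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def call_by_copy_restore(proc, head):
    """Copy-restore RPC in miniature: ship a deep copy, run the remote
    procedure on it, then write the copy's state back into the originals."""
    snapshot = copy.deepcopy(head)      # marshalled copy sent to the "server"
    proc(snapshot)                      # remote procedure mutates the copy
    orig, mod = head, snapshot
    while orig is not None and mod is not None:
        orig.value = mod.value          # restore step: field-by-field write-back
        orig, mod = orig.next, mod.next

def increment_all(node):                # stands in for the remote procedure
    while node is not None:
        node.value += 1
        node = node.next

head = Node(1, Node(2, Node(3)))
alias = head.next                       # caller-side alias into the structure
call_by_copy_restore(increment_all, head)
assert alias.value == 3                 # the alias sees the restored update
```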
4

Singh, Neeta S. "An automatic code generation tool for partitioned software in distributed computing." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001129.

5

Darling, James Campbell Charles. "The application of distributed and mobile computing techniques to advanced simulation and virtual reality systems." Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/843917/.

Abstract:
Current technologies for creating distributed simulations or virtual environments are too limited in terms of scalability and flexibility, particularly in the areas of network saturation, distribution of VR scenes, and co-ordination of large systems of active objects. This thesis proposes the use of mobile and distributed computing technology to alleviate some of these limitations. A study of contemporary technologies for distributed simulation and networked virtual environments has been made, examining the benefits and drawbacks of different techniques. The main theory that has been investigated is that processing of a global simulation space should be spread over a network of computers, the principle of locality cutting the network bandwidth required. Using a prototype language for distributed graph processing, which fully supports mobile programming, experimental systems have been developed to demonstrate the use of distributed processing in creating large-scale virtual environments. The working examples created show that the ideas proposed for distribution of interactive virtual environments are valid, and that mobile programming techniques provide a new direction of development for the field of simulation. A more detailed summary of the work is given in Appendix D. Five publications to date (shown overleaf) have resulted from my involvement in the work, and a number of others have resulted from the overall project.
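The locality principle invoked here, that a node only needs updates from the part of the simulation space near it, can be sketched as a simple interest-management filter. This is our illustration; the thesis's distributed graph-processing language is not shown.

```python
def within(a, b, radius):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= radius ** 2

def route_updates(objects, nodes, radius):
    """objects: name -> position; nodes: name -> avatar position.
    Each node receives only the updates inside its area of interest,
    instead of a broadcast of the whole simulation space."""
    return {n: [o for o, pos in objects.items() if within(pos, here, radius)]
            for n, here in nodes.items()}

objects = {"tree": (0, 0), "car": (40, 5), "dog": (2, 3)}
nodes = {"alice": (1, 1), "bob": (42, 4)}
print(route_updates(objects, nodes, radius=10))
# {'alice': ['tree', 'dog'], 'bob': ['car']}
```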
6

Wang, Koping. "Spider II: A component-based distributed computing system." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1874.

Abstract:
The Spider II system is the second implementation of the Spider project, the first distributed-computing research project in the Department of Computer Science at CSUSB. Spider II is a distributed virtual machine on top of the UNIX or LINUX operating system. Spider II features multi-tasking, load balancing and fault tolerance, which optimize the performance and stability of the system.
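As a rough illustration of the load balancing and failover behaviour claimed here (our sketch, not Spider II's actual code), a dispatcher might send each task to the least-loaded live node and re-dispatch the lost work when a node fails:

```python
class Cluster:
    def __init__(self, nodes):
        self.load = {n: 0 for n in nodes}

    def dispatch(self, task_cost):
        node = min(self.load, key=self.load.get)   # least-loaded live node
        self.load[node] += task_cost
        return node

    def node_failed(self, node):
        pending = self.load.pop(node)              # fault tolerance: re-dispatch
        if self.load:
            self.dispatch(pending)

c = Cluster(["unix1", "unix2", "linux1"])
print([c.dispatch(cost) for cost in (3, 2, 2, 1)])
c.node_failed("unix1")
print(c.load)                                      # survivors absorb the work
```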
7

Eise, Justin. "A Secure Architecture for Distributed Control of Turbine Engine Systems." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1552556049435026.

8

Okonoboh, Matthias Aifuobhokhan, and Sudhakar Tekkali. "Real-Time Software Vulnerabilities in Cloud Computing : Challenges and Mitigation Techniques." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2645.

Abstract:
Context: Cloud computing is rapidly emerging in the area of distributed computing. At the same time, many organizations consider the technology to carry several business risks that are yet to be resolved. These challenges include inadequate security, privacy and legal issues, resource allocation, control over data, system integrity, risk assessment and software vulnerabilities, all of which can have a compromising effect in a cloud environment. Organizations worry about how to develop adequate mitigation strategies for effective control measures and how to balance expectations between cloud providers and cloud users. However, much research tends to focus on cloud computing adoption and implementation, with less attention paid to vulnerabilities and attacks in cloud computing. This thesis gives an overview of common challenges and mitigation techniques and practices, describes general security issues, and identifies future requirements for security research in cloud computing, given current trends and industrial practice. Objectives: We identified common challenges and linked them to the cloud attributes they compromise, as well as mitigation techniques and their impact on cloud practice. We also identified frameworks we consider relevant for identifying threats due to vulnerabilities, based on information from the reviewed literature and our findings. Methods: We conducted a systematic literature review (SLR) to identify empirical studies focused on challenges and mitigation techniques, and to identify mitigation practices for addressing software vulnerabilities and attacks in cloud computing. Studies were selected based on the inclusion/exclusion criteria we defined in the SLR process. We searched four databases, IEEE Xplore, ACM Digital Library, SpringerLink and ScienceDirect, limiting the search to papers published from 2001 to 2010. In addition, we used the data and knowledge gathered from the SLR to design a questionnaire for an industrial survey, which also identified the cloud computing challenges and mitigation practices prevalent in industry settings. Results: Based on the SLR, a total of 27 challenges and 20 mitigation techniques were identified. We further identified 7 frameworks we considered relevant for mitigating prevalent real-time software vulnerabilities and attacks in the cloud. The identified challenges and mitigation practices were linked to the compromised cloud attributes and to the ways mitigation practices affect cloud computing, respectively. Furthermore, 5 additional challenges and 3 additional suggested mitigation practices were identified in the survey. Conclusion: This study has identified common challenges and mitigation techniques, as well as frameworks and practices relevant for mitigating real-time software vulnerabilities and attacks in cloud computing. We cannot claim an exhaustive identification of the challenges and mitigation practices associated with cloud computing, and we acknowledge that our findings might not be sufficient to generalize across the different service models (SaaS, IaaS and PaaS) or the different deployment models (private, public, community and hybrid). Nevertheless, this study can assist both cloud providers and cloud customers with security, privacy, integrity and other related issues, and is useful for identifying further research areas that can help enhance security, privacy and resource allocation and maintain integrity in the cloud environment.
9

Lillethun, David. "ssIoTa: A system software framework for the internet of things." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53531.

Abstract:
Sensors are widely deployed in our environment, and their number is increasing rapidly. In the near future, billions of devices will all be connected to each other, creating an Internet of Things. Furthermore, computational intelligence is needed to make applications involving these devices truly exciting. In IoT, however, the vast amounts of data will not be statically prepared for batch processing, but rather continually produced and streamed live to data consumers and intelligent algorithms. We refer to applications that perform live analysis on live data streams, bringing intelligence to IoT, as the Analysis of Things (AoT). However, the Analysis of Things also comes with a new set of challenges. The data sources are not collected in a single, centralized location, but rather distributed widely across the environment. AoT applications need to be able to access (consume, produce, and share with each other) this data in a way that is natural considering its live streaming nature. The data transport mechanism must also allow easy access to sensors, actuators, and analysis results. Furthermore, analysis applications require computational resources on which to run. We claim that system support for AoT can reduce the complexity of developing and executing such applications. To address this, we make the following contributions:

- A framework for systems support of Live Streaming Analysis in the Internet of Things, which we refer to as the Analysis of Things (AoT), including a set of requirements for system design
- A system implementation that validates the framework by supporting Analysis of Things applications at a local scale, and a design for a federated system that supports AoT on a wide geographical scale
- An empirical system evaluation that validates the system design and implementation, including simulation experiments across a wide-area distributed system

We present five broad requirements for the Analysis of Things and discuss one set of specific system support features that can satisfy these requirements. We have implemented a system, called ssIoTa, that implements these features and supports AoT applications running on local resources. The programming model for the system allows applications to be specified simply as operator graphs, by connecting operator inputs to operator outputs and sensor streams. Operators are code components that run arbitrary continuous analysis algorithms on streaming data. By conforming to a provided interface, operators may be developed that can be composed into operator graphs and executed by the system. The system consists of an Execution Environment, in which a Resource Manager manages the available computational resources and the applications running on them, a Stream Registry, in which available data streams can be registered so that they may be discovered and used by applications, and an Operator Store, which serves as a repository for operator code so that components can be shared and reused. Experimental results for the system implementation validate its performance. Many applications are also widely distributed across a geographic area. To support such applications, ssIoTa must be able to run them on infrastructure resources that are also distributed widely. We have designed a system that does so by federating each of the three system components: Operator Store, Stream Registry, and Resource Manager. The Operator Store is distributed using a distributed hash table (DHT); however, since temporal locality can be expected and data churn is low, caching may be employed to further improve performance. Since sensors exist at particular locations in physical space, queries on the Stream Registry will be based on location. We also introduce the concept of geographical locality. Therefore, range queries in two dimensions must be supported by the federated Stream Registry, while taking advantage of geographical locality for improved average-case performance. To accomplish these goals, we present a design sketch for SkipCAN, a modification of the SkipNet and Content Addressable Network DHTs. Finally, the fundamental issue in the federated Resource Manager is how to distribute the operators of multiple applications across the geographically distributed sites where computational resources can execute them. To address this, we introduce DistAl, a fully distributed algorithm that assigns operators to sites. DistAl also respects system resource constraints and application preferences for performance and quality of results (QoR), using application-specific utility functions to allow applications to express their preferences. DistAl is validated by simulation results.
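The operator-graph programming model described above can be sketched in a few lines. This is our minimal illustration of the idea (operators conforming to a process/emit interface, composed by connecting outputs to inputs), not ssIoTa's actual API:

```python
from collections import deque

class Operator:
    """Assumed operator interface: consume one item, emit zero or more."""
    def __init__(self):
        self.subscribers = []
    def connect(self, downstream):
        self.subscribers.append(downstream)
    def emit(self, item):
        for s in self.subscribers:
            s.process(item)
    def process(self, item):
        raise NotImplementedError

class Average(Operator):
    def __init__(self, window):
        super().__init__()
        self.buf = deque(maxlen=window)
    def process(self, item):                 # sliding-window average
        self.buf.append(item)
        self.emit(sum(self.buf) / len(self.buf))

class Threshold(Operator):
    def __init__(self, limit):
        super().__init__()
        self.limit = limit
    def process(self, item):                 # pass only anomalous values
        if item > self.limit:
            self.emit(("ALERT", item))

class Printer(Operator):
    def process(self, item):
        print(item)

# sensor stream -> sliding average -> threshold -> sink
avg, thr, out = Average(window=3), Threshold(limit=25.0), Printer()
avg.connect(thr)
thr.connect(out)
for reading in [20.0, 24.0, 31.0, 33.0]:     # a simulated sensor stream
    avg.process(reading)
```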
10

Dolci, Alessandro. "Traffic Management in Reti Spontanee basato su Software-Defined Networking." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15240/.

Abstract:
The growing diffusion of mobile devices equipped with heterogeneous network interfaces has generated considerable interest in Mobile Ad-hoc Networks. However, one aspect that has so far been underestimated in the design of such networks is the Quality of Service guarantees provided for ongoing communications. This work presents a solution for managing the quality-of-service level of active interactions within spontaneous networks, based on the Software-Defined Networking architectural paradigm. The solution involved the design and implementation of an extension of the RAMP middleware, previously developed at the Università di Bologna, in order to introduce a centralized mode of network management. It organizes the traffic of different applications into dedicated flows and relies on a controller able to communicate with all active nodes, so as to maintain an overall view of the network topology and state and to enforce its decisions.
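A minimal sketch of the centralized pattern described, a controller that keeps a global view of the topology and installs per-application flows, is shown below. The class, method names and shortest-path policy are our invention, not the RAMP extension's API:

```python
from collections import deque

class Controller:
    """Sketch of a centralized controller: global topology view plus
    per-application flow installation (names are invented)."""
    def __init__(self):
        self.links = {}                  # node -> set of neighbours
        self.flows = {}                  # flow id -> (app, path)

    def link_up(self, a, b):
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def shortest_path(self, src, dst):   # breadth-first search
        prev, seen, q = {}, {src}, deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                path = [dst]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return path[::-1]
            for v in self.links.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    prev[v] = u
                    q.append(v)
        return None

    def install_flow(self, flow_id, app, src, dst):
        path = self.shortest_path(src, dst)   # the controller's global decision
        if path is not None:
            self.flows[flow_id] = (app, path) # dedicated flow per application
        return path

c = Controller()
for a, b in [("n1", "n2"), ("n2", "n3"), ("n1", "n4"), ("n4", "n3")]:
    c.link_up(a, b)
print(c.install_flow(1, "video-call", "n1", "n3"))   # e.g. ['n1', 'n2', 'n3']
```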
11

Ghafoor, Sheikh Khaled. "Modeling of an adaptive parallel system with malleable applications in a distributed computing environment." Diss., Mississippi State : Mississippi State University, 2007. http://sun.library.msstate.edu/ETD-db/theses/available/etd-11092007-145420.

12

Lacks, Daniel Jonathan. "MODELING, DESIGN AND EVALUATION OF NETWORKING SYSTEMS AND PROTOCOLS THROUGH SIMULATION." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3792.

Abstract:
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits which apply to many different domains: it reduces the cost of creating different prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the modeling of physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer has to spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms, along with the desire for a common infrastructure to model them. One simulation, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic load than the currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize a maximum of 45% power savings and a maximum of 25% reduced queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that, in the worst case, 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions including creating, joining, and resigning from a federation, time management, and event publication and subscription.
Ph.D., School of Electrical Engineering and Computer Science, Computer Engineering
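For readers unfamiliar with the k-ary n-cube interconnect that 3DInterconnect simulates, the sketch below (ours, not the simulator's code) enumerates a node's neighbours: each of the n base-k digits of the node's address is incremented and decremented modulo k.

```python
def neighbors(node, k, n):
    """Neighbours of a node in a k-ary n-cube: each of the n digits of the
    node's base-k address is incremented and decremented modulo k."""
    digits = [(node // k**d) % k for d in range(n)]
    out = []
    for d in range(n):
        for step in (1, k - 1):                   # +1 and -1 modulo k
            nd = digits.copy()
            nd[d] = (nd[d] + step) % k
            out.append(sum(v * k**j for j, v in enumerate(nd)))
    return sorted(set(out))                       # k=2 folds +1/-1 together

# 4-ary 2-cube (a 4x4 torus): node 0 is adjacent to 1, 3, 4 and 12
print(neighbors(0, k=4, n=2))                     # [1, 3, 4, 12]
```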
13

Mihailescu, Patrik 1977. "MAE : a mobile agent environment for resource limited devices." Monash University, School of Network Computing, 2003. http://arrow.monash.edu.au/hdl/1959.1/5805.

14

Gadea, Cristian. "Architectures and Algorithms for Real-Time Web-Based Collaboration." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41944.

Abstract:
Originating in the theory of distributed computing, the optimistic consistency control method known as Operational Transformation (OT) has been studied by researchers since the late 1980s. Algorithms were devised for managing the concurrent nature of user actions and for maintaining the consistency of replicated data as changes are introduced by multiple geographically-distributed users in real-time. Web-Based Collaborative Platforms are now essential components of modern organizations, with real-time protocols and standards such as WebSocket enabling the development of online collaboration tools to facilitate information sharing, content creation, document management, audio and video streaming, and communication among team members. Products such as Google Docs have shown that centralized web-based co-editing is now possible in a reliable way, with benefits in user productivity and efficiency. However, as the demand for effective real-time collaboration between team members continues to increase, web applications require new synchronization algorithms and architectures to resolve the editing conflicts that may appear when multiple individuals are modifying the same data at the same time. In addition, collaborative applications need to be supported by scalable distributed backend services, as can be achieved with "serverless" technologies. While much existing research has addressed problems of optimistic consistency maintenance, previous approaches have not focused on capturing the dynamic client-server interactions of OT systems by modeling them as real-time systems using Finite State Machine (FSM) theory. This thesis includes an exploration of how the principles of control theory and hierarchical FSMs can be applied to model the distributed system behavior when processing and transforming HTML DOM changes initiated by multiple concurrent users. The FSM-based OT implementation is simulated, including with random inputs, and the approach is shown to be invaluable for organizing the algorithms required for synchronizing complex data structures. The real-time feedback control mechanism is used to develop a Web-Based Collaborative Platform based on a new OT integration algorithm and architecture that brings "Virtual DOM" concepts together with state-of-the-art OT principles to enable the next generation of collaborative web-based experiences, as shown with implementations of a rich-text editor and a 3D virtual environment.
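The core OT operation such systems are built around can be shown in miniature. The sketch below (ours; character-wise inserts only, with site identifiers breaking position ties) demonstrates the convergence property that the dissertation's FSM machinery manages for full HTML DOM structures:

```python
def transform_insert(op_a, op_b):
    """Transform insert op_a against concurrent insert op_b.
    Each op is (position, text, site_id); site_id breaks position ties
    so every replica converges to the same document."""
    pos_a, text_a, site_a = op_a
    pos_b, text_b, site_b = op_b
    if pos_a < pos_b or (pos_a == pos_b and site_a < site_b):
        return op_a                               # unaffected by op_b
    return (pos_a + len(text_b), text_a, site_a)  # shift past b's insertion

def apply_insert(doc, op):
    pos, text, _ = op
    return doc[:pos] + text + doc[pos:]

doc = "abc"
a = (1, "X", 1)   # site 1 inserts "X" at offset 1
b = (1, "Y", 2)   # site 2 concurrently inserts "Y" at offset 1
# each site applies its own op, then the transformed remote op
site1 = apply_insert(apply_insert(doc, a), transform_insert(b, a))
site2 = apply_insert(apply_insert(doc, b), transform_insert(a, b))
assert site1 == site2 == "aXYbc"                  # both replicas converge
```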
15

Prado, Pedro Felipe do. "Um processo de desenvolvimento de software focado em sistemas distribuídos autonômicos." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13092017-110656/.

Abstract:
Distributed Systems (DSs) have become increasingly complex to manage, while also needing to guarantee Quality of Service (QoS) to their users. Autonomic Computing (AC) emerges as a way of transforming DSs into Autonomic Distributed Systems (ADSs) with self-management capabilities. However, no software development process focused on the creation of ADSs was found in the literature. The vast majority of related work simply presents a DS, together with the aspect of AC to be implemented, the technique used and the results obtained. This is only one part of the development of an ADS, and it does not cover the process from requirements definition through to software maintenance. More importantly, it does not show how such requirements can be formalized and subsequently satisfied through the self-management provided by AC. This thesis proposes a software development process aimed at ADSs. To this end, different areas of knowledge were integrated: the Unified Software Development Process (UP), DSs, AC, Operations Research (OR) and Performance Evaluation of Computer Systems. The proof of concept was carried out through three case studies, all focused on NP-hard problems: (i) off-line optimization (the multiple-choice knapsack problem), (ii) online optimization (the multiple-choice knapsack problem), and (iii) creation of the planner module of an autonomic manager to schedule requests (the generalized assignment problem). The results of the first case study show that it is possible to use OR and performance evaluation to define a base architecture for the ADS in question, as well as to reduce the size of the search space when the ADS is running. The second proves that it is possible to guarantee the QoS of the ADS during its execution, using the formalization provided by OR and its respective solution. The third proves that it is possible to use OR to formalize the self-management problem, as well as performance evaluation to assess different algorithms or architectural models for the ADS.
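Two of the case studies formalize self-management as the multiple-choice knapsack problem. Below is a small dynamic-programming sketch of that formulation (our illustration with invented numbers: pick exactly one configuration per tier under a shared budget, maximizing a QoS score):

```python
def mckp(groups, capacity):
    """Multiple-choice knapsack: pick exactly one (cost, value) option from
    every group, total cost <= capacity, maximizing total value."""
    NEG = float("-inf")
    best = [0] + [NEG] * capacity          # best[c] = best value at exact cost c
    for options in groups:
        nxt = [NEG] * (capacity + 1)
        for c, base in enumerate(best):
            if base == NEG:
                continue
            for cost, value in options:
                if c + cost <= capacity:
                    nxt[c + cost] = max(nxt[c + cost], base + value)
        best = nxt
    feasible = [v for v in best if v != NEG]
    return max(feasible) if feasible else None

# two server configurations per tier as (cost, QoS score); budget of 8
tiers = [[(2, 10), (4, 18)],      # web tier
         [(3, 12), (5, 20)],      # application tier
         [(1, 5),  (3, 9)]]       # database tier
print(mckp(tiers, capacity=8))    # 35: the best achievable total QoS score
```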
16

Mupparaju, Naveen. "Performance Evaluation and Comparison of Distributed Messaging Using Message Oriented Middleware." UNF Digital Commons, 2013. http://digitalcommons.unf.edu/etd/456.

Abstract:
Message Oriented Middleware (MOM) is an enabling technology for modern event-driven applications that are typically based on publish/subscribe communication [Eugster03]. Enterprises typically contain hundreds of applications operating in environments with diverse databases and operating systems, and these applications must be integrated to coordinate the business process. Unfortunately, this is no easy task. Enterprise Integration, according to Brosey et al. (2001), "aims to connect and combine people, processes, systems, and technologies to ensure that the right people and the right processes have the right information and the right resources at the right time" [Brosey01]. Communication between different applications can be achieved using synchronous and asynchronous communication tools. In synchronous communication, both parties involved must be online (for example, a telephone call), whereas in asynchronous communication, only one member needs to be online (email). Middleware is software that helps two applications communicate with one another. Remote Procedure Calls (RPC) and Object Request Brokers (ORB) are two types of synchronous middleware: when they send a request they must wait for an immediate reply, which can decrease an application's performance when there is no need for synchronous communication. Even though asynchronous distributed messaging using message-oriented middleware is widely used in industry, little work has been done on evaluating the performance of the various open-source message-oriented middleware products. The objective of this work was to benchmark and evaluate the performance of three different open-source MOMs in the publish/subscribe and point-to-point domains, together with a functional comparison and a qualitative study from the developer's perspective.
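The publish/subscribe style that MOM enables can be sketched in a few lines: subscribers receive messages through per-subscriber queues, and the publisher never blocks waiting for a reply. This toy broker is our illustration, not one of the benchmarked products:

```python
import queue
import threading

class Broker:
    """Toy message broker: topic-based publish/subscribe with asynchronous
    delivery through per-subscriber queues."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        for q in self.topics.get(topic, []):    # fan-out; publisher never blocks
            q.put(message)

broker = Broker()
inbox = broker.subscribe("orders")

def consumer():
    print("received:", inbox.get())             # blocks until a message arrives

t = threading.Thread(target=consumer)
t.start()
broker.publish("orders", {"id": 42, "qty": 3})  # returns immediately
t.join()
```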
17

Lago, Nelson Posse. "Processamento distribuído de áudio em tempo real." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-05102004-154239/.

Abstract:
Computer systems for real-time multimedia processing require high processing power. Problems that demand high processing power are usually solved by using parallel or distributed computing techniques; however, the combined difficulties of real-time and parallel programming have led the development of real-time multimedia processing on general-purpose computer systems to be based on centralized, single-processor equipment. In several multimedia systems there is also a need for low latency during interaction with the user, which further reinforces the tendency towards single-processor development. In this work, we implemented a mechanism for synchronous, distributed audio processing with low latency on a local area network, making it possible to use a low-cost distributed system for this kind of processing. The main goal is to enable the use of distributed systems for the recording and editing of musical material in home and small studios, bypassing the need for high-cost dedicated hardware. The system we implemented is made of two parts: the first, generic, implemented as a middleware for synchronous, distributed processing of continuous media with low latency; the second, based on the first, geared towards audio processing and compatible with legacy applications through the standard LADSPA interface. We expect that future research and applications with similar needs may use the middleware described here, both for other kinds of audio processing and for the processing of other media, such as video.
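The audio-specific part is organized around LADSPA-style plugins. Here is a sketch of the idea (ours; the real LADSPA interface is a C API over float buffers): each plugin transforms one fixed-size block per synchronous cycle, and a chain of plugins, possibly spread across hosts in the middleware, processes the stream block by block.

```python
class Plugin:
    """Minimal LADSPA-like interface: process one block of samples."""
    def run(self, block):
        raise NotImplementedError

class Gain(Plugin):
    def __init__(self, factor):
        self.factor = factor
    def run(self, block):
        return [s * self.factor for s in block]

class Clip(Plugin):
    def __init__(self, limit):
        self.limit = limit
    def run(self, block):
        return [max(-self.limit, min(self.limit, s)) for s in block]

def process_stream(blocks, chain):
    """Feed fixed-size blocks through a plugin chain; in the middleware the
    chain could span several hosts, one block per synchronous cycle."""
    for block in blocks:
        for plugin in chain:
            block = plugin.run(block)
        yield block

blocks = [[0.1, -0.2, 0.4], [0.9, -0.8, 0.3]]     # two audio buffers
for out in process_stream(blocks, [Gain(2.0), Clip(1.0)]):
    print(out)
```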
18

Shafabakhsh, Benyamin. "Research on Interprocess Communication in Microservices Architecture." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277940.

Abstract:
With the substantial growth of cloud computing over the past decade, microservices has gained significant popularity in the industry as a new architectural pattern. It promises a cloud-native architecture that breaks large applications into a collection of small, independent, and distributed packages. Since microservices-based applications are distributed, one of the key challenges when designing an application is the choice of mechanism by which services communicate with each other. There are several approaches for implementing interprocess communication (IPC) in microservices, and each comes with different advantages and trade-offs. While theoretical and informal comparisons exist between them, this thesis has taken an experimental approach to compare and contrast common forms of IPC communication. In this thesis, IPC methods have been categorized into synchronous and asynchronous categories. The synchronous type consists of the REST API and Google gRPC, while the asynchronous type uses a message broker known as RabbitMQ. Further, a collection of microservices for an e-commerce scenario has been designed and developed using all three IPC methods. A load test was executed against each model to obtain quantitative data related to the performance efficiency and availability of every method. Developing the same set of functionalities using different IPC methods offered qualitative data related to the scalability and complexity of each IPC model. The evaluation of the experiment indicates that, although there is no universal IPC solution that can be applied in all cases, asynchronous IPC patterns should be the preferred option when designing the system. Nevertheless, the findings of this work also suggest there exist scenarios where synchronous patterns can be more suitable.
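The synchronous/asynchronous distinction at the heart of the comparison can be illustrated with a toy measurement (ours; it stands in for REST/gRPC versus RabbitMQ): the synchronous caller waits out the handler's processing time, while the asynchronous caller only pays for an enqueue.

```python
import queue
import threading
import time

def handle(payload):
    time.sleep(0.05)                  # simulated 50 ms of processing
    return payload

def sync_call(payload):
    """Synchronous IPC (REST/gRPC style): the caller blocks for the reply."""
    return handle(payload)

work = queue.Queue()

def async_send(payload):
    """Asynchronous IPC (broker style): enqueue and return immediately."""
    work.put(payload)

def worker():
    while (payload := work.get()) is not None:
        handle(payload)               # processed later, off the caller's path

t = threading.Thread(target=worker)
t.start()

t0 = time.time()
sync_call({"order": 1})
print(f"sync caller waited {(time.time() - t0) * 1000:.0f} ms")   # ~50 ms

t0 = time.time()
async_send({"order": 2})
print(f"async caller waited {(time.time() - t0) * 1000:.2f} ms")  # well under 1 ms
work.put(None)                        # shut the worker down
t.join()
```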
19

Chevalier, Arthur. "Optimisation du placement des licences logicielles dans le Cloud pour un déploiement économique et efficient." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN071.

Abstract:
This thesis is concerned with Software Asset Management (SAM): the management of licenses, usage rights, and compliance with contractual rules. When it comes to proprietary software, these rules are often misinterpreted or completely misunderstood. In exchange for the freedom to license our usage as we see fit, in compliance with the contract, publishers have the right to audit. They can check that the rules are being followed and, when they are not, impose penalties, often financial ones. This can lead to disastrous situations, such as the lawsuit between AB InBev and SAP, where the latter claimed a USD 600 million penalty. The emergence of the Cloud has greatly amplified the problem, because software usage rights were not originally designed for this type of architecture. After an academic and industrial history of Software Asset Management, from its roots to the most recent work on the Cloud and software identification, we examine the licensing methods of major publishers such as Oracle, IBM and SAP before introducing the various problems inherent in SAM. The lack of standardization in metrics and usage rights, and the change of paradigm brought about by the Cloud and, soon, the virtualized network, make the situation even more complicated than it already was. Our research is oriented towards modeling these licenses and metrics in order to abstract away the legal and blurry side of contracts. This abstraction allows us to develop software placement algorithms that ensure contractual rules are respected at all times. The licensing model also allows us to introduce a deployment heuristic that optimizes several criteria at the time of software placement, such as performance, energy and license cost. We then introduce the problems associated with deploying several pieces of software simultaneously while optimizing these same criteria, and we prove the NP-completeness of the associated decision problem. To address these criteria, we present a placement algorithm that approaches the optimal and uses the above heuristic. In parallel, we have developed a SAM tool that uses this research to offer automated and fully generic software management in a Cloud architecture. All this work has been conducted in collaboration with Orange and tested in several proofs of concept before being fully integrated into the SAM tooling.
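A sketch of the kind of multi-criteria placement heuristic described (ours, with an invented per-core license metric and invented weights): score each feasible host on performance, energy and license cost, then pick the best.

```python
def license_cost(host, software):
    """Invented per-core metric (real processor-style metrics in contracts
    are far more intricate): price scales with the host's core count."""
    return software["price_per_core"] * host["cores"]

def score(host, software, w):
    # higher performance is better; energy and license cost are penalties
    return (w["perf"] * host["perf"]
            - w["energy"] * host["watts"]
            - w["license"] * license_cost(host, software))

def place(software, hosts, w):
    feasible = [h for h in hosts if h["free_ram"] >= software["ram"]]
    return max(feasible, key=lambda h: score(h, software, w))

hosts = [
    {"name": "h1", "cores": 16, "perf": 95, "watts": 420, "free_ram": 64},
    {"name": "h2", "cores": 4,  "perf": 60, "watts": 150, "free_ram": 32},
]
dbms = {"name": "dbms", "price_per_core": 50, "ram": 16}
weights = {"perf": 1.0, "energy": 0.1, "license": 0.05}
print(place(dbms, hosts, weights)["name"])   # "h2": fewer cores, cheaper license
```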
20

Faye, Maurice-Djibril. "Déploiement auto-adaptatif d'intergiciel sur plate-forme élastique." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1036/document.

Abstract:
We have studied the means to make a middleware deployment self-adaptive. Our use-case middleware is hierarchical and distributed and can be modeled by a graph: a vertex models a process that can be deployed on a physical or virtual machine of a grid/cloud infrastructure, and an edge models a communication link between two processes. The middleware provides high performance computing services to its users. Once the middleware is deployed on such an infrastructure, how does it adapt to the changes of an elastic environment, where nodes come and go? If the deployment is static, it may be necessary to redo the whole deployment process after a failure, which is a costly operation. A better solution is to make the deployment self-adaptive. We have therefore proposed a rules-based self-stabilizing algorithm to manage a faulty deployment: after the detection of an unstable deployment, caused by transient faults (the joining of new nodes or the deletion of existing nodes or links, which may modify the deployment topology), the system eventually recovers a stable state, in finite time and without external help, simply by executing the algorithm. To evaluate the proposed algorithm, we designed an ad hoc discrete-event simulator. The simulation results show that a deployment subjected to transient faults that make it unstable adapts itself. Before designing the simulator, we first proposed a model of a distributed infrastructure (describing grid/cloud environments), a model describing certain types of hierarchical middleware, and a model describing a running deployment, that is, the mapping between the middleware processes and the hardware on which they run.
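The rules-based self-stabilizing idea can be sketched as a loop that applies local correction rules until none is enabled, the fixed point being a stable deployment. This is our simplified illustration (a parent map instead of the thesis's middleware graph):

```python
def stabilize(deployment, alive):
    """Toy self-stabilizing pass: repeatedly apply local correction rules
    until none is enabled; the fixed point is a stable deployment.
    deployment maps process -> parent process (None for the root)."""
    changed = True
    while changed:
        changed = False
        for node in list(deployment):
            parent = deployment[node]
            if node not in alive:                    # rule 1: host vanished
                del deployment[node]
                changed = True
            elif parent is not None and parent not in alive:
                deployment[node] = "root"            # rule 2: re-attach orphan
                changed = True
    for node in alive - deployment.keys():           # rule 3: adopt new hosts
        deployment[node] = "root"
    return deployment

deployment = {"root": None, "a": "root", "b": "a", "c": "b"}
alive = {"root", "b", "c", "d"}                      # "a" lost, "d" joined
print(stabilize(deployment, alive))
# {'root': None, 'b': 'root', 'c': 'b', 'd': 'root'}
```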
21

Chenthara, Shekha. "Privacy Preservation of Electronic Health Records Using Blockchain Technology: Healthchain." Thesis, 2021. https://vuir.vu.edu.au/42459/.

Abstract:
The right to privacy is the most fundamental right of a citizen in any country. Electronic Health Records (EHRs) in healthcare have faced problems with privacy breaches, insider/outsider attacks and unauthenticated record access in recent years, the most serious being related to the privacy and security of medical data. Ensuring privacy and security while handling patient data is of the utmost importance, as a patient's information should only be released to others with the patient's permission or if it is allowed by law. Electronic health data (EHD) is an emerging health information exchange model that enables healthcare providers and patients to efficiently store and share their private healthcare information from any place and at any time as required. Generally, cloud services provide the infrastructure, reducing the cost of storing, processing and updating information with improved efficiency and quality. However, the privacy of EHRs is a significant hurdle when outsourcing private health data to the cloud, because there is a higher risk of health information being leaked to unauthorized parties. Several existing techniques can analyse the security and privacy issues associated with e-healthcare services, but these methods are designed for single databases, or databases with an authentication centre, and thus cannot adequately protect the data from insider attacks. In fact, storing EHRs in centralized databases increases the security risk footprint and requires trust in a single authority. Therefore, this research study mainly focuses on how to ensure patient privacy and security while sharing sensitive data between the same or different organisations as well as healthcare providers in a distributed environment. This research proposes and implements a permissioned blockchain framework named Healthchain, which maintains the security, privacy, scalability and integrity of e-health data. The blockchain is built on Hyperledger Fabric, a permissioned distributed ledger solution, using Hyperledger Composer, and stores EHRs by utilizing the InterPlanetary File System (IPFS) to build decentralized web applications. Healthchain is a two-pronged solution: (i) an on-chain component implemented on the secure network of Hyperledger Fabric, which utilizes the state database CouchDB, and (ii) an off-chain component that securely stores encrypted data via IPFS. The Healthchain architecture employs Practical Byzantine Fault Tolerance (PBFT) as the distributed network consensus process to determine which block is to be added to the blockchain. Hyperledger Fabric leverages container technology to host the smart contracts, called "chaincode", that comprise the application logic of the system. This research contributes towards scalability in blockchain by storing the data hashes of health records on chain, while the actual data is stored cryptographically off chain in IPFS, the decentralized storage. Moreover, the data stored in IPFS is encrypted using public-key cryptographic algorithms to create robust blockchain solutions for EHD.
This research study develops a privacy-preserving framework with three core contributions to the e-health ecosystem: (i) a privacy-preserving, patient-centric framework, namely Healthchain; (ii) an efficient referral mechanism for the effective sharing of healthcare records; and (iii) the prevention of prescription drug abuse through drug-tracking transactions that employ smart contract functionality, creating a smart healthcare ecosystem. The results demonstrate that the developed prototype protects healthcare records from illegal disclosure, as the model only stores the encrypted hash of records, and is shown to be effective in terms of enhanced data privacy, data security, improved scalability, interoperability and data integrity when accessing and sharing medical records among stakeholders across the Healthchain network. This research develops a security solution against cyber-attacks that exploits the inherent features of the blockchain, thereby contributing to the robustness of healthcare information sharing systems, and demonstrates the potential of blockchain in health IT solutions.
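The on-chain/off-chain split described, where only the hash of an encrypted record goes on chain while the ciphertext lives in content-addressed off-chain storage, can be sketched as follows. This is our illustration: plain dictionaries stand in for IPFS and the Fabric ledger, and the XOR cipher is a dependency-free placeholder for the real public-key encryption, not a secure algorithm.

```python
import hashlib
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Placeholder for the real public-key encryption; XOR keeps the
    sketch dependency-free and is NOT secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ipfs = {}      # stands in for IPFS content-addressed storage (off-chain)
ledger = []    # stands in for the Fabric world state (on-chain)

def store_record(patient_id, record: bytes, key: bytes) -> str:
    ciphertext = xor_cipher(record, key)
    cid = hashlib.sha256(ciphertext).hexdigest()        # content address
    ipfs[cid] = ciphertext                              # encrypted data off-chain
    ledger.append({"patient": patient_id, "cid": cid})  # only the hash on-chain
    return cid

def fetch_record(cid: str, key: bytes) -> bytes:
    ciphertext = ipfs[cid]
    assert hashlib.sha256(ciphertext).hexdigest() == cid   # integrity check
    return xor_cipher(ciphertext, key)

key = os.urandom(16)
cid = store_record("p-001", b"BP 120/80; metformin 500 mg", key)
print(fetch_record(cid, key).decode())
```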
22

Barnett, Tristan Darrell. "A distributed affective cognitive architecture for cooperative multi-agent learning systems." Thesis, 2012. http://hdl.handle.net/10210/8055.

Abstract:
M.Sc. (Computer Science)
General machine intelligence represents the principal ambition of artificial intelligence research: creating machines that readily adapt to their environment. Machine learning represents the driving force of adaptation in artificial intelligence. However, two pertinent dilemmas emerge from research into machine learning. Firstly, how do intelligent agents learn effectively in real-world environments, in which randomness, perceptual aliasing and dynamics complicate learning algorithms? Secondly, how can intelligent agents exchange knowledge and learn from one another without introducing mathematical anomalies that might impede the effectiveness of the applied learning algorithms? In a robotic search and rescue scenario, for example, the control system of each robot must learn from its surroundings in a fast-changing and unpredictable environment while at the same time sharing its learned information with others. In well-understood problems, an intelligent agent that is capable of solving task-specific problems will suffice. The challenge behind complex environments comes from the fact that agents must solve arbitrary problems (Kaelbling et al. 1996; Ryan 2008). General problem-solving abilities are hence necessary for intelligent agents in complex environments, such as robotic applications. Although specialized machine learning techniques and cognitive hierarchical planning and learning may be a suitable solution for general problem-solving, such techniques have not been extensively explored in the context of cooperative multi-agent learning. In particular, to the knowledge of the author, no cognitive architecture has been designed which can support knowledge-sharing or self-organisation in cooperative multi-agent learning systems. It is therefore social learning in real-world applications that forms the basis of the research presented in this dissertation. This research aims to develop a distributed cognitive architecture for cooperative multi-agent learning in complex environments. The proposed Multi-agent Learning through Distributed Adaptive Contextualization (MALDAC) Architecture comprises a self-organising multi-agent system to address the communication constraints that the physical hardware imposes on the system. The individual agents of the system implement their own cognitive learning architecture. The proposed Context-based Adaptive Empathy-deliberation Agent (CAEDA) Architecture investigates the applicability of emotion, 'consciousness', embodiment and sociability in cognitive architecture design. Cloud computing is proposed as a method of service delivery for the learning system, in which the MALDAC Architecture governs multiple CAEDA-based agents. An implementation of the proposed architecture is applied to a simulated multi-robot system to best emulate real-world complexities. Analyses indicate favourable results for the cooperative learning capabilities of the proposed MALDAC and CAEDA architectures.
23

Nienaber, R. C. (Rita Charlotte). "A technology reference model for client/server software development." Diss., 1996. http://hdl.handle.net/10500/15648.

Abstract:
In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development of and increase in the use of personal computers and data communication networks are supporting or, in many cases, replacing the traditional computer mainstay of corporations. The client/server model incorporates mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of the client/server system is given. The different methodologies, tools and techniques that can be used are reviewed, as well as client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development.
Computing
M. Sc. (Information Systems)
24

Aschenbrenner, Andreas. "Reference Framework for Distributed Repositories." Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B3CF-2.

25

Qadah, Thamir. "High-performant, Replicated, Queue-oriented Transaction Processing Systems on Modern Computing Infrastructures." Thesis, 2021.

Abstract:
With the shifting landscape of computing hardware architectures and the emergence of new computing environments (e.g., large main-memory systems, hundreds of CPUs, distributed and virtualized cloud-based resources), state-of-the-art designs of transaction processing systems that rely on conventional wisdom suffer from lost performance optimization opportunities. This dissertation challenges conventional wisdom to rethink the design and implementation of transaction processing systems for modern computing environments.

We start by tackling the vertical hardware scaling challenge and propose a deterministic approach to transaction processing on emerging multi-socket, many-core, shared-memory architectures to harness the unprecedented available parallelism. Our proposed priority-based, queue-oriented transaction processing architecture eliminates the transaction contention footprint and uses speculative execution to improve the throughput of centralized deterministic transaction processing systems. We build QueCC and demonstrate up to two orders of magnitude better performance over the state-of-the-art.

We further tackle the horizontal scaling challenge and propose a distributed queue-oriented transaction processing engine that relies on queue-oriented communication to eliminate the traditional overhead of commitment protocols for multi-partition transactions. We build Q-Store, and demonstrate up to 22x improvement in system throughput over the state-of-the-art deterministic transaction processing systems.

Finally, we propose a generalized framework for designing distributed and replicated deterministic transaction processing systems. We introduce the concept of speculative replication to hide the latency overhead of replication. We prototype the speculative replication protocol in QR-Store and perform an extensive experimental evaluation using standard benchmarks. We show that QR-Store can achieve a throughput of 1.9 million replicated transactions per second in under 200 milliseconds, with a replication overhead of 8%-25% compared to non-replicated configurations.
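The queue-oriented, deterministic idea behind QueCC can be shown in miniature (our sketch, not the actual engine): a planner assigns each transaction a global priority from its batch order and splits it into per-partition operation queues; executors then drain each queue independently in priority order, so every replica reaches the same state without locks.

```python
import heapq
from collections import defaultdict

store = {"x": 0, "y": 0}
partition_of = {"x": 0, "y": 1}
queues = defaultdict(list)              # partition -> heap of (priority, op)

def plan(batch):
    """Planner: batch order doubles as a deterministic global priority."""
    for prio, txn in enumerate(batch):
        for key, delta in txn:          # split the transaction across partitions
            heapq.heappush(queues[partition_of[key]], (prio, key, delta))

def execute():
    """Executors: each partition's queue drains independently, in priority
    order, so no locks are needed and every replica reaches the same state."""
    for partition in sorted(queues):    # each queue could run on its own core
        while queues[partition]:
            _, key, delta = heapq.heappop(queues[partition])
            store[key] += delta

plan([[("x", +5), ("y", +5)],           # T0 touches both partitions
      [("x", -2)],                      # T1
      [("y", +7)]])                     # T2
execute()
print(store)                            # {'x': 3, 'y': 12}, deterministically
```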
26

Pileththuwasan, Gallege Lahiru Sandakith. "Design, development and experimentation of a discovery service with multi-level matching." Thesis, 2013. http://hdl.handle.net/1805/3695.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The contribution of this thesis focuses on addressing the challenges of improving and integrating the UniFrame Discovery Service (URDS) and Multi-level Matching (MLM) concepts. The objective was to enhance both URDS and MLM and to address the need for a comprehensive discovery service that goes beyond simple attribute-based matching. The thesis presents a detailed discussion of the development of an enhanced version of URDS with MLM (proURDS). After the implementation of proURDS, the thesis describes experiments with different deployments of URDS components and different configurations of MLM. The experiments and analysis were carried out using MLM contracts produced by proURDS. proURDS refers to a public dataset called the QWS dataset, which includes actual information about software components (i.e., web services) harvested from the Internet. proURDS implements the different matching operations as independent operators at each level of matching (i.e., General, Syntactic, Semantic, Synchronization, and QoS). Finally, a case study was carried out with the deployed proURDS. The case study addresses real-world component discovery requirements from the earth science domain, using contracts collected from public portals that provide geographical and weather-related data.
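The multi-level matching idea can be sketched as a cascade of filters, from cheap attribute checks down to QoS checks, where a candidate must pass every level. The sketch below is our illustration covering four of the five levels (field names are invented):

```python
def general(svc, q):   return q["category"] == svc["category"]
def syntactic(svc, q): return q["operation"] in svc["operations"]
def semantic(svc, q):  return q["concept"] in svc["concepts"]
def qos(svc, q):       return svc["availability"] >= q["min_availability"]

LEVELS = [general, syntactic, semantic, qos]

def discover(query, registry):
    hits = registry
    for level in LEVELS:                 # narrow the candidate set level by level
        hits = [s for s in hits if level(s, query)]
    return hits

registry = [
    {"name": "WeatherA", "category": "weather", "operations": {"getForecast"},
     "concepts": {"Temperature"}, "availability": 0.99},
    {"name": "WeatherB", "category": "weather", "operations": {"getForecast"},
     "concepts": {"Temperature"}, "availability": 0.80},
]
query = {"category": "weather", "operation": "getForecast",
         "concept": "Temperature", "min_availability": 0.95}
print([s["name"] for s in discover(query, registry)])   # ['WeatherA']
```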
27

Lall, Manoj. "Selection of mobile agent systems based on mobility, communication and security aspects." Diss., 2005. http://hdl.handle.net/10500/2397.

Abstract:
The availability of numerous mobile agent systems, each with its own strengths and weaknesses, poses a problem when deciding on a particular mobile agent system. In this dissertation, factors based on the mobility, communication and security of mobile agent systems are presented and used as a means to address this problem. To facilitate the selection process, a grouping scheme for agent systems is proposed. Based on this grouping scheme, mobile agent systems with common properties are grouped together and analyzed against the above-mentioned factors. In addition, an application was developed using the Aglet Software Development Toolkit to demonstrate certain features of agent mobility, communication and security.
Theoretical Computing
M. Sc. (Computer Science)
28

Schenk, Franz. "An Active Domain Node Architecture for the Semantic Web." Doctoral thesis, 2008. http://hdl.handle.net/11858/00-1735-0000-0006-B3B7-3.
