
Dissertations / Theses on the topic 'Distributed components'

Consult the top 50 dissertations / theses for your research on the topic 'Distributed components.'


1

Martins, Helder Ricardo Laximi. "Distributed replicated macro-components." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10766.

Full text
Abstract:
Dissertação para obtenção do Grau de Mestre em Engenharia Informática
In recent years, several approaches have been proposed for improving application performance on multi-core machines. Yet exploiting the power of multi-core processors remains complex for most programmers. A Macro-component is an abstraction that tackles this problem by allowing programs to exploit multi-core machines without being changed. A Macro-component encapsulates several diverse implementations of the same specification, which makes it possible to obtain the best performance of each operation and/or to distribute load among replicas, while keeping contention and synchronization overhead to a minimum. In real-world applications, relying on a single server to provide a service limits fault-tolerance and scalability. To address this problem, it is common to replicate services on multiple machines. This work addresses the problem of supporting such replication while exploiting the power of multi-core machines: we propose to replicate Macro-components across a cluster of machines. This dissertation presents the design of a middleware solution for achieving that goal. Using the implemented replication middleware, we successfully deployed a replicated Macro-component of in-memory databases, which are known to have scalability problems on multi-core machines. The proposed solution combines multi-master replication across nodes with primary-secondary replication within a node, where several instances of the database run on a single machine. This approach deals with the lack of scalability of databases on multi-core systems while minimizing communication costs, which ultimately results in an overall improvement of the service. Results show that the proposed solution scales as the number of nodes and clients increases, and that it is able to take advantage of multi-core architectures.
RepComp project (PTDC/EIAEIA/108963/2008)
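The replication scheme the abstract describes, several diverse implementations behind a single facade with writes applied everywhere and reads spread over replicas, can be illustrated with a minimal sketch. All class and method names here are invented for illustration, not taken from the thesis:

```python
import threading

class MacroComponent:
    """Toy Macro-component: one specification, several diverse
    implementations (replicas) behind a single facade. Writes are
    applied to every replica; reads are load-balanced across them."""

    def __init__(self, replicas):
        self._replicas = replicas
        self._next = 0
        self._lock = threading.Lock()

    def write(self, op, *args):
        # Keep replicas consistent: apply every mutation everywhere.
        for r in self._replicas:
            getattr(r, op)(*args)

    def read(self, op, *args):
        # Distribute load: round-robin over replicas.
        with self._lock:
            r = self._replicas[self._next]
            self._next = (self._next + 1) % len(self._replicas)
        return getattr(r, op)(*args)

# Two diverse implementations of the same "set" specification.
class ListSet:
    def __init__(self): self._items = []
    def add(self, x):
        if x not in self._items: self._items.append(x)
    def contains(self, x): return x in self._items

class HashSet:
    def __init__(self): self._items = set()
    def add(self, x): self._items.add(x)
    def contains(self, x): return x in self._items

mc = MacroComponent([ListSet(), HashSet()])
mc.write("add", 42)
print(mc.read("contains", 42), mc.read("contains", 7))  # -> True False
```

The two consecutive reads hit different replicas, yet both reflect the write, which is the invariant the facade must preserve.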
2

Pohl, Christoph. "Adaptive Caching of Distributed Components." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1117701363347-79965.

Full text
Abstract:
Die Zugriffslokalität referenzierter Daten ist eine wichtige Eigenschaft verteilter Anwendungen. Lokales Zwischenspeichern abgefragter entfernter Daten (Caching) wird vielfach bei der Entwicklung solcher Anwendungen eingesetzt, um diese Eigenschaft auszunutzen. Anschliessende Zugriffe auf diese Daten können so beschleunigt werden, indem sie aus dem lokalen Zwischenspeicher bedient werden. Gegenwärtige Middleware-Architekturen bieten dem Anwendungsprogrammierer jedoch kaum Unterstützung für diesen nicht-funktionalen Aspekt. Die vorliegende Arbeit versucht deshalb, Caching als separaten, konfigurierbaren Middleware-Dienst auszulagern. Durch die Einbindung in den Softwareentwicklungsprozess wird die frühzeitige Modellierung und spätere Wiederverwendung caching-spezifischer Metadaten gewährleistet. Zur Laufzeit kann sich das entwickelte System außerdem bezüglich der Cachebarkeit von Daten adaptiv an geändertes Nutzungsverhalten anpassen
Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand thus tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to caching access characteristics with respect to data cacheability properties, thus healing misconfigurations and optimizing itself to an appropriate configuration. Speculative prefetching of data probably queried in the immediate future complements the presented approach.
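The idea of caching as a separate, runtime-adaptive middleware service might be sketched as follows: an item whose update/read ratio grows too high is treated as non-cacheable and always fetched remotely. The names and the threshold policy are illustrative assumptions, not the thesis's actual design:

```python
class AdaptiveCache:
    """Toy caching service. Items updated too often relative to how
    often they are read are deemed non-cacheable, so the cache adapts
    to changing access behavior at runtime."""

    def __init__(self, fetch, threshold=0.5):
        self._fetch = fetch          # remote accessor
        self._store = {}
        self._reads = {}
        self._writes = {}
        self._threshold = threshold

    def _cacheable(self, key):
        r = self._reads.get(key, 0)
        w = self._writes.get(key, 0)
        return r == 0 or w / (r + w) < self._threshold

    def get(self, key):
        self._reads[key] = self._reads.get(key, 0) + 1
        if self._cacheable(key) and key in self._store:
            return self._store[key]      # served from the local store
        value = self._fetch(key)         # remote access
        self._store[key] = value
        return value

    def invalidate(self, key):
        # An update at the source makes the cached copy stale.
        self._writes[key] = self._writes.get(key, 0) + 1
        self._store.pop(key, None)

calls = []
def fetch(key):
    calls.append(key)
    return key.upper()

cache = AdaptiveCache(fetch)
cache.get("a"); cache.get("a")   # second read is a local hit
print(len(calls))                # -> 1 remote access, not 2
```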
3

Mitchell, Robert Scott. "Dynamic configuration of distributed multimedia components." Thesis, Queen Mary, University of London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392369.

Full text
4

Leisten, Oliver Paul. "On design, performance, and characterisation of distributed duplexer components." Thesis, University of Kent, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328527.

Full text
5

Schönefeld, Marc. "Refactoring of security antipatterns in distributed Java components." Bamberg Univ. of Bamberg Press, 2010. http://d-nb.info/1003208398/34.

Full text
6

Dardha, Ornela <1985>. "Type Systems for Distributed Programs: Components and Sessions." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6441/.

Full text
Abstract:
Modern software systems, in particular distributed ones, are everywhere around us and are at the basis of our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance. Their complexity makes the verification of such properties a very challenging task. It is natural to expect that these systems are reliable and above all usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. ii) In order to be useful, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components which collaborate together to achieve a common task. The aim of the Ph.D. was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties, but also more sophisticated ones like deadlock or livelock freedom in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side: we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. ii) On the communication side: we study advanced safety properties related to communication in complex distributed systems like deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand their expressive power.
7

AUGUSTO, CARLOS EDUARDO LARA. "AN INFRASTRUCTURE FOR DISTRIBUTED EXECUTION OF SOFTWARE COMPONENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13078@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
Infra-estruturas de suporte a sistemas baseados em componentes de software tipicamente incluem facilidades para instalação, execução e configuração dinâmica das dependências dos componentes de um sistema. Tais facilidades são especialmente importantes quando os componentes do sistema executam em um ambiente distribuído. Neste trabalho, investigamos alguns dos problemas que precisam ser tratados por infra-estruturas de execução de sistemas distribuídos baseados em componentes de software. Para realizar tal investigação, desenvolvemos um conjunto de serviços para o middleware OpenBus, com o intuito de prover facilidades para a execução de aplicações distribuídas. Para ilustrar e avaliar o uso dos serviços desenvolvidos, apresentamos alguns exemplos onde a infra-estrutura é utilizada para executar cenários de teste de uma aplicação distribuída.
Support infrastructures for component-based software systems usually include facilities for installation, execution and dynamic configuration of the system components' dependencies. Such facilities are especially important when those system components execute in a distributed environment. In this work, we investigate some of the problems that must be handled by runtime infrastructures for distributed systems based on software components. To perform such investigation, we developed a set of services for the OpenBus middleware, aiming to provide facilities for the execution of distributed applications. To illustrate and evaluate the use of the developed services, we present some examples where the infrastructure is used for executing test scenarios of a distributed application.
8

ANDREA, EDUARDO FONSECA DE. "MONITORING THE EXECUTION ENVIRONMENT OF DISTRIBUTED SOFTWARE COMPONENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14323@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Sistemas de componentes têm como característica possibilitar a construção de aplicações através da composição de artefatos de software disponíveis. Interações podem ocorrer entre diversos componentes que podem estar distribuídos em diversas máquinas. À medida que aplicações distribuídas aumentam de tamanho, as interações existentes entre os diversos nós que a compõem vão se tornando mais complexas. Assim, torna-se importante para essas aplicações a existência de uma forma de monitorar as interações entre os componentes, com o intuito de identificar falhas e gargalos de processamento e comunicação no sistema. Este trabalho apresenta uma arquitetura capaz de oferecer mecanismos extensíveis para coleta de informações do ambiente de execução desses sistemas, e das interações realizadas entre os seus componentes. São implementadas formas de publicação dessas informações obtidas e testes comparativos para quantificar como a arquitetura desenvolvida onera o desempenho da aplicação.
Component-based systems are characterized by the construction of applications through the composition of available software artifacts. Interactions may occur between different components that can be distributed through several machines. As distributed applications increase in size, the interactions between the various nodes that comprise them become more complex. Therefore it is important for distributed component systems to monitor the interactions between components in order to identify failures and bottlenecks in processing and communication. This dissertation presents an architecture capable of offering extensible mechanisms for monitoring the execution environment of distributed components, and the interactions among those components. It also presents a flexible mechanism for publication of the collected information, and some comparative tests to measure the performance penalty imposed by the infrastructure on the application.
9

Andersson, Richard. "Evaluation of the Security of Components in Distributed Information Systems." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2091.

Full text
Abstract:

This thesis suggests a security evaluation framework for distributed information systems, comprising a system modelling technique and an evaluation method. The framework is flexible and divides the problem space into smaller, more manageable subtasks with the means to focus on specific problems, aspects or system scopes. The information system is modelled by dividing it into increasingly smaller parts, evaluating the separate parts, and then building up the system “bottom up” by combining the components. Evaluated components are stored as reusable instances in a component library. The evaluation method focuses on technological components and is based on the Security Functional Requirements (SFR) of the Common Criteria. The method consists of the following steps: (1) define several security values with different aspects, to get variable evaluations; (2) change and establish the set of SFR to fit the thesis; (3) interpret evaluated security functions, and possibly translate them to CIA or PDR; (4) map characteristics from system components to SFR; and (5) combine evaluated components into an evaluated subsystem. An ontology is used to structure, in a versatile and dynamic way, the taxonomy and relations of the system components, the security functions, the security values and the risk handling. It is also a step towards defining a common terminology for IT security.

10

Rivera, Marcela. "Reconfiguration and life-cycle distributed components : asynchrony, coherence and verification." Nice, 2011. http://www.theses.fr/2011NICE4125.

Full text
Abstract:
En programmation orientée à composants, mais particulièrement dans des environnements distribués, les composants ont besoin d’être adaptatifs. Une majeure partie de cette adaptation repose sur la reconfiguration dynamique. Dans cette thèse, nous introduisons une nouvelle approche pour la reconfiguration des modèles de composants distribués, avec l’objectif de faciliter le processus de reconfiguration et d’assurer la consistance et la cohérence du système. Avant d’exécuter une reconfiguration, il est nécessaire que les composants soient dans un état cohérent et stable, afin d’éviter des incohérences dans le processus de reconfiguration. Pour ceci, nous concevons un algorithme pour l’arrêt d’un composant d’une manière sécurisée et atteignant un état stable. Cela a été réalisé en mettant en œuvre un mécanisme de marquage et d’interception qui permet d’ajouter des informations aux requêtes et de manipuler leurs flux, afin de décider lesquelles doivent être servies avant d’arrêter le composant. Nous avons conçu un ensemble de primitives de reconfiguration de haut niveau qui permettent de réaliser des opérations de reconfiguration plus complexes. Nous fournissons un contrôleur supplémentaire à notre modèle de composant qui implémente ces primitives. Pour le déclenchement des tâches de reconfiguration, nous avons étendu le langage FScript pour lui permettre d’exécuter des reconfigurations distribuées, en déléguant certaines actions à des composants. Pour ceci, nous avons défini un contrôleur additionnel à l’intérieur de la membrane des composants. Nous avons testé notre approche sur deux applications basées sur GCM/ProActive : CoCoME et TurnTable
For component programming, and even more so in distributed and Grid environments, components need to be highly adaptive. A great part of this adaptiveness relies on dynamic reconfiguration of component systems. We introduce a new approach for reconfiguring distributed components, with the main objective of facilitating the reconfiguration process and ensuring the consistency and coherence of the system. First, before executing a reconfiguration, the components must be in a coherent and quiescent state, so as to avoid inconsistency in the reconfiguration process. To achieve this, we design an algorithm for stopping a component in a safe manner and reaching this quiescent state. It is realized by a tagging and interception mechanism that adds information to requests and manipulates their flow in order to decide which of them must be served before stopping the component. Next, we designed a set of high-level reconfiguration primitives to achieve more complex reconfiguration operations. These primitives include: add, remove, duplicate, replace, bind, and unbind. We provide an additional controller in our component model which implements these primitives. Additionally, for triggering reconfiguration tasks, we extended the FScript language with the capability of executing distributed reconfiguration actions by delegating some actions to specific components. To achieve this objective, we defined an additional controller inside the membrane of the components. We tested our implementation on two GCM/ProActive based applications: the CoCoME example and the TurnTable example.
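The tagging idea behind the safe-stop algorithm, serve only the requests marked before the stop point so that the component reaches a quiescent state, might be sketched like this. It is a single-process toy with invented names, not the thesis's distributed implementation:

```python
from collections import deque

class StoppableComponent:
    """Toy sketch of reaching a quiescent state before reconfiguration:
    each incoming request is tagged with a sequence number, and after
    stop() only requests tagged before the stop mark are still served."""

    def __init__(self):
        self._seq = 0
        self._stop_mark = None
        self._queue = deque()

    def submit(self, payload):
        # Interception point: tag the request on its way in.
        self._seq += 1
        self._queue.append((self._seq, payload))
        return self._seq

    def stop(self):
        # Requests tagged after this point must not be served.
        self._stop_mark = self._seq

    def drain(self):
        """Serve every request that must complete before stopping."""
        limit = self._seq if self._stop_mark is None else self._stop_mark
        served = []
        while self._queue and self._queue[0][0] <= limit:
            _tag, payload = self._queue.popleft()
            served.append(payload)
        return served

c = StoppableComponent()
c.submit("req-1"); c.submit("req-2")
c.stop()
c.submit("req-3")        # arrives after the stop mark
print(c.drain())         # -> ['req-1', 'req-2']
```

After `drain()` the component holds no in-flight pre-stop requests, which is the quiescence condition a reconfiguration would wait for.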
11

Rho, Sangig. "A distributed hard real-time Java system for high mobility components." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1350.

Full text
Abstract:
In this work we propose a methodology for providing real-time capabilities to component-based, on-the-fly reconfigurable, distributed systems. In such systems, software components migrate across computational resources at run-time to allow applications to adapt to changes in user requirements or to external events. We describe how we achieve run-time reconfiguration in distributed Java applications by appropriately migrating servers. Guaranteed-rate schedulers at the servers provide the necessary temporal protection and so simplify remote method invocation management. We describe how we manage overhead and resource utilization by controlling the parameters of the server schedulers. According to our measurements, this methodology provides real-time capability to component-based reconfigurable distributed systems in an efficient and effective way. In addition, we propose a new resource discovery protocol, REALTOR, which is based on a combination of pull-based and push-based resource information dissemination. REALTOR has been designed for real-time component-based distributed applications in very dynamic or adverse environments. REALTOR supports survivability and information assurance by allowing the migration of components to safe locations under emergencies such as external attack, malfunction, or lack of resources. Simulation studies show that under normal and heavy load conditions REALTOR remains very effective in finding available resources, and does so with a reasonably low communication overhead. REALTOR 1) effectively locates resources under highly dynamic conditions, 2) has an overhead that is system-size independent, and 3) works well in highly adverse environments. We evaluate the effectiveness of a REALTOR implementation as part of Agile Objects, an infrastructure for real-time capable, highly mobile Java components.
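The pull/push combination behind a protocol like REALTOR can be illustrated with a toy directory that answers from pushed advertisements while they are fresh and falls back to pulling (probing nodes directly) otherwise. This is a sketch under invented names and a simplistic staleness rule, not REALTOR's actual protocol:

```python
class ResourceDirectory:
    """Toy hybrid resource discovery: nodes push periodic
    advertisements of their free capacity; a requester pulls
    (probes nodes) only when the pushed view is stale."""

    def __init__(self, stale_after=3):
        self._ads = {}              # node -> (free_cpu, time advertised)
        self._stale_after = stale_after

    def push(self, node, free_cpu, now):
        self._ads[node] = (free_cpu, now)

    def find(self, need_cpu, now, probe):
        # Cheap path: use pushed information while it is fresh.
        for node, (cpu, t) in self._ads.items():
            if now - t <= self._stale_after and cpu >= need_cpu:
                return node
        # Expensive path: pull fresh state from each known node.
        for node in self._ads:
            if probe(node) >= need_cpu:
                return node
        return None

d = ResourceDirectory()
d.push("n1", free_cpu=2, now=0)
d.push("n2", free_cpu=8, now=0)
# Fresh adverts: the request is answered with no pull traffic at all.
print(d.find(need_cpu=4, now=1, probe=lambda n: 0))  # -> n2
```

Pushing keeps steady-state overhead low and roughly independent of request rate, while pulling bounds how wrong a stale advertisement can make the answer.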
12

Schönefeld, Marc [Verfasser]. "Refactoring of security antipatterns in distributed Java components / von Marc Schönefeld." Bamberg : Univ. of Bamberg Press, 2010. http://d-nb.info/1003208398/34.

Full text
13

Abed, Nagy Youssef. "Physical dynamic simulation of shipboard power system components in a distributed computational environment." FIU Digital Commons, 2007. http://digitalcommons.fiu.edu/etd/1100.

Full text
Abstract:
Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware-in-the-loop (HIL). Real-time simulation can be achieved by using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies are used to examine the behavior of shipboard power systems under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage, and the harmonic behavior of components. Different experiments were conducted to validate the developed models, and the simulation and experiment results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for fault detection in shipboard power systems, given the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from the current signals of shipboard power systems was developed for harmonic and fault diagnosis studies.
This modeling methodology can be utilized to evaluate and predict the future behavior of the NPS components in the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before integrating it into the system.
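As a generic illustration of wavelet-based feature extraction for fault diagnosis (not the methodology developed in the thesis), a one-level Haar transform splits a signal into a smooth part and a detail part; a fault-like step change shows up as a large detail coefficient:

```python
import math

def haar_level1(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail), each scaled
    by 1/sqrt(2). Large detail coefficients flag abrupt changes
    such as fault transients in a current signal."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

# A flat "current" with a step change (a crude fault) at sample 3.
signal = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0]
_, detail = haar_level1(signal)
spike = max(range(len(detail)), key=lambda i: abs(detail[i]))
print(spike)  # -> 1, the pair that straddles the discontinuity
```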
14

Sharp, Mariana L. "Static analyses for java in the presence of distributed components and large libraries." The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1186064822.

Full text
15

Panagou, Soterios. "Development of the components of a low cost, distributed facial virtual conferencing system." Thesis, Rhodes University, 2000. http://hdl.handle.net/10962/d1006490.

Full text
Abstract:
This thesis investigates the development of a low cost, component based facial virtual conferencing system. The design is decomposed into an encoding phase and a decoding phase, which communicate with each other via a network connection. The encoding phase is composed of three components: model acquisition (which handles avatar generation), pose estimation and expression analysis. Audio is not considered part of the encoding and decoding process, and as such is not evaluated. The model acquisition component is implemented using a visual hull reconstruction algorithm that is able to reconstruct real-world objects using only sets of images of the object as input. The object to be reconstructed is assumed to lie in a bounding volume of voxels. The reconstruction process involves the following stages: - Space carving for basic shape extraction; - Isosurface extraction to remove voxels not part of the surface of the reconstruction; - Mesh connection to generate a closed, connected polyhedral mesh; - Texture generation. Texturing is achieved by Gouraud shading the reconstruction with a vertex colour map; - Mesh decimation to simplify the object. The original algorithm has complexity O(n), but suffers from an inability to reconstruct concave surfaces that do not form part of the visual hull of the object. A novel extension to this algorithm based on Normalised Cross Correlation (NCC) is proposed to overcome this problem. An extension to speed up traditional NCC evaluations is proposed which reduces the NCC search space from a 2D search problem down to a single evaluation. Pose estimation and expression analysis are performed by tracking six fiducial points on the face of a subject. A tracking algorithm is developed that uses Normalised Cross Correlation to facilitate robust tracking that is invariant to changing lighting conditions, rotations and scaling. 
Pose estimation involves the recovery of head position and orientation through tracking of the triangle formed by the subject's eyebrows and nose tip. A rule-based evaluation of points tracked around the subject's mouth forms the basis of the expression analysis. A user-assisted feedback loop and a caching mechanism are used to overcome tracking errors due to fast motion or occlusions. The NCC tracker is shown to achieve a tracking performance of 10 fps when tracking the six fiducial points. The decoding phase is divided into three tasks: avatar movement, expression generation and expression management. Avatar movement is implemented using the base VR system. Expression generation is facilitated using a Vertex Interpolation Deformation method. A weighting system is proposed for expression management; its function is to gradually transform from one expression to the next. The use of the vertex interpolation method allows real-time deformation of the avatar representation, achieving 16 fps when applied to a model consisting of 7500 vertices. An Expression Parameter Lookup Table (EPLT) facilitates an independent mapping between the two phases. It defines a list of generic expressions known to the system and associates an Expression ID with each one. For each generic expression, it relates the expression analysis rules for any subject to the expression generation parameters for any avatar model. As a result, facial expression replication between any subject and avatar combination can be performed by transferring only the Expression ID from the encoder application to the decoder application. The ideas developed in the thesis are demonstrated in an implementation using the CoRgi Virtual Reality system. It is shown that the virtual-conferencing application based on this design requires a bandwidth of only 2 Kbps.
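Normalised cross correlation, the similarity measure the tracker above relies on, can be computed for two equally sized windows as follows. This is a 1-D sketch (the thesis applies it to 2-D image patches), and it shows why NCC tolerates changing lighting: a constant brightness offset leaves the score unchanged:

```python
import math

def ncc(a, b):
    """Normalised cross-correlation of two equally sized windows.
    Invariant to a constant intensity offset and to linear scaling,
    which is what makes it robust to changing lighting conditions."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

template = [10, 20, 30, 20, 10]
brighter = [110, 120, 130, 120, 110]   # same pattern under brighter lighting
print(round(ncc(template, brighter), 3))  # -> 1.0
```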
16

Ruggiero, Eric John. "Modeling and Control of SPIDER Satellite Components." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28489.

Full text
Abstract:
Space satellite technology is heading in the direction of ultra-large, lightweight structures deployable on orbit. Minimal structural mass translates into minimal launch costs, while increased satellite bus size translates into significant bandwidth improvement for both radar and optical applications. However, from a structural standpoint, these two goals are in direct conflict with one another, as large, flexible structures possess terrible dynamic properties and minimal effective bandwidth. Since the next level of research will require active dynamic analysis, vibration control, and shape morphing control of these satellites, a better-suited name for this technology is Super Precise Intelligent Deployables for Engineered Reconnaissance, or SPIDER. Unlike wisps of cobweb caught in the wind, SPIDER technology will dictate the functionality and versatility of the satellite much like an arachnid weaving its own web. In the present work, a rigorous mathematical framework based on distributed parameter system theory is presented in describing the dynamics of augmented membranous structures. In particular, Euler-Bernoulli beam theory and thin plate theory are used to describe the integration of piezoelectric material with membranes. In both the one and two dimensional problems, experimental validation is provided to support the developed models. Next, the linear quadratic regulator (LQR) control problem is defined from a distributed parameter systems approach, and from this formulation, the functional gains of the respective system are gleaned. The functional gains provide an intelligent mapping when designing an observer-based control system as they pinpoint important sensory information (both type and spatial location) within the structure. Further, an experimental investigation into the dynamics of membranes stretched over shallow, air-filled cavities is presented. 
The presence of the air-filled cavity in close proximity to the membrane creates a distributed spring and damping effect, yielding desirable system dynamics from an optical or radar application perspective. Finally, in conjunction with the use of a pressurized cavity with a membrane optic, a novel basis is presented for describing incoming wavefront aberrations. The new basis, coined the clamped Zernike polynomials, provides a mapping for distributed spatial actuation of a membrane mirror that is amenable to the clamped boundary conditions of the mechanical lens. Consequently, based on the work presented here and being carried out in cooperation with the Air Force Research Laboratory Directed Energy Directorate (AFRL/DE), it is envisioned that a 1 m adaptive membrane optic is on the verge of becoming a reality.
Ph. D.
17

SOUZA, PAULO ROBERTO FRANCA DE. "A TOOL FOR REBUILDING THE SEQUENCE OF INTERACTIONS BETWEEN COMPONENTS OF A DISTRIBUTED SYSTEM." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=18473@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
Sistemas distribuídos frequentemente apresentam um comportamento em tempo de execução diferente do esperado pelo programador. A análise estática, somente, não é suficiente para a compreensão do comportamento e para o diagnóstico de problemas nesses sistemas, em razão da sua natureza não determinística, reflexo de características inerentes como concorrência, latência na comunicação e falha parcial. Sendo assim, torna-se necessário um melhor entendimento das interações entre os diferentes componentes de software que formam o sistema, para que o desenvolvedor possa ter uma melhor visão do comportamento do sistema durante sua execução. Neste trabalho, apresentamos uma ferramenta que faz a reconstrução das interações entre os componentes de uma aplicação distribuída, oferecendo uma visão das linhas de execução distribuídas e permitindo o acompanhamento das sequências de chamadas remotas e a análise das relações de causalidade. Essa ferramenta também faz a persistência do histórico dessas interações ao longo do tempo, correlacionando-as à arquitetura do sistema e aos dados de desempenho. Assim, a ferramenta proposta auxilia o desenvolvedor a melhor compreender cenários que envolvem comportamentos indevidos do sistema e a restringir o escopo da análise do erro, facilitando a busca de uma solução.
Distributed systems often present a runtime behavior different from what the programmer expects. Static analysis alone is not enough to understand runtime behavior and to diagnose errors. This difficulty is caused by the non-deterministic nature of distributed systems, which stems from inherent characteristics such as concurrency, communication latency and partial failure. Therefore, a better view of the interactions between the system's software components is necessary in order to understand its runtime behavior. In this work we present a tool that rebuilds the interactions among distributed components, presents a view of distributed threads and remote call sequences, and allows the analysis of causality relationships. Our tool also stores the interactions over time and correlates them with the system architecture and with performance data. The proposed tool helps the developer to better understand scenarios involving unexpected system behavior and to restrict the scope of error analysis, making the search for a solution easier.
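The reconstruction of remote call sequences can be illustrated by rebuilding a call tree from logged events that carry a call id and the caller's id, in the spirit of correlation ids propagated across remote calls. The event format and component names below are assumptions for illustration, not the tool's actual log schema:

```python
def rebuild_interactions(events):
    """Toy sketch of rebuilding a distributed call tree from logged
    events. Each event is (call_id, parent_id, component); parent_id
    is None for the entry point. Indentation reflects causality."""
    children = {}
    root = None
    for call_id, parent_id, component in events:
        children.setdefault(parent_id, []).append((call_id, component))
        if parent_id is None:
            root = (call_id, component)

    def render(node, depth=0):
        call_id, component = node
        lines = ["  " * depth + component]
        for child in children.get(call_id, []):
            lines += render(child, depth + 1)
        return lines

    return "\n".join(render(root))

log = [
    (1, None, "Frontend"),
    (2, 1, "OrderService"),
    (3, 2, "InventoryService"),
    (4, 1, "PaymentService"),
]
print(rebuild_interactions(log))
```

The printed tree shows `InventoryService` nested under `OrderService`, i.e. the causal chain of remote calls, which is exactly the view that helps narrow the scope of error analysis.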
APA, Harvard, Vancouver, ISO, and other styles
18

Devamitta Perera, Muditha Virangika. "Robustness of normal theory inference when random effects are not normally distributed." Kansas State University, 2011. http://hdl.handle.net/2097/8786.

Full text
Abstract:
Master of Science
Department of Statistics
Paul I. Nelson
The variance of a response in a one-way random effects model can be expressed as the sum of the variability among and within treatment levels. Conventional methods of statistical analysis for these models are based on the assumption of normality of both sources of variation. Since this assumption is not always satisfied and can be difficult to check, it is important to explore the performance of normal-based inference when normality does not hold. This report uses simulation to explore and assess the robustness of the F-test for the presence of an among-treatment variance component and the normal theory confidence interval for the intra-class correlation coefficient under several non-normal distributions. It was found that the power function of the F-test is robust for moderately heavy-tailed random error distributions, but for very heavy-tailed random error distributions power is relatively low, even for a large number of treatments. Coverage rates of the confidence interval for the intra-class correlation coefficient are far from nominal for very heavy-tailed, non-normal random effect distributions.
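The simulation study described above can be sketched in a few lines. The following is an illustrative reconstruction, not code from the report: it estimates the rejection rate of the one-way ANOVA F-test when the random treatment effects are drawn from a heavy-tailed distribution (here a variance-standardized t(3), one plausible choice of "non-normal" distribution), using an empirical null critical value so the example stays dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_statistic(y):
    """ANOVA F statistic for a k x n balanced one-way layout."""
    k, n = y.shape
    msa = n * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (k - 1)
    mse = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    return msa / mse

def power(k, n, sigma_a, effect_draw, crit, reps=2000):
    """Monte Carlo rejection rate when treatment effects come from effect_draw."""
    hits = 0
    for _ in range(reps):
        a = sigma_a * effect_draw(k)                  # random treatment effects
        y = a[:, None] + rng.standard_normal((k, n))  # errors ~ N(0, 1)
        hits += f_statistic(y) > crit
    return hits / reps

k, n = 10, 5
# empirical 5% critical value from the null model (sigma_a = 0, all normal)
null_f = np.sort([f_statistic(rng.standard_normal((k, n))) for _ in range(4000)])
crit = null_f[int(0.95 * len(null_f))]

normal = lambda k: rng.standard_normal(k)
heavy = lambda k: rng.standard_t(df=3, size=k) / np.sqrt(3.0)  # t(3), unit variance
print(power(k, n, 1.0, normal, crit), power(k, n, 1.0, heavy, crit))
```

Comparing the two printed rates across several effect distributions and values of k and n is exactly the kind of robustness comparison the report performs.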
APA, Harvard, Vancouver, ISO, and other styles
19

Xie, Changwen. "Methods, tools and components paradigms for the design and building of distributed machine control systems." Thesis, De Montfort University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ferreira, Cláudio Luís Pereira. "Maestro: um middleware para suporte a aplicações distribuídas baseadas em componentes de software." Universidade de São Paulo, 2001. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-05102004-172528/.

Full text
Abstract:
It is the job of a middleware to organize the activities of its different component elements so that they operate in synchrony with the execution of an application. The result of this work should be transparent to whoever interacts with the system, who perceives it as a single cohesive, synchronized block orchestrated by a master agent. This is the subject of this work: the specification of a middleware and its internal components, indicating their main characteristics and functionality, as well as its operation during the execution of a distributed application. The work also takes into account the new environments in which distributed applications are inserted, such as the diversity of devices managed by users, the need for constant system change, the use of new technologies in software development, and the need to define open systems. For the specification of this middleware, the ISO/IEC Open Distributed Processing (ODP) reference model was used, which allows a system to be viewed from five distinct viewpoints. Finally, the system is specified using software component technology, and its use is illustrated in a commercial application.
APA, Harvard, Vancouver, ISO, and other styles
21

Thomason, Stuart. "An architecture to support the configuration and evolution of software components in a distributed runtime environment." Thesis, Keele University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Victor, Sundar K. "Negotiation Between Distributed Agents in a Concurrent Engineering System." Digital WPI, 1999. https://digitalcommons.wpi.edu/etd-theses/1083.

Full text
Abstract:
"Current approaches to design are often serial and iterative in nature, leading to poor quality of design and reduced productivity. Complex artifacts are designed by groups of experts, each with his/her own area of expertise. Hence design can be modeled as a cooperative multi-agent problem-solving task, where different agents possess different expertise and evaluation criteria. New techniques for Concurrent Design, which emphasize parallel interaction among the design experts involved, are needed. During this concurrent design process, disagreements may arise among the expert agents as the design is being produced. The process by which these differences are resolved to arrive at a common set of design decisions is called negotiation. The main issues associated with the negotiation process are whether negotiation should be centralized or distributed, the language of communication, and the negotiation strategy. The goals of this thesis are to study the work done by various researchers in this field, to carry out a comparative analysis of their work, and to design and implement an approach to handle negotiation between expert agents in an existing Concurrent Engineering Design System."
APA, Harvard, Vancouver, ISO, and other styles
23

Mahalik, Nitaigour Premchand. "A study on field-level components validation and life cycle data acquisition for distributed machine control systems." Thesis, De Montfort University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ndlangisa, Mboneli. "DRUBIS : a distributed face-identification experimentation framework - design, implementation and performance issues." Thesis, Rhodes University, 2004. http://eprints.ru.ac.za/93/1/MNdlangisa-MSc.pdf.

Full text
Abstract:
We report on the design, implementation and performance issues of the DRUBIS (Distributed Rhodes University Biometric Identification System) experimentation framework, using the Principal Component Analysis (PCA) face-recognition approach as a case study. DRUBIS is a flexible experimentation framework, distributed over a number of easily pluggable and swappable modules, allowing for the easy construction of prototype systems. Web services are the logical means of distributing DRUBIS components, and a number of prototype applications have been implemented from this framework. Several popular PCA face-recognition experiments were used to evaluate the framework, and recognition performance measures were extracted from them. In particular, we use the framework for a more in-depth study of the suitability of the DFFS (Difference From Face Space) metric as a means for image classification in the area of race and gender determination.
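The DFFS metric mentioned above has a compact definition: it is the reconstruction error of an image after projection onto the PCA "face space". The sketch below is an illustration of that idea on synthetic data, not DRUBIS code; the data generation and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_face_space(faces, n_components):
    """PCA 'face space': the mean face plus the top principal directions."""
    mean = faces.mean(axis=0)
    # rows of vt are principal directions (the 'eigenfaces')
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def dffs(x, mean, eigenfaces):
    """Difference From Face Space: norm of the part of x the subspace misses."""
    c = x - mean
    proj = eigenfaces.T @ (eigenfaces @ c)  # projection onto face space
    return np.linalg.norm(c - proj)

# toy data: 50 'faces' lying near a 5-dimensional subspace of R^100
basis = rng.standard_normal((5, 100))
faces = rng.standard_normal((50, 5)) @ basis + 0.01 * rng.standard_normal((50, 100))
mean, eigenfaces = fit_face_space(faces, 5)

face_like = rng.standard_normal(5) @ basis  # lies in the face subspace
non_face = 3.0 * rng.standard_normal(100)   # arbitrary vector
print(dffs(face_like, mean, eigenfaces), dffs(non_face, mean, eigenfaces))
```

A small DFFS says the input is well explained by the face subspace, a large one says it is not, which is why the thesis can use it as a classification signal.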
APA, Harvard, Vancouver, ISO, and other styles
25

Min, Sung-Hwan. "Automated Construction of Macromodels from Frequency Data for Simulation of Distributed Interconnect Networks." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5209.

Full text
Abstract:
As the complexity of interconnects and packages increases and the rise and fall time of the signal decreases, the electromagnetic effects of distributed passive devices are becoming an important factor in determining the performance of gigahertz systems. The electromagnetic behavior extracted using an electromagnetic simulation or from measurements is available as frequency dependent data. This information can be represented as a black box called a macromodel, which captures the behavior of the passive structure at the input/output ports. In this dissertation, the macromodels have been categorized as scalable, passive and broadband macromodels. The scalable macromodels for building design libraries of passive devices have been constructed using multidimensional rational functions, orthogonal polynomials and selective sampling. The passive macromodels for time-domain simulation have been constructed using filter theory and multiport passivity formulae. The broadband macromodels for high-speed simulation have been constructed using band division, selector, subband reordering, subband dilation and pole replacement. An automated construction method has been developed. The construction time of the multiport macromodel has been reduced. A method for reducing the order of the macromodel has been developed. The efficiency of the methods was demonstrated through embedded passive devices, known transfer functions and distributed interconnect networks.
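A macromodel of the kind described above is, at its simplest, a rational function fitted to frequency-domain samples. As a minimal sketch (using Levy's classic linearized least-squares formulation rather than the dissertation's own construction methods, and with an invented one-pole test response), fitting N(s)/D(s) to sampled data looks like this:

```python
import numpy as np

def levy_fit(freqs, H, num_order, den_order):
    """Linearized rational fit: choose coefficients of N(s)/D(s), with the
    constant denominator coefficient fixed to 1, so that
    N(jw) - H(jw) * D(jw) is small in the least-squares sense."""
    s = 1j * freqs
    cols = [s ** k for k in range(num_order + 1)]            # numerator terms
    cols += [-H * s ** k for k in range(1, den_order + 1)]   # denominator terms
    A = np.column_stack(cols)
    # stack real and imaginary parts so lstsq works over the reals
    Ar = np.vstack([A.real, A.imag])
    br = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(Ar, br, rcond=None)
    a = x[:num_order + 1]                       # numerator, ascending powers
    b = np.concatenate([[1.0], x[num_order + 1:]])  # denominator, ascending
    return a, b

def evaluate(a, b, freqs):
    s = 1j * freqs
    return np.polyval(a[::-1], s) / np.polyval(b[::-1], s)

# synthetic "measured" data from a known one-pole response 1 / (1 + s/2)
w = np.linspace(0.1, 10.0, 200)
H = 1.0 / (1.0 + 1j * w / 2.0)
a, b = levy_fit(w, H, num_order=0, den_order=1)
print(np.max(np.abs(evaluate(a, b, w) - H)))  # worst-case fit error
```

Real macromodeling flows add the concerns the abstract lists, such as enforcing passivity of the fitted model and splitting broadband data into subbands, which this toy fit ignores.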
APA, Harvard, Vancouver, ISO, and other styles
26

Förster, Stefan. "A Formal Framework for Modelling Component Extension and Layers in Distributed Embedded Systems." Doctoral thesis, Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700638.

Full text
Abstract:
This volume of the scientific series Eingebettete, selbstorganisierende Systeme (Embedded Self-Organized Systems) is devoted to the design of distributed embedded systems. Fields of application for such systems include mission and control systems of airplanes (aerospace applications) and, with an increasing level of networking, the automotive area, where the highest safety standards must be met and maximum availability guaranteed. Mr Förster addresses these problems at an early stage of the design process, namely the specification phase. Implementation variants such as hardware and software are distinguished, as are system components such as computation components and communication components. For the overarching specification, he develops a formal framework based on the pi-calculus that supports a uniform modelling of subsystems across the different design phases. The main focus of his research lies on extensions of system specifications: subcomponents can be modified or substituted, and the overall specification can be checked automatically for correctness and consistency. Mr Förster proves the correctness of his approach through clearly defined extension relations and a formally verifiable embedding in the pi-calculus formalism, and a detailed example demonstrates the practical relevance of this research. I am glad that Mr Förster is publishing his important research in this scientific series, and I hope you will enjoy reading it and benefit from it.
APA, Harvard, Vancouver, ISO, and other styles
27

Kalaichelvan, Niranjanan. "Distributed Traffic Load Scheduler based on TITANSim for System Test of a Home Subscriber Server (HSS)." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-50066.

Full text
Abstract:
System testing is a significant part of the development life cycle of a telecommunication network node. Tools such as TITANSim are used to develop the test framework upon which a load test application is created, and these tools need to be highly efficient and optimized to reduce the cost of system testing. This thesis project created a load test application based on the distributed scheduling architecture of TITANSim, whereby multiple users can be simulated by a single test component. This distributed scheduling system greatly reduces the number of operating system processes involved, thus reducing the memory consumption of the load test application; higher loads can therefore be simulated with limited hardware resources. The load test application currently used for system testing of the HSS is based on the central scheduling architecture of TITANSim, a function test concept in which every user is simulated by a single test component. In system testing, several thousand users are simulated, so a load application based on the central scheduling architecture uses thousands of test components, leading to high memory consumption in the test system. In this architecture, scheduling is centralized, which causes considerable communication overhead within the test system, as thousands of test components communicate with a master scheduling component during test execution. In the distributed scheduling architecture, by contrast, the scheduling task is performed locally by each test component, so there is no such communication overhead and the test system is highly efficient. The traffic flows of the simulated users are described using finite state machines (FSMs), specified in configuration files that are read by the test system at run time.
Implementing traffic cases with the distributed scheduling architecture is therefore simpler and faster, as no (TTCN-3) coding or compilation is required. The HSS is the only node (within Ericsson) whose system testing is performed using the central scheduling architecture of TITANSim; the other users (nodes) of TITANSim use the distributed scheduling architecture for its apparent benefits. This makes the thesis project significant for the HSS: when a decision is made to adopt the distributed scheduling architecture for system testing of the HSS, the load application created in this project can be used as a model, or extended, for migrating the HSS test modules from the central to the distributed scheduling architecture. By creating this load application we gained significant knowledge of the TITANSim framework, most importantly of the modifications to TITANSim required to create a distributed-scheduling-based load application for the HSS. The load application created for this project was used to (system) test the HSS by generating load on real system test hardware, and the results were compared analytically with test results from the existing load application (which is based on the central scheduling architecture). The analysis showed that the load application based on the distributed scheduling architecture is efficient, uses fewer test system resources, and is capable of scaling up load generation capacity.
APA, Harvard, Vancouver, ISO, and other styles
28

Penczek, Frank. "Static guarantees for coordinated components : a statically typed composition model for stream-processing networks." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/9046.

Full text
Abstract:
Does your program do what it is supposed to be doing? Without running the program providing an answer to this question is much harder if the language does not support static type checking. Of course, even if compile-time checks are in place only certain errors will be detected: compilers can only second-guess the programmer’s intention. But, type based techniques go a long way in assisting programmers to detect errors in their computations earlier on. The question if a program behaves correctly is even harder to answer if the program consists of several parts that execute concurrently and need to communicate with each other. Compilers of standard programming languages are typically unable to infer information about how the parts of a concurrent program interact with each other, especially where explicit threading or message passing techniques are used. Hence, correctness guarantees are often conspicuously absent. Concurrency management in an application is a complex problem. However, it is largely orthogonal to the actual computational functionality that a program realises. Because of this orthogonality, the problem can be considered in isolation. The largest possible separation between concurrency and functionality is achieved if a dedicated language is used for concurrency management, i.e. an additional program manages the concurrent execution and interaction of the computational tasks of the original program. Such an approach does not only help programmers to focus on the core functionality and on the exploitation of concurrency independently, it also allows for a specialised analysis mechanism geared towards concurrency-related properties. This dissertation shows how an approach that completely decouples coordination from computation is a very supportive substrate for inferring static guarantees of the correctness of concurrent programs. 
Programs are described as streaming networks connecting independent components that implement the computations of the program, where the network describes the dependencies and interactions between components. A coordination program only requires an abstract notion of computation inside the components and may therefore be used as a generic and reusable design pattern for coordination. A type-based inference and checking mechanism analyses such streaming networks and provides comprehensive guarantees of the consistency and behaviour of coordination programs. Concrete implementations of components are deliberately left out of the scope of coordination programs: Components may be implemented in an external language, for example C, to provide the desired computational functionality. Based on this separation, a concise semantic framework allows for step-wise interpretation of coordination programs without requiring concrete implementations of their components. The framework also provides clear guidance for the implementation of the language. One such implementation is presented and hands-on examples demonstrate how the language is used in practice.
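The separation the abstract argues for, a coordination layer that sees only streams and treats component internals as opaque, can be illustrated with a tiny sketch. This is an invented Python illustration of the general idea, not the thesis's coordination language, and the component names are made up for the example.

```python
from typing import Callable, Iterator

# a "component" maps an input stream to an output stream; a "network" is a
# composition of components, with no shared state between them
Component = Callable[[Iterator[int]], Iterator[int]]

def deduplicate(xs: Iterator[int]) -> Iterator[int]:
    """Drop consecutive repeats from the stream."""
    last = object()  # sentinel that compares unequal to any item
    for x in xs:
        if x != last:
            yield x
            last = x

def doubler(xs: Iterator[int]) -> Iterator[int]:
    for x in xs:
        yield 2 * x

def serial(*stages: Component) -> Component:
    """Serial combinator: wire stages output-to-input. The coordination layer
    only manipulates streams; it never looks inside a stage."""
    def network(xs: Iterator[int]) -> Iterator[int]:
        for stage in stages:
            xs = stage(xs)
        return xs
    return network

pipeline = serial(deduplicate, doubler)
print(list(pipeline(iter([1, 1, 2, 3, 3]))))  # [2, 4, 6]
```

Because `serial` depends only on the stream-level type of each stage, a type checker can verify that the network is well-formed without ever inspecting stage implementations, which is the kind of static guarantee the dissertation develops for its coordination language.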
APA, Harvard, Vancouver, ISO, and other styles
29

Förster, Stefan. "A Formal Framework for Modelling Component Extension and Layers in Distributed Embedded Systems." TUDpress, 2006. https://monarch.qucosa.de/id/qucosa%3A18707.

Full text
Abstract:
This volume of the scientific series Eingebettete, selbstorganisierende Systeme (Embedded Self-Organized Systems) is devoted to the design of distributed embedded systems. Fields of application for such systems include mission and control systems of airplanes (aerospace applications) and, with an increasing level of networking, the automotive area, where the highest safety standards must be met and maximum availability guaranteed. Mr Förster addresses these problems at an early stage of the design process, namely the specification phase. Implementation variants such as hardware and software are distinguished, as are system components such as computation components and communication components. For the overarching specification, he develops a formal framework based on the pi-calculus that supports a uniform modelling of subsystems across the different design phases. The main focus of his research lies on extensions of system specifications: subcomponents can be modified or substituted, and the overall specification can be checked automatically for correctness and consistency. Mr Förster proves the correctness of his approach through clearly defined extension relations and a formally verifiable embedding in the pi-calculus formalism, and a detailed example demonstrates the practical relevance of this research. I am glad that Mr Förster is publishing his important research in this scientific series, and I hope you will enjoy reading it and benefit from it.
APA, Harvard, Vancouver, ISO, and other styles
30

Arad, Cosmin Ionel. "Programming Model and Protocols for Reconfigurable Distributed Systems." Doctoral thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122311.

Full text
Abstract:
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions.
We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics.


APA, Harvard, Vancouver, ISO, and other styles
31

Arad, Cosmin. "Programming Model and Protocols for Reconfigurable Distributed Systems." Doctoral thesis, SICS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-24202.

Full text
Abstract:
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. 
We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics.
Kompics
CATS
REST
APA, Harvard, Vancouver, ISO, and other styles
32

Lopes, Adilson Barboza. "Um framework para configuração e gerenciamento de recursos e componentes em sistemas multimidia distribuidos abertos." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261006.

Full text
Abstract:
Advisor: Mauricio Ferreira Magalhães
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Distributed multimedia applications involve a diversity of hardware devices, operating systems and communication technologies. In order to fulfil the requirements of such applications, their constituent components need to interact with each other, as well as to consider QoS issues related to devices and transmission media. In this context, this thesis presents Cosmos, a component-based framework for the configuration and management of resources in open, distributed multimedia systems. As a proof of concept, the framework was used in the design of the AdapTV middleware, a middleware for interactive television which explores the major components of Cosmos, including: the model to describe and represent applications independently of language aspects; the interconnection model that allows communication between components in heterogeneous and distributed multimedia environments; and the QoS management model that provides support for adaptation in the middleware player, triggered by changes in QoS and user requirements. These models have been explored in the implementation of a prototype, which includes the AdapTV middleware and a distributed application example that captures, transmits and presents a video stream. In order to provide a generic and reusable approach, and to establish configuration agreements between component requirements and platform features, the framework explores the concept of properties.
Doctorate
Computer Engineering
Doctor of Electrical Engineering
33

Pryce, Nathaniel Graham. "Component interaction in distributed systems." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313411.

Full text
34

Khan, Izhar Ahmed. "A Distributed Context Simulation Component." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-32576.

Full text
Abstract:
Mobile devices with access to large numbers of sensors and to the internet drive the development of intelligent applications towards a new shape of ubiquitous applications. In order to create such applications, we need to be able to run simulations for testing and deployment. Current simulators do not permit this, since they are centralized and their information is not shared globally; therefore we cannot use them to test applications built on distributed sensor information. I selected Siafu as the simulator component. In the next step, the simulator was customized according to the requirements of the project. There are different possibilities for achieving this task, but a simple GUI was built to control the simulator. The end result is a complete architecture for simulating context-aware scenarios. The implementation was tested by running the simulator and dumping the context data into the PGRID overlay. For future work, implementing proximity estimation between the agents would be a good idea and could be interesting as well.
35

Quinton, Sophie. "Design, vérification et implémentation de systèmes à composants." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00685854.

Full text
Abstract:
In this thesis we studied the design, verification and implementation of component-based systems. We were particularly interested in formalisms expressing complex interactions, in which connectors serve not only for data transfer but also for synchronization between components. 1. DESIGN AND VERIFICATION. Design by contract is a widespread approach for developing systems when several teams work in parallel. Contracts represent constraints on implementations that are preserved throughout the development and life cycle of a system; they can therefore also serve in the verification phase of such a system. Our goal is not to propose a new specification formalism, but rather to define a minimal set of properties that a contract-based theory must satisfy to support certain kinds of reasoning. In doing so, we seek to separate explicitly the properties specific to each specification formalism from the generic proof rules. We strove to provide definitions general enough to express a wide range of specification languages, in particular those in which interactions are complex, such as Reo or BIP. For the latter, reasoning about the structure of the system is essential, and for this reason our contracts have a structural part. We show how a rule for proving dominance without composing contracts follows from the property called circular reasoning, and how this property can be weakened by using several refinement relations. Our work was motivated by the component languages HRC L0 and L1 defined in the SPEEDS project.
2. IMPLEMENTATION. Synthesizing a distributed controller enforcing a global constraint on a system is, in general, an undecidable problem. Decidability can be obtained by reducing concurrency: we propose a method that synchronizes processes temporarily. In the work of Basu et al., distributed control is obtained by precomputing, through model checking, the knowledge of each process, which reflects, in a given local state, all the possible configurations of the other processes. Then, at run time, the local controller of a process decides whether an action can be executed without violating the global constraint. We likewise use model-checking techniques to precompute a minimal set of synchronization points at which several processes share their knowledge during brief coordination periods. After each synchronization, the processes involved can again progress independently of one another until another synchronization takes place. One motivation for this work is the distributed implementation of BIP systems.
36

Backström, Anders, and Mats Ågesjö. "Design and implementation of a 5GHz radio front-end module." Thesis, Linköping University, Department of Science and Technology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2635.

Full text
Abstract:

The overall goal of this diploma work is to produce a design of a 5 GHz radio front-end using Agilent Advanced Design System (ADS) and then build a working prototype, using this prototype to determine whether RF circuits at 5 GHz can be successfully produced using distributed components on a laminate substrate.

The design process for the radio front-end consists of two stages. In the first stage the distributed components are designed and simulated, and in the second stage all components are merged into a PCB. This PCB is then manufactured and assembled. All measurements on the radio front-end and the test components are made using a network analyser, in order to measure the S-parameters.

This diploma work has resulted in a functional design and prototype, which has proved that designing systems for 5 GHz on a laminate substrate is possible but by no means trivial.

37

Georgiadis, Ioannis. "Self-organising distributed component software architectures." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396255.

Full text
38

Born, Marc, and Olaf Kath. "CoRE - komponentenorientierte Entwicklung offener verteilter Softwaresysteme im Telekommunikationskontext." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I; Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2002. http://dx.doi.org/10.18452/14744.

Full text
Abstract:
The telecommunication industry and its suppliers form a software-intensive domain, and a high percentage of the software is developed by the telecommunication enterprises themselves. A main contributing factor to this situation is the set of specific requirements on telecommunication software systems which cannot be fulfilled by standard off-the-shelf products. These requirements result from particular properties of those software systems, e.g. distributed development and execution of their components, heterogeneity of execution and development environments, and complex non-functional characteristics like scalability, reliability, security and manageability. The development of telecommunication software systems is a complex process that is currently not realized satisfactorily. Current research topics in this arena are software development processes and development techniques, as well as tools which support the creation and integration of reusable software components (componentware). The goal of this thesis is to support the industrial development and manufacturing of open distributed telecommunication software systems. For that purpose, the development technique of object-oriented modelling is combined with the implementation technique of component architectures. The available modelling concepts are precisely defined as a metamodel. Based on that metamodel, graphical and textual notations for the presentation of models are developed. To enable a smooth transition from object-oriented models into executable components, a component architecture based on CORBA was also developed as part of the thesis. Besides interaction support for distributed components, this component architecture covers deployment and execution aspects. Again on the basis of the metamodel, code generation rules are defined which make it possible to automate the transition from models to components. The development techniques described in this thesis have been implemented as a tool chain, which has been successfully used in several software development projects.
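The metamodel-driven derivation described above can be pictured with a deliberately tiny sketch (illustrative only; the thesis targets CORBA components and its own metamodel, not this dictionary stand-in): a well-defined generation rule maps each model element to a code skeleton.

```python
# Toy model-to-code rule: an interface description (a dictionary standing
# in for a metamodel instance) is mapped by a fixed rule to a component
# skeleton, echoing the automatic derivation of components from design
# models. All names here are invented for illustration.

model = {"component": "Logger", "operations": ["log", "flush"]}

def generate_skeleton(model):
    lines = [f"class {model['component']}:"]
    for op in model["operations"]:
        lines.append(f"    def {op}(self, *args):")
        lines.append(f"        raise NotImplementedError('{op}')")
    return "\n".join(lines)

print(generate_skeleton(model))
```

Because the rule is total over the metamodel, every well-formed design model yields a compilable skeleton, which is what makes the transition from models to components automatable.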
39

Brohede, Marcus. "Component Decomposition of Distributed Real-Time Systems." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-407.

Full text
Abstract:

Development of distributed real-time applications, in contrast to best-effort applications, has traditionally been a slow process due to the lack of available standards and the fact that no commercial off-the-shelf (COTS) distributed object computing (DOC) middleware supporting real-time requirements has been available to speed up the development process without sacrificing quality.

Standards and DOC middlewares that address key requirements of real-time systems (predictability and efficiency) are now emerging, and therefore new possibilities such as component decomposition of real-time systems arise.

A number of component-decomposed architectures of the distributed active real-time database system DeeDS are described and discussed, along with a discussion of the most suitable DOC middleware. DeeDS is suitable for this project since it supports hard real-time requirements and is distributed. The DOC middlewares addressed in this project are OMG's Real-Time CORBA, Sun's Enterprise JavaBeans, and Microsoft's COM/DCOM. The discussion of the most suitable DOC middleware focuses on real-time requirements, platform support, and whether implementations of these middlewares are available.

40

Triki, Ahlem. "Distributed Implementations of Timed Component-based Systems." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM014/document.

Full text
Abstract:
Correct distributed implementation of real-time systems has always been a challenging task. The coordination of components executing on a distributed platform has to be ensured by complex communication protocols that take their timing constraints into account. In this thesis, we propose a rigorous design flow starting from a high-level model of an application software in BIP (Behavior, Interaction, Priority) and leading to a distributed implementation. The design flow involves the use of model transformations while preserving the functional properties of the original BIP models. A BIP model consists of a set of components synchronizing through multiparty interactions and priorities. Our method transforms high-level BIP models into Send/Receive models that operate using asynchronous message passing; the obtained models are directly implementable on a given platform. We present three solutions for obtaining Send/Receive BIP models. In the first solution, we propose Send/Receive models with a centralized scheduler that implements interactions and priorities. Atomic components of the original models are transformed into Send/Receive components that communicate with the centralized scheduler via Send/Receive interactions. The centralized scheduler schedules interactions under conditions defined by partial-state models, which are high-level representations of the parallel execution of BIP models. In the second solution, we decentralize the scheduler. The obtained Send/Receive models are structured in three layers: (1) Send/Receive atomic components, (2) a set of schedulers, each one handling a subset of interactions, and (3) a set of components implementing a conflict resolution protocol. With the above solutions, we assume that the obtained Send/Receive models are implemented on platforms that provide fast communications (e.g. multi-process platforms) to achieve perfect synchronization between components, because the schedulers are modeled such that interaction scheduling corresponds exactly to execution in the components.
In the third solution, we propose Send/Receive models that execute correctly even if communications are not fast enough. This solution is based on the fact that schedulers plan interaction executions and notify components in advance. In order to plan the interactions correctly, we show that the schedulers are required to observe additional components, beyond the ones participating in the interactions. We also present a method to optimize the number of observed components, based on the use of static analysis techniques. From a given Send/Receive model, we generate a distributed implementation where Send/Receive interactions are implemented by TCP sockets. Experimental results on non-trivial examples and case studies show the efficiency of our design flow.
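The first (centralized-scheduler) solution can be caricatured in a few lines, under heavy assumptions: the names (`Engine`, `offer`) are invented, and priorities, conflicts, and timing are ignored. Components send "offers" announcing enabled ports, and the engine fires a multiparty interaction once all of its participants are ready.

```python
# Toy centralized engine for multiparty interactions implemented over
# message passing. Each interaction names the set of ports that must
# all be offered before it can fire; firing consumes the offers.

class Engine:
    def __init__(self, interactions):
        self.interactions = interactions  # name -> set of required ports
        self.ready = set()                # ports currently offered
        self.fired = []                   # log of executed interactions

    def offer(self, port):
        """A component's Send message: 'this port is enabled'."""
        self.ready.add(port)
        self.schedule()

    def schedule(self):
        for name, ports in self.interactions.items():
            if ports <= self.ready:   # all participants are ready
                self.ready -= ports   # consume the offers
                self.fired.append(name)  # notify participants (Receive)

engine = Engine({"sync": {"a.out", "b.in"}})
engine.offer("a.out")   # nothing fires yet: b.in is missing
engine.offer("b.in")    # both participants ready, interaction fires
print(engine.fired)     # ['sync']
```

The decentralized solutions in the thesis split this single engine into several schedulers plus a conflict resolution protocol; the offer/notify message pattern stays the same.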
41

Batista, Oureste Elias. "Sistema inteligente baseado em decomposição por componentes ortogonais e inferência fuzzy para localização de faltas de alta impedância em sistemas de distribuição de energia elétrica com geração distribuída." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-30052016-103546/.

Full text
Abstract:
Modern electric power systems present numerous operational challenges. Fault location is a major challenge in power distribution systems due to their extensive branching, the presence of long single-phase laterals, and dynamic loads. The influence of the fault impedance is one of the largest obstacles, significantly affecting the use of traditional location methods, since the magnitude of the fault current is similar to that of the load current. In this sense, this thesis aimed to develop an intelligent system for locating high-impedance faults, based on the application of the orthogonal-component decomposition technique in the pre-processing of the variables and on fuzzy inference to interpret the nonlinearities of power distribution systems with the presence of distributed generation. The data for training the intelligent system were obtained from computer simulations of an actual feeder, considering a nonlinear model of the high-impedance fault. The resulting fuzzy system was able to estimate distances to faults with an average absolute error of less than 500 m and a maximum absolute error on the order of 1.5 km, on a feeder about 18 km long. These results correspond, for most occurrences, to an accuracy within the ±10% range.
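The fuzzy inference step can be illustrated with a hedged sketch (the membership functions, the single input feature, and the rule base below are invented for illustration, not taken from the thesis): a Mamdani-style system maps a normalized fault-signature feature to a distance estimate on an 18 km feeder by weighting each rule's output distance by its activation.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_distance_km(feature):
    """Toy Mamdani-style inference: each rule's activation weights its
    output distance; the weighted average is the crisp estimate.
    Bounds and distances are illustrative assumptions only."""
    rules = [
        (tri(feature, 0.0, 0.2, 0.5), 3.0),   # strong signature -> near fault
        (tri(feature, 0.2, 0.5, 0.8), 9.0),   # medium -> mid-feeder
        (tri(feature, 0.5, 0.8, 1.0), 15.0),  # weak -> far end
    ]
    num = sum(w * d for w, d in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else None         # None: no rule activated

print(estimate_distance_km(0.5))  # 9.0
```

A real system would use several input features (the orthogonal components of voltage and current) and a rule base tuned on simulation data, but the defuzzification mechanics are the same.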
42

Mulugeta, Dinku Mesfin. "QoS Contract Negotiation in Distributed Component-Based Software." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2007. http://nbn-resolving.de/urn:nbn:de:swb:14-1185279327735-87696.

Full text
Abstract:
Currently, several mature and commercial component models (e.g. EJB, .NET, COM+) exist on the market. These technologies were designed largely for applications with business-oriented non-functional requirements such as data persistence, confidentiality, and transactional support; they provide only limited support for the development of components and applications with non-functional properties (NFPs) like QoS (e.g. throughput, response time). The integration of QoS into a component infrastructure requires, among other things, support for the specification, negotiation, and adaptation of components' QoS contracts. This thesis focuses on contract negotiation. For applications in which the consideration of NFPs is essential (e.g. video-on-demand, e-commerce), a component-based solution demands the appropriate composition of the QoS contracts specified at the different ports of the collaborating components. The ports must be properly connected so that the QoS level required by one is matched by the QoS level provided by the other. Generally, QoS contracts of components depend on run-time resources (e.g. network bandwidth, CPU time) or quality attributes to be established dynamically, and they are usually specified in multiple QoS profiles. QoS contract negotiation enables the selection of appropriate concrete QoS contracts between collaborating components. In our approach, the component containers perform the contract negotiation at run time. This thesis addresses the QoS contract negotiation problem by first modelling it as a constraint satisfaction optimization problem (CSOP). As a basis for this modelling, the provided and required QoS as well as the resource demand are specified at the component level. The notion of utility is applied to select a good solution according to some negotiation goal (e.g. the user's satisfaction). We argue that performing QoS contract negotiation in multiple phases simplifies the negotiation process and makes it more efficient.
Based on this classification, the thesis presents heuristic algorithms that comprise coarse-grained and fine-grained negotiations for collaborating components deployed on distributed nodes in the following scenarios: (i) single-client, single-server; (ii) multiple clients; and (iii) multi-tier scenarios. To motivate the problem as well as to validate the proposed approach, we examined three componentized distributed applications: (i) video streaming, (ii) stock quotes, and (iii) billing (to evaluate certain security properties). An experiment was conducted to specify the QoS contracts of the collaborating components in one of the applications we studied. In a run-time system that implements our algorithm, we simulated different behaviors concerning: (i) the user's QoS requirements and preferences; (ii) resource availability conditions concerning the client, the server, and the network bandwidth; and (iii) the specified QoS profiles of the collaborating components. Under various conditions, the outcome of the negotiation confirms the claim we made with regard to obtaining a good solution.
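The CSOP formulation can be sketched as follows (the profile values, bandwidth budget, and min-based utility function are invented for illustration; the thesis uses heuristic multi-phase negotiation rather than the exhaustive search shown here): each component publishes QoS profiles pairing a quality level with a resource demand, and the negotiation picks one profile per component maximizing total utility under a shared resource budget.

```python
from itertools import product

# Invented example profiles: (quality level, bandwidth demand) per component.
profiles = {
    "video_server": [(1, 10), (2, 25), (3, 45)],
    "video_client": [(1, 5), (2, 15), (3, 30)],
}

def negotiate(profiles, budget, utility=lambda qualities: min(qualities)):
    """Exhaustive CSOP search: enumerate one profile per component,
    discard resource-infeasible combinations, keep the best utility."""
    names = list(profiles)
    best, best_u = None, -1
    for choice in product(*(profiles[n] for n in names)):
        demand = sum(r for _, r in choice)
        if demand > budget:
            continue  # constraint: total resource demand within budget
        u = utility([q for q, _ in choice])
        if u > best_u:
            best, best_u = dict(zip(names, choice)), u
    return best, best_u

best, u = negotiate(profiles, budget=60)
print(best, u)  # both sides settle on quality level 2
```

With a budget of 60, quality level 3 on both sides is infeasible (45 + 30 > 60), so the min-based utility is maximized by level 2 on both sides; a larger budget would shift the outcome upward, which is the adaptivity the container-based negotiation provides.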
43

Cansado, Antonio. "Formal specification and verification of distributed component systems." Nice, 2008. http://www.theses.fr/2008NICE4052.

Full text
Abstract:
Components are self-contained building blocks that communicate through well-defined interfaces, which set up a kind of contract. This contract must guarantee the behavioural compatibility of bound interfaces. This is particularly important when components are distributed and communicate through asynchronous method calls. This thesis addresses the behavioural specification of distributed components. We develop a formal framework that allows us to build behavioural models; after abstraction, these models are a suitable input for state-of-the-art verification tools. The main objective is to specify, to verify, and to generate safe distributed components. To this end, we develop a specification language close to Java. This language is built on top of our behavioural model and provides a powerful high-level abstraction of the system. The benefits are twofold: (i) we can interface with verification tools, so we are able to verify various kinds of properties; and (ii) the specification is complete enough to generate code skeletons defining the control part of the components. Finally, we validate our approach with a point-of-sale case study in the context of the Common Component Model Example (CoCoME). The specificities of the specification language proposed in this thesis are: to deal with hierarchical components that communicate by asynchronous method calls; to give the component behaviour as a set of services; and to provide semantics close to a programming language by dealing with abstractions of user code.
APA, Harvard, Vancouver, ISO, and other styles
44

Barros, Tomás. "Formal specification and verification of distributed component systems." Nice, 2005. http://www.theses.fr/2005NICE4048.

Full text
Abstract:
Secreted phospholipases A2 (sPLA2) are potent inhibitors of Human Immunodeficiency Virus (HIV) entry (Fenard et al., 1999). To better understand their mechanism of action, we cloned an HIV strain resistant to bee-venom sPLA2 (bvPLA2), HIVRBV-3. My thesis project consisted in elucidating the molecular mechanisms that confer its resistance to HIVRBV-3. It is well documented that HIV enters the cell by fusion of the viral membrane with the plasma membrane. Moreover, it is generally accepted that HIV particles entering the cell through the endosomal pathway are degraded in lysosomes and are therefore not infectious. Using three different analysis techniques, we showed that the entry pathway of HIVRBV-3 depends on endosomes and their cytoskeletal motors. Indeed, HIVRBV-3 replication is sensitive to several types of inhibitors of endosome acidification and of actin microfilament polymerization, in different cell types. We show that this original entry mechanism, as well as the resistance to bvPLA2, is carried by the envelope glycoprotein (gp160) of HIVRBV-3. Our ongoing research aims to demonstrate the involvement of distinctive modifications of certain variable loops of the HIVRBV-3 gp120 in this novel entry mechanism. Published data indicate that these gp120 modifications are found in viral strains isolated from long-term non-progressor patients. Our current hypothesis is therefore that human sPLA2s could play a role in controlling viral replication in these individuals.
Secreted phospholipases A2 (sPLA2) are potent inhibitors of Human Immunodeficiency Virus (HIV) replication. In order to gain insight into their antiviral effects, we cloned a bee-venom sPLA2 (bvPLA2) resistant HIV strain, HIVRBV-3. Our goal is to elucidate the molecular mechanisms that confer bvPLA2 resistance to HIVRBV-3. HIV enters the cell via fusion of the viral and plasma membranes. Furthermore, it is generally admitted that endosomal entry is a dead-end route of HIV infection. We show that HIVRBV-3 entry is highly dependent on the molecular mechanisms of endocytosis, particularly those of vesicular trafficking. Using three different lines of investigation, we were able to show that HIVRBV-3 replication in different cell lines is inhibited by lysosomotropic agents and by drugs that affect cytoskeleton (actin microfilament and microtubule) polymerization. We further demonstrate that the HIVRBV-3 envelope glycoprotein directs HIVRBV-3 into this particular entry route and is sufficient to confer bvPLA2 resistance to a bvPLA2-sensitive HIV strain. We are currently investigating the role played by uncommon mutations in the variable loops of the HIVRBV-3 envelope glycoprotein in directing the HIVRBV-3 entry pathway. These uncommon mutations are also specific to long-term non-progressor HIV strains. This led us to assess the role played by endogenous human sPLA2 in the physiopathology of HIV infection. Altogether, our results suggest a new resistance mechanism at the cellular level: HIV may overcome the inhibitory effect of an intracytoplasmic block by using an alternative entry pathway.
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Koping. "Spider II: A component-based distributed computing system." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1874.

Full text
Abstract:
The Spider II system is the second implementation of the Spider project, the first distributed-computing research project in the Department of Computer Science at CSUSB. Spider II is a distributed virtual machine built on top of the UNIX or Linux operating system. It features multi-tasking, load balancing, and fault tolerance, which optimize the performance and stability of the system.
APA, Harvard, Vancouver, ISO, and other styles
46

Sentilles, Séverine. "Towards Efficient Component-Based Software Development of Distributed Embedded Systems." Licentiate thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-7368.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Kotlarsky, Julia. "Management of Globally Distributed Component-Based Software Development Projects." [Rotterdam]: Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam ; Rotterdam : Erasmus University Rotterdam [Host], 2005. http://hdl.handle.net/1765/6772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Niemelä, Eila. "A component framework of a distributed control systems family /." Espoo [Finland] : Technical Research Centre of Finland, 1999. http://www.vtt.fi/inf/pdf/publications/1999/P402.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

LIBORIO, AIRTON JOSE ARAUJO. "SUPPORT FOR ARCHITECTURAL EVOLUTION IN COMPONENT-BASED DISTRIBUTED SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23877@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
The nature of certain software systems requires them to run without interruption. At the same time, many software systems are constantly subject to change, for reasons that include, but are not limited to, infrastructure, bug fixes, the addition of functionality, and changes in the domain logic. Dynamic software evolution consists of changing applications during execution without interrupting them, keeping them available even while these modifications are applied. Component-based distributed systems allow software to be decomposed into clearly separated entities. In such cases, evolution can be reduced to the removal, addition, and modification of those entities, and if these activities can be carried out while the application is running, dynamic software evolution is achieved. Building on this, in this work we created an approach in which distributed architectures developed over the SCS middleware can be manipulated so as to minimize the interruption of parts of the system while certain adaptations are deployed. We applied the mechanism to an already consolidated distributed system, the CAS, an extensible recording infrastructure that supports automatic capture and access of distributed media.
The nature of some software systems determines that they must run without interruption. Furthermore, many software systems are constantly subject to change for reasons that include, but are not limited to, infrastructure changes, bug fixes, the addition of functionality, and changes in the domain logic. Dynamic software evolution consists of changing applications during execution without stopping them, keeping them available even while these modifications are applied. Component-based distributed systems allow software to be decomposed into clearly separated entities. In such cases, evolution can be summarized as the removal, addition, and modification of such entities, and if these activities can be performed while the application is executing, dynamic adaptation is achieved. In this work, we investigated an approach that allows distributed software architectures developed over the SCS middleware to be manipulated in order to minimize system disruption while certain adaptations are deployed. The mechanism was tested in an already consolidated distributed system, the CAS, an extensible recording infrastructure that supports automatic capture and access of distributed media.
APA, Harvard, Vancouver, ISO, and other styles
50

CONDORI, EDWARD JOSE PACHECO. "DEPLOYMENT OF DISTRIBUTED COMPONENT-BASED APPLICATIONS ON CLOUD INFRASTRUCTURES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23645@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
The deployment of distributed component-based applications comprises a set of activities managed by a Deployment Infrastructure. Current applications are becoming increasingly complex, requiring a dynamic, multi-platform target environment. The planning activity is thus the most critical step, as it defines the configuration of the execution infrastructure so as to meet the requirements of an application's target environment. On the other hand, the cloud service model called Infrastructure as a Service (IaaS) offers on-demand computational resources with dynamic, scalable, and elastic characteristics. In this dissertation we extended the Deployment Infrastructure for SCS components to allow private or public clouds to be used as the target environment of a deployment, through the use of a cloud API and flexible policies for specifying a customized target environment. In addition, we hosted the Deployment Infrastructure in the cloud, which allowed us to use on-demand computational resources to instantiate the Deployment Infrastructure services, producing an experimental Platform as a Service (PaaS).
Deployment of distributed component-based applications is composed of a set of activities managed by a Deployment Infrastructure. Current applications are becoming increasingly complex, requiring a multi-platform, dynamic target environment. Thus, the planning activity is the most critical step, because it defines the configuration of the execution infrastructure in order to satisfy the requirements of the application's target environment. On the other hand, the cloud service model called Infrastructure as a Service (IaaS) offers on-demand computational resources with dynamic, scalable, and elastic features. In this work we extended the Deployment Infrastructure for SCS components to support private or public clouds as its target environment, through the use of a cloud API and flexible policies to specify a customized target environment. Additionally, we host the Deployment Infrastructure in the cloud, which allows us to use on-demand computational resources to instantiate Deployment Infrastructure services, creating an experimental Platform as a Service (PaaS).
APA, Harvard, Vancouver, ISO, and other styles
