
Dissertations / Theses on the topic 'Agent-based computing'


Consult the top 43 dissertations / theses for your research on the topic 'Agent-based computing.'


1

Cao, Junwei. "Agent-based resource management for grid computing." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/4172/.

Full text
Abstract:
A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capability. An ideal grid environment should provide access to the available resources in a seamless manner. Resource management is an important infrastructural component of a grid computing environment. The overall aim of resource management is to efficiently schedule applications that need to utilise the available resources in the grid environment. Such goals within the high performance community will rely on accurate performance prediction capabilities. An existing toolkit, known as PACE (Performance Analysis and Characterisation Environment), is used to provide quantitative data concerning the performance of sophisticated applications running on high performance resources. In this thesis an ASCI (Accelerated Strategic Computing Initiative) kernel application, Sweep3D, is used to illustrate the PACE performance prediction capabilities. The validation results show that a reasonable accuracy can be obtained, cross-platform comparisons can be easily undertaken, and the process benefits from a rapid evaluation time. While extremely well-suited for managing a locally distributed multi-computer, the PACE functions do not map well onto a wide-area environment, where heterogeneity, multiple administrative domains, and communication irregularities dramatically complicate the job of resource management. Scalability and adaptability are two key challenges that must be addressed. In this thesis, an A4 (Agile Architecture and Autonomous Agents) methodology is introduced for the development of large-scale distributed software systems with highly dynamic behaviours. An agent is considered to be both a service provider and a service requestor. Agents are organised into a hierarchy with service advertisement and discovery capabilities. 
There are four main performance metrics for an A4 system: service discovery speed, agent system efficiency, workload balancing, and discovery success rate. Coupling the A4 methodology with the PACE functions results in an Agent-based Resource Management System (ARMS), which is implemented for grid computing. The PACE functions supply accurate performance information (e.g. execution time) as input to a local resource scheduler on the fly. At a meta-level, agents advertise their service information and cooperate with each other to discover available resources for grid-enabled applications. A Performance Monitor and Advisor (PMA) is also developed in ARMS to optimise the performance of the agent behaviours. The PMA is capable of performance modelling and simulation of the agents in ARMS and can be used to improve overall system performance. The PMA can monitor agent behaviours in ARMS and reconfigure them with optimised strategies, which include the use of ACTs (Agent Capability Tables), limited service lifetime, limited scope for service advertisement and discovery, agent mobility and service distribution, etc. The main contribution of this work is that it provides a methodology and prototype implementation of a grid Resource Management System (RMS). The system includes a number of original features that cannot be found in existing research solutions.
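The hierarchical advertisement-and-discovery scheme this abstract describes can be sketched roughly as follows. This is a hypothetical illustration, not the ARMS code: the class, method, and service names are all invented.

```python
# Sketch of A4-style hierarchical service advertisement and discovery:
# each agent keeps a capability table (ACT-like) of services it or its
# descendants advertise; an unresolved request is escalated to the parent.

class Agent:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.capabilities = {}  # service name -> providing agent

    def advertise(self, service, provider=None):
        """Record a service locally and propagate the advert up the hierarchy."""
        provider = provider or self.name
        self.capabilities[service] = provider
        if self.parent is not None:
            self.parent.advertise(service, provider)

    def discover(self, service):
        """Resolve locally if possible, otherwise escalate to the parent agent."""
        if service in self.capabilities:
            return self.capabilities[service]
        if self.parent is not None:
            return self.parent.discover(service)
        return None

root = Agent("root")
site = Agent("site-A", parent=root)
worker = Agent("worker-1", parent=site)
worker.advertise("sweep3d/64cpu")

# A sibling subtree can find the resource via the shared root.
other = Agent("site-B", parent=root)
print(other.discover("sweep3d/64cpu"))  # -> worker-1
```

Escalating only unresolved requests keeps discovery traffic local where possible, which is the scalability argument behind the hierarchy.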
2

Tang, Jia. "An agent-based peer-to-peer grid computing architecture." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060508.151716/index.html.

Full text
3

Ruan, Jianhua, Han-Shen Yuh, and Koping Wang. "Spider III: A multi-agent-based distributed computing system." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2249.

Full text
Abstract:
The project, Spider III, presents the architecture and protocol of a multi-agent-based internet distributed computing system, which provides a convenient development and execution environment for transparent task distribution, load balancing, and fault tolerance. Spider is an ongoing distributed computing project in the Department of Computer Science, California State University San Bernardino. It was first proposed as an object-oriented distributed system by Han-Sheng Yuh in his master's thesis in 1997. It was further developed by Koping Wang in his master's project, in which he made a large contribution and implemented the Spider II system.
4

Tashakor, Ghazal. "Scalable agent-based model simulation using distributed computing on system biology." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671332.

Full text
Abstract:
Agent-based modeling is a very useful computational tool for simulating complex behavior using rules at both micro and macro scales. The complexity of this type of modeling lies in defining the rules the agents will follow to represent the structural elements or the static and dynamic behavior patterns. This thesis addresses the definition of complex models of biological networks that represent cancer cells, in order to obtain behaviors under different scenarios by means of simulation and to understand the evolution of the metastatic process, for users who are not experts in computing systems. In addition, a proof of concept has been developed for incorporating dynamic network analysis and machine learning techniques into agent-based models, based on the development of a federated simulation system to improve the decision-making process. For this thesis, the representation of complex graph-based biological networks has been analyzed from the simulation point of view, to investigate how to integrate the topology and functions of this type of network with an agent-based model. For this purpose, the ABM model has been used as a basis for the construction, grouping, and classification of the network elements that represent the structure of a complex and scalable biological network. The simulation of complex models with multiple scales and multiple agents provides a useful tool with which a scientist who is not a computing expert can execute a complex parametric model and use it to analyze scenarios or predict variations according to the different patient profiles considered. The development has focused on an agent-based tumor model that has evolved from a simple and well-known ABM model, into which the variables and dynamics referenced by the Hallmarks of Cancer have been incorporated, towards a complex graph-based model. This graph-based model is used to represent different levels of interaction and dynamics within cells in the evolution of a tumor, allowing different degrees of representation (at the molecular/cellular level).
A simulation environment and workflow have been created to build a complex, scalable network based on a tumor growth scenario. In this environment, dynamic techniques are applied to study the tumor network's growth under different patterns. The experimentation has been carried out using the simulation environment developed, considering the execution of models for different patient profiles as a sample of its functionality, to calculate parameters of interest to the non-computing expert, such as the evolution of the tumor volume. The environment has been designed to discover and classify subgraphs of the agent-based tumor model in order to execute these models on a high-performance computing system. These executions will allow complex scenarios and different profiles of patients with tumor patterns with a high number of cancer cells to be analyzed in a short time.
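The subgraph discovery step that enables distribution can be illustrated with a stdlib-only sketch. The cell graph below, and the use of connected components as distribution units, are assumptions made for illustration; they are not the thesis's actual workflow.

```python
# Sketch of the partitioning idea: cells are nodes of a tumor graph, and
# connected subgraphs are extracted so each candidate cluster can be
# simulated on a separate node of an HPC system.
from collections import defaultdict, deque

def connected_subgraphs(edges, nodes):
    """Group nodes into connected components (candidate distribution units)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, parts = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, queue = set(), deque([n])
        while queue:
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(adj[v] - comp)
        seen |= comp
        parts.append(comp)
    return parts

cells = ["c1", "c2", "c3", "c4", "c5"]
links = [("c1", "c2"), ("c2", "c3"), ("c4", "c5")]
parts = connected_subgraphs(links, cells)  # two clusters of cells
```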
5

Bicak, Mesude. "Agent-based modelling of decentralized ant behaviour using high performance computing." Thesis, University of Sheffield, 2011. http://etheses.whiterose.ac.uk/1392/.

Full text
Abstract:
Ant colonies are complex biological systems that respond to changing conditions in nature by solving dynamic problems. Their ability of decentralized decision-making and their self-organized trail systems have inspired computer scientists since the 1990s, and consequently initiated a class of heuristic search algorithms, known as ant colony optimization (ACO) algorithms. These have proven to be very effective in solving combinatorial optimisation problems, especially in the field of telecommunication. The major challenge in social insect research is understanding how colony-level behaviour emerges from individual interactions. Models to date focus on simple pheromone usage with mathematically devised behaviour, which deviates largely from real ant behaviour. Furthermore, simulating large-scale behaviour at the individual level is a difficult computational challenge; hence models fail to simulate realistic colony sizes and dimensions for foraging environments. In this thesis, FLAME, an agent-based modelling (ABM) framework capable of producing parallelisable models, was used as the modelling platform and simulations were performed on a High Performance Computing (HPC) grid. This enabled large-scale simulations of complex models to be run in parallel on a grid, without compromising on the time taken to attain results. Furthermore, the advanced features of the framework, such as dynamic creation of agents during a simulation, provided realistic grounds for modelling pheromones and the environment. The ABM approach through FLAME was utilized to improve existing models of the Pharaoh's ants (Monomorium pharaonis), focusing on their foraging strategies. Based on related biological research, a number of hypotheses were further tested: (i) the ability of the specialist 'U-turner' ants in trail maintenance, (ii) the trail choices performed at bifurcations, and (iii) the ability of ants to deposit increased concentrations of pheromones based on food quality.
Heterogeneous colonies with 7% U-turner ant agents were further shown to perform significantly better in foraging compared to homogeneous colonies. Furthermore, laying pheromones with a higher intensity based on food quality was shown to be beneficial for the Pharaoh's ant colonies in switching to more rewarding trails. The movement of the Pharaoh's ants in unexplored areas (without pheromones) was also investigated by conducting biological experiments. Video tracking was used to extract movement vectors from the recordings of experiments and the data obtained was subject to statistical analysis in order to devise parameters for ant movement in the models developed. Overall, this research makes contributions to biology and computer science research by: (i) utilizing ABM and HPC via FLAME to reduce technological challenges, (ii) further validating existing hypotheses through realistic models, (iii) developing a video tracking system to acquire experimental data, and (iv) discussing potential applications to emergent telecommunication and networking problems.
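The bifurcation-choice and quality-dependent deposition hypotheses above follow the standard ACO pattern. A minimal sketch of that pattern is below; the function names, parameters, and values are invented for illustration and are not taken from the FLAME models.

```python
import random

# Illustrative ACO-style rules: branch choice at a trail bifurcation with
# probability proportional to pheromone**alpha, and deposition scaled by
# food quality (cf. hypothesis (iii) in the abstract).

def choose_branch(pheromone, alpha=2.0, rng=random):
    """Pick a branch index with probability proportional to pheromone**alpha."""
    weights = [p ** alpha for p in pheromone]
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def deposit(pheromone, branch, food_quality, base=1.0):
    """Quality-dependent pheromone deposition on the chosen branch."""
    pheromone[branch] += base * food_quality

trail = [1.0, 1.0]
for _ in range(200):
    b = choose_branch(trail)
    deposit(trail, b, food_quality=2.0 if b == 0 else 1.0)
# Positive feedback typically concentrates pheromone on the richer branch 0.
```

The positive-feedback loop here is what lets a colony "switch to more rewarding trails" as described above, without any ant knowing the global trail layout.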
6

Gusukuma, Luke. "GPU Based Large Scale Multi-Agent Crowd Simulation and Path Planning." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/78098.

Full text
Abstract:
Crowd simulation is used for many applications including (but not limited to) video games, building planning, training simulators, and various virtual environment applications. In particular, crowd simulation is most useful when real-life practice would be impractical, such as repeatedly evacuating a building, testing the crowd flow for various building blueprints, or placing law enforcers in actual crowd-suppression circumstances. In our work, we approach the fidelity-versus-scalability problem of crowd simulation from two angles, a programmability angle and a scalability angle, by creating a new methodology that builds on a struct-of-arrays approach and transforms it into an Object-Oriented Struct of Arrays approach. While the design pattern itself is applied to crowd simulation in our work, the application of crowd simulation exemplifies the variety of applications for which the design pattern can be used.
Master of Science
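The Object-Oriented Struct of Arrays idea can be sketched in a few lines. This is an illustrative reconstruction with invented names, not the thesis's GPU code; it shows why per-field arrays suit massively parallel agent updates while an object-style facade keeps the programming model familiar.

```python
# Struct-of-arrays agent store with an object-oriented per-agent view.
# Field-wise storage keeps memory access contiguous when thousands of
# agent positions update per frame, which maps well onto GPU kernels.

class CrowdSoA:
    def __init__(self, n):
        self.x = [0.0] * n   # one contiguous array per field
        self.y = [0.0] * n
        self.vx = [0.0] * n
        self.vy = [0.0] * n

    def step(self, dt):
        # The loop body touches one array per field, the shape of a
        # data-parallel kernel over agent indices.
        for i in range(len(self.x)):
            self.x[i] += self.vx[i] * dt
            self.y[i] += self.vy[i] * dt

    def agent(self, i):
        """Object-oriented facade over the i-th entries (the 'OO' in OOSoA)."""
        return {"x": self.x[i], "y": self.y[i]}

crowd = CrowdSoA(3)
crowd.vx = [1.0, 2.0, 3.0]
crowd.step(0.5)
print(crowd.agent(2))  # -> {'x': 1.5, 'y': 0.0}
```

The contrast is with an array-of-structs layout (one object per agent), which scatters each field across memory and serializes poorly on GPUs.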
7

Mengistu, Dawit. "Multi-Agent Based Simulations in the Grid Environment." Licentiate thesis, Karlskrona : Blekinge Institute of Technology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00371.

Full text
Abstract:
The computational Grid has become an important infrastructure as an execution environment for scientific applications that require a large amount of computing resources. Applications which would otherwise be unmanageable or take a prohibitively long execution time under previous computing paradigms can now be executed efficiently on the Grid within a reasonable time. Multi-agent based simulation (MABS) is a methodology used to study and understand the dynamics of real-world phenomena in domains involving interaction and/or cooperative problem solving, where the participants are characterized as entities having autonomous and social behaviour. For certain domains the size of the simulation is extremely large, and intractable without employing adequate computing resources such as the Grid. Although the Grid has brought immense opportunities to resource-demanding applications such as MABS, it has also brought with it a number of challenges related to performance. Performance problems may have their origins in the computing infrastructure, in the application itself, or in both. This thesis aims at improving the performance of MABS applications by overcoming problems inherent to the behaviour of MABS applications. It also studies the extent to which MABS technologies have been exploited in the field of simulation and finds ways to adapt existing technologies for the Grid. It investigates performance monitoring and prediction systems in the Grid environment and their implementation for MABS applications, with the purpose of identifying application-related performance problems and their solutions. Our research shows that large-scale MABS applications have not been implemented, despite the fact that many problem domains cannot be studied properly with only partial simulation. We assume that this is due to the lack of appropriate tools, such as MABS platforms for the Grid.
Another important finding of this work is the improvement of application performance through the use of MABS-specific middleware.
8

Murdock, J. William. "Self-improvement through self-understanding: model-based reflection for agent adaptation." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/8225.

Full text
9

Karimian, Kimia. "BioCompT - A Tutorial on Bio-Molecular Computing." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1367943120.

Full text
10

Liu, Zhengchun. "Modeling and simulation for healthcare operations management using high performance computing and agent-based model." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/392743.

Full text
Abstract:
Hospital-based emergency departments (EDs) are highly integrated service units that primarily handle the needs of patients arriving without prior appointment and with uncertain conditions. In this context, analysis and management of patient flows play a key role in developing policies and decision tools for overall performance improvement of the system. However, patient flows in EDs are considered to be very complex because of the different pathways patients may take and the inherent uncertainty and variability of healthcare processes. Due to the complexity and crucial role of an ED in the healthcare system, the ability to accurately represent, simulate and predict the performance of an ED is invaluable for decision makers to solve operations management problems. One way to realize this requirement is by modeling and simulation. Armed with the ability to execute a compute-intensive model and analyze huge datasets, the overall goal of this study is to develop tools to better understand the complexity (explain), evaluate policy (predict) and improve efficiency (optimize) of ED units. The two main contributions are: (1) An agent-based model for quantitatively predicting and analyzing the complex behavior of emergency departments. The objective of this model is to grasp the non-linear association between macro-level features and micro-level behavior, with the goal of better understanding the bottlenecks of ED performance, and to provide the ability to quantify such performance under defined conditions. The model was built in collaboration with healthcare staff in a typical ED and has been implemented in the NetLogo modeling environment. In order to validate its adaptivity, the presented model has been calibrated to emulate a real ED in Spain; simulation results have shown the feasibility and suitability of using agent-based modeling and simulation techniques to study the ED system.
Case studies are provided to present some capabilities of the simulator for quantitatively analyzing ED behavior and supporting decision making. (2) A simulation- and optimization-based methodology for calibrating model parameters under data scarcity. To achieve high fidelity and credibility in conducting prediction and exploration of the actual system with simulation models, a rigorous calibration and validation procedure must first be applied. However, one of the key issues in calibration is the acquisition of valid source information from the target system. The aim of this contribution is to develop a systematic method to automatically calibrate a general emergency department model with incomplete data. The proposed calibration method enables simulation users to calibrate the general model for simulating their system without the involvement of model developers. High performance computing techniques were used to efficiently search for the optimal set of parameters. The case study indicates that the proposed method appears to be capable of properly calibrating and validating the simulation model with incomplete data. We believe that an automatic calibration tool released with a general ED model is promising for promoting the application of simulation in ED studies. In addition, the integration of the ED simulator and optimization techniques could be used for systematic ED performance optimization as well. Starting from simulating emergency departments, our efforts proved the feasibility and suitability of using agent-based modeling methods to study healthcare systems.
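The simulate-and-optimize calibration loop of contribution (2) can be sketched minimally. The single-server queue model, the parameter grid, and the observed value below are invented for illustration; they stand in for the NetLogo ED model and its real calibration targets.

```python
import random

# Toy calibration loop: tune one unknown parameter of a minimal ED queue
# model so the simulated mean length-of-stay (LOS) matches an observed value.

def simulate_mean_los(service_time, n_patients=500, arrival_gap=1.0, seed=0):
    """Mean LOS for Poisson arrivals served one at a time (hours, invented units)."""
    rng = random.Random(seed)
    free_at, total, t = 0.0, 0.0, 0.0
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / arrival_gap)  # next arrival time
        start = max(t, free_at)                  # wait if the doctor is busy
        free_at = start + service_time
        total += free_at - t                     # this patient's length of stay
    return total / n_patients

observed_los = 2.4  # assumed "field data" target
# Grid search: pick the service time whose simulated LOS is closest to the target.
best = min((abs(simulate_mean_los(s) - observed_los), s)
           for s in [0.2 * k for k in range(1, 10)])
calibrated_service_time = best[1]
```

The thesis replaces this brute-force grid with HPC-backed optimization over many parameters, but the structure (simulate, compare to observation, adjust) is the same.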
11

Skjerven, Brian M. "A parallel implementation of an agent-based brain tumor model." Link to electronic thesis, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-060507-172337/.

Full text
Abstract:
Thesis (M.S.) -- Worcester Polytechnic Institute.
Keywords: Visualization; Numerical analysis; Computational biology; Scientific computation; High-performance computing. Includes bibliographical references (p.19).
12

Liu, Ruimin. "An agent-based service-oriented approach to evolving legacy software systems into a pervasive computing environment." Thesis, De Montfort University, 2010. http://hdl.handle.net/2086/4023.

Full text
Abstract:
This thesis focuses on an Agent-Based Service-Oriented approach to evolving legacy system into a Pervasive Computing environment. The methodology consists of multiple phases: using reverse engineering techniques to comprehend and decompose legacy systems, employing XML and Web Services to transform and represent a legacy system as pervasive services, and integrating these pervasive services into pervasive computing environments with agent based integration technology. A legacy intelligent building system is used as a case study for experiments with the approach, which demonstrates that the proposed approach has the ability to evolve legacy systems into pervasive service environments seamlessly. Conclusion is drawn based on analysis and further research directions are also discussed.
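The XML wrapping phase can be illustrated with a small sketch. The legacy routine and message formats below are invented stand-ins, not the thesis's actual intelligent-building interfaces.

```python
# Sketch of the wrap-and-expose step: a legacy routine is wrapped so its
# input and output travel as XML, the representation used to present
# legacy functionality as pervasive services.
import xml.etree.ElementTree as ET

def legacy_set_temperature(zone, value):
    """Stand-in for an existing legacy building-control routine."""
    return f"zone {zone} set to {value}"

def service_endpoint(request_xml):
    """Parse an XML request, call the legacy routine, return an XML reply."""
    req = ET.fromstring(request_xml)
    result = legacy_set_temperature(req.get("zone"), req.get("value"))
    reply = ET.Element("reply", status="ok")
    reply.text = result
    return ET.tostring(reply, encoding="unicode")

print(service_endpoint('<request zone="A1" value="21"/>'))
# -> <reply status="ok">zone A1 set to 21</reply>
```

An integration agent would then advertise this endpoint into the pervasive environment, so callers never touch the legacy code directly.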
13

Shiode, Haruumi. "In-building Location Sensing Based on WLAN Signal Strength : Realizing a Presence User Agent." Thesis, KTH, Kommunikationssystem, CoS, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91868.

Full text
Abstract:
Exploiting context-aware environments, where sensors scattered in an environment update presence servers to indicate environmental changes, can enable new services. Such systems have become feasible both in terms of technical difficulty and cost. A current focus in this area of research is how a context-aware system should be designed so that it reduces both the cost and complexity of the infrastructure while still providing the desired services. One of the key components of many context-aware systems is location sensing, because a user's location is one of the most used elements of information in context-aware services. In this thesis, we address cost-effective location services by utilizing measurements of WLAN signal strength. We derive from these measurements an estimate of a device's location, and make this location information available via a SIP Presence User Agent, thus making location information readily available to services that might wish to use this information, while hiding the details of how this information is acquired from these services.
APA, Harvard, Vancouver, ISO, and other styles
14

Orichel, Thomas. "Adaptive rules in emergent logistics (ARIEL) : an agent-based analysis environment to study adaptive route-finding in changing road-networks /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FOrichel.pdf.

Full text
Abstract:
Thesis (M.S. in Modeling, Virtual Environments and Simulation and M.S. in Computer Science)--Naval Postgraduate School, June 2003.
"This thesis is done in cooperation with the MOVES Institute"--Cover. Thesis advisor(s): Eugene Paulo, John Hiles. Includes bibliographical references (p. 49). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
15

Al-ou'n, Ashraf M. S. "VM Allocation in Cloud Datacenters Based on the Multi-Agent System. An Investigation into the Design and Response Time Analysis of a Multi-Agent-based Virtual Machine (VM) Allocation/Placement Policy in Cloud Datacenters." Thesis, University of Bradford, 2017. http://hdl.handle.net/10454/16067.

Full text
Abstract:
Recent years have witnessed a surge in demand for infrastructure and services to cover the high demands of processing big chunks of data and applications, resulting in mega Cloud Datacenters. A datacenter is highly complex, and it is increasingly difficult to identify, and quickly and efficiently allocate, an appropriate host for a requested virtual machine (VM). Establishing good awareness of all of a datacenter's resources enables allocation ("placement") policies to make the best decisions, reducing the time needed to allocate and create the VM(s) at the appropriate host(s). However, current placement ("allocation") algorithms and policies do not focus efficiently on awareness of datacenter resources and, moreover, are based on conventional static techniques, which adversely impact the progress of allocation. This thesis proposes a new Agent-based allocation/placement policy that employs features of the Multi-Agent system paradigm to establish good awareness of Cloud Datacenter resources and to provide efficient allocation decisions for the requested VMs. Specifically: (a) the Multi-Agent concept is used as part of the placement policy; (b) a Contract Net Protocol is devised to establish good awareness; and (c) a verification process is developed to verify the full dimensions of VM specifications during allocation. The results show a reduction in the response time of VM allocation and improved usage of occupied resources. The proposed Agent-based policy was implemented using the CloudSim toolkit and was consequently compared, through a series of typical numerical experiments, with the toolkit's default policy. The comparative study was carried out in terms of the duration of VM allocation and other aspects such as the number of available VM types and the amount of occupied resources. Moreover, a two-stage comparative study was introduced in this thesis.
Firstly, the proposed policy was compared with four state-of-the-art algorithms, namely the Random algorithm and three one-dimensional Bin-Packing algorithms. Secondly, the three Bin-Packing algorithms were enhanced with a two-dimensional verification structure and compared against the new algorithm of the proposed Agent-based policy. This rigorous comparative study showed that, across the typical numerical experiments of all stages, the proposed Agent-based policy had superior performance in terms of allocation times. Finally, research avenues arising from this thesis are outlined.
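The contrast between the one-dimensional baselines and the two-dimensional verification described above can be sketched with a minimal First-Fit placement routine; the host capacities, VM requests, and dimension names are illustrative assumptions, not the thesis's actual implementation:

```python
# Illustrative First-Fit VM placement. The one-dimensional variant checks only
# CPU, while the two-dimensional variant also verifies RAM, mirroring the
# enhanced Bin-Packing comparison described above. All numbers are made up.

def first_fit(hosts, vm, dimensions=("cpu",)):
    """Place vm on the first host with enough free capacity in all dimensions."""
    for host in hosts:
        if all(host["free"][d] >= vm[d] for d in dimensions):
            for d in dimensions:
                host["free"][d] -= vm[d]
            return host["name"]
    return None  # no suitable host found

def make_hosts():
    return [{"name": "h1", "free": {"cpu": 4, "ram": 4}},
            {"name": "h2", "free": {"cpu": 8, "ram": 16}}]

vm = {"cpu": 2, "ram": 8}
print(first_fit(make_hosts(), vm))                  # h1: CPU fits, RAM ignored
print(first_fit(make_hosts(), vm, ("cpu", "ram")))  # h2: RAM is verified too
```

The one-dimensional check happily over-commits h1's RAM; verifying every VM dimension avoids such infeasible placements at the cost of a stricter search.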
APA, Harvard, Vancouver, ISO, and other styles
16

Shojaei, Elham. "Simulation for Investigating Impact of Dependent and Independent Factors on Emergency Department System Using High Performance Computing and Agent-based Modeling." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/670856.

Full text
Abstract:
Increased life expectancy and population aging in Spain, along with the corresponding health conditions such as non-communicable diseases (NCDs), have been suggested to contribute to higher demands on the Emergency Department (ED). Spain is one such country in which EDs carry a very high burden of patients with NCDs. These patients very often need to access healthcare systems, and many of them must be readmitted even though they are not in an emergency or dangerous situation. Furthermore, many NCDs are a consequence of lifestyle choices that can be controlled. Usually, the living conditions of each chronic patient affect health variables and change their values, which can shift patients with NCDs from stability to instability and result in an ED visit. In this study, a new method for predicting future performance and demand in the emergency department (ED) in Spain is presented. Prediction and quantification of ED behavior are challenging, however, as the ED is one of the most complex parts of a hospital. The future behavior of Spain's EDs was predicted by the use of detailed computational approaches integrated with clinical data. First, statistical models were developed to predict how the population and age distribution of patients with non-communicable diseases will change in Spain in future years. Then, an agent-based modeling approach was used to simulate the emergency department and predict the impact of changes in the population and age distribution of patients with NCDs on ED performance, reflected in patient Length of Stay (LoS), between the years 2019 and 2039. In another part of this study, we propose a model that helps to analyze the behavior of chronic disease patients, with a focus on heart failure patients, based on their lifestyle.
We consider how living conditions affect the signs and symptoms of chronic disease and, accordingly, how these signs and symptoms affect chronic disease stability. We use an agent-based model, a state machine, and a fuzzy logic system to develop the model. Specifically, we model the required 'living condition' parameters that can influence the required medical variables; these variables determine the stability class of the chronic disease. This thesis also investigates the impact of Tele-ED on the behavior, care time, and efficiency of the ED and on hospital utilization, and proposes a model for a Tele-ED that delivers medical services online. Simulation and agent-based modeling are powerful tools that allow us to model and predict the behavior of the ED as a complex system for a given set of desired inputs: each agent responds to its environment and to other agents based on a set of rules. This thesis answers several questions regarding the future demand and performance of the ED and provides health care providers with quantitative information on economic impact, affordability, and the staff and physical resources required. Prediction of the behavior of patients with NCDs can also benefit health policy in planning to increase health education in the community, reduce risky behavior, and teach healthy lifelong decision-making. Prediction of the behavior of Spain's EDs in future years can help care providers and decision-makers improve health care management.
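The chain described above (living-condition parameters influence medical variables, and a fuzzy grading of those variables yields a stability class) can be sketched very roughly as follows; the variable names, membership breakpoints, and rule are hypothetical illustrations, not the thesis's actual clinical model.

```python
# Hypothetical sketch of fuzzy-style stability classification: a living
# condition (salt intake) shifts a medical variable (heart rate), whose fuzzy
# membership in "unstable" picks the stability class. Illustrative only.

def membership(value, low, high):
    """Degree of membership: 0 below `low`, 1 above `high`, linear in between."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def stability_class(heart_rate, salt_intake_g):
    # Living condition pushes the medical variable upward (made-up coefficient).
    adjusted = heart_rate + 2.0 * salt_intake_g
    risk = membership(adjusted, 80.0, 120.0)  # degree of "unstable"
    if risk < 0.3:
        return "stable"
    if risk < 0.7:
        return "borderline"
    return "unstable"

print(stability_class(70, 2))  # stable
print(stability_class(95, 8))  # unstable
```

In an agent-based setting, each patient agent would re-evaluate such a classification at every step and transition its state machine (and possibly visit the ED) when the class changes.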
APA, Harvard, Vancouver, ISO, and other styles
17

da, Silva Borges de Santana Francisco José. "Care HPS: A high performance simulation methodology for complex agent-based models." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/395209.

Full text
Abstract:
Parallel and distributed simulation is a powerful tool for developing realistic agent-based modeling and simulation (ABMS). ABMS can allow scientists to reach conclusions and gain knowledge about the system under study. But this is only possible if the simulations offer realistic results, meaning simulations whose results are validated against reality and that can be used for prediction or to explain some phenomenon. These simulations require reliable results supported by statistical approaches. In addition, they have high computational complexity, because thousands of independent agents are used to model the system. For these reasons, this kind of simulation requires long execution times. Consequently, one possible solution is to use parallel and distributed simulations that take advantage of the powerful architectures available nowadays. Thus, for the advance of computing science, it is important that High Performance Computing (HPC) techniques, solutions and approaches be proposed and studied. In the literature, we can find some agent-based model tools that use HPC to execute agent-based modeling and simulations. However, none of these tools are designed to execute HPC experiments in order to propose new approaches, techniques and solutions for ABMS that require high performance without great programming effort. In this thesis, we introduce a methodology for doing research on HPC for complex agent-based models that demand high performance solutions. This methodology, named Care High Performance Simulation (HPS), enables researchers to: 1) develop techniques and solutions for high performance parallel and distributed simulations of agent-based models; and, 2) study, design and implement complex agent-based models that require high performance computing solutions.
The methodology was designed to make it easy and quick to develop new ABMs, as well as to extend, implement, test and analyze new solutions for the main issues of parallel and distributed simulation, such as synchronization, communication, load and computing balancing, and partitioning algorithms. We also developed in Care HPS several agent-based models and HPC approaches and techniques that can be used by researchers in HPC for ABMs that require high performance solutions. We conducted experiments with the aim of showing the completeness and functionality of this methodology and evaluating the usefulness of the results. These experiments focus on: 1) presenting the results of the proposed HPC techniques and approaches used in Care HPS; 2) showing that the features of Care HPS achieve the proposed aims; and, 3) presenting the scalability results of Care HPS. As a result, we show that Care HPS can be used as a scientific instrument for the advance of the field of agent-based parallel and distributed simulation.
APA, Harvard, Vancouver, ISO, and other styles
18

Harvey, Daniel Gordon. "Efficient approaches to simulating individual-based cell population models." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:95f50f05-9cf5-4c58-9115-aff7aabdfd6f.

Full text
Abstract:
Computational modelling of populations of cells has been applied to further understanding in a range of biological fields, from cell sorting to tumour development. The ability to analyse the emergent population-level effects of variation at the cellular and subcellular level makes it a powerful approach. As more detailed models have been proposed, the demand for computational power has increased. While developments in microchip technology continue to increase the power of individual compute units available to the research community, the use of parallel computing offers an immediate increase in available computing power. To make full use of parallel computing technology it is necessary to develop specialised algorithms. To that end, this thesis is concerned with the development, implementation and application of a novel parallel algorithm for the simulation of an off-lattice individual-based model of a population of cells. We first use the Message Passing Interface to develop a parallel algorithm for the overlapping spheres model which we implement in the Chaste software library. We draw on approaches for parallelising molecular dynamics simulations to develop a spatial decomposition approach to dividing data between processors. By using functions designed for saving and loading the state of simulations, our implementation allows for the parallel simulation of all subcellular models implemented in Chaste, as well as cell-cell interactions that depend on any of the cell state variables. Our implementation allows for faithful replication of model cells that migrate between processors during a simulation. We validate our parallel implementation by comparing results with the extensively tested serial implementation in Chaste. While the use of the Message Passing Interface means that our algorithm may be used on shared- and distributed-memory systems, we find that parallel performance is limited due to high communication costs. 
To address this we apply a series of optimisations that improve the scaling of our algorithm both in terms of compute time and memory consumption for given benchmark problems. To demonstrate an example application of our work to a biological problem, we extend our algorithm to enable parallel simulation of the Subcellular Element Model (S.A. Sandersius and T.J. Newman. Phys. Biol., 5:015002, 2008). By considering subcellular biomechanical heterogeneity we study the impact of a stiffer nuclear region within cells on the initiation of buckling of a compressed epithelial layer. The optimised parallel algorithm decreases computation time for a single simulation in this study by an order of magnitude, reducing computation time from over a week to a single day.
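The spatial decomposition approach described above can be sketched minimally: cells are binned into per-process strips of the domain, and cells within the interaction radius of a strip boundary are flagged as halo (ghost) cells to be exchanged with the neighbouring process. The 1-D strip layout, radius, and data layout below are illustrative assumptions, not Chaste's actual implementation.

```python
# Minimal sketch of 1-D spatial decomposition for an off-lattice cell
# population: the domain [0, width) is split into equal strips, one per
# process, and cells near a strip boundary are marked as halo cells that
# a neighbouring process would need for force calculations. Illustrative only.

def decompose(cells, width, n_procs, interaction_radius):
    strip = width / n_procs
    owned = {p: [] for p in range(n_procs)}
    halo = {p: [] for p in range(n_procs)}
    for x, y in cells:
        p = min(int(x // strip), n_procs - 1)  # owning process of this cell
        owned[p].append((x, y))
        # Close to the left or right strip boundary? Then the neighbouring
        # process needs a ghost copy of this cell.
        if p > 0 and x - p * strip < interaction_radius:
            halo[p - 1].append((x, y))
        if p < n_procs - 1 and (p + 1) * strip - x < interaction_radius:
            halo[p + 1].append((x, y))
    return owned, halo

cells = [(0.5, 0.0), (4.9, 1.0), (5.2, 1.0), (9.0, 0.5)]
owned, halo = decompose(cells, width=10.0, n_procs=2, interaction_radius=0.5)
print(owned[0])  # [(0.5, 0.0), (4.9, 1.0)]
print(halo[0])   # [(5.2, 1.0)] - ghost copy received from process 1
```

In an MPI implementation, the halo lists become the payload of point-to-point messages between neighbouring ranks at each timestep, which is exactly where the communication cost discussed above arises.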
APA, Harvard, Vancouver, ISO, and other styles
19

Nasman, James M. "Deployed virtual consulting : the fusion of wearable computing, collaborative technology, augmented reality and intelligent agents to support fleet aviation maintenance /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FNasman.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Alex Bordetsky, Gurminder Singh. Includes bibliographical references (p. 49). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
20

Azzam, Adel R. "Survey of Autonomic Computing and Experiments on JMX-based Autonomic Features." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2123.

Full text
Abstract:
Autonomic Computing (AC) aims to solve the problem of managing the rapidly growing complexity of Information Technology systems by creating self-managing systems. In this thesis, we survey the progress of the AC field and study the requirements, models and architectures of AC. The commonly recognized AC requirements are four properties: self-configuring, self-healing, self-optimizing, and self-protecting. The recommended software architecture is the MAPE-K model, containing four modules - monitor, analyze, plan and execute - as well as the knowledge repository. In the modern software marketplace, Java Management Extensions (JMX) has facilitated one function of the AC requirements: monitoring. Using JMX, we implemented a package that assists programming for AC features including socket management, logging, and recovery of distributed computation. In the experiments, we not only exercised powerful Java capabilities that are unknown to many educators, but also illustrated the feasibility of learning AC in senior computer science courses.
APA, Harvard, Vancouver, ISO, and other styles
21

Vasudevan, Swetha. "Immune Based Event-Incident Model for Intrusion Detection Systems: A Nature Inspired Approach to Secure Computing." [Kent, Ohio] : Kent State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1182821562.

Full text
Abstract:
Thesis (M.S.)--Kent State University, 2007.
Title from PDF t.p. (viewed Mar. 19, 2009). Advisor: Michael Rothstein. Keywords: intrusion detection systems, immune system, immune detectors, intrusion detection squad, multi-agent system. Includes bibliographical references (p. 62-66).
APA, Harvard, Vancouver, ISO, and other styles
22

Ji, Meng. "Graph-Based Control of Networked Systems." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16313.

Full text
Abstract:
Networked systems have attracted great interest from the control community during the last decade. Several issues arising from recent research are addressed in this dissertation. Connectedness is one of the important conditions enabling distributed coordination in a networked system; nonetheless, until recently it was simply assumed in most implementations, especially in continuous-time applications. A nonlinear weighting strategy is proposed in this dissertation to solve the connectedness preserving problem, and both the rendezvous and formation problems are addressed in the context of homogeneous networks. Controllability of heterogeneous networks is another issue that has long been overlooked; this dissertation contributes a graph-theoretical interpretation of controllability. Distributed sensor networks make up another important class of networked systems. A novel estimation strategy is proposed in this dissertation; the observability problem is raised in the context of this strategy, and a graph-theoretical interpretation is derived as well. The contributions of this dissertation are as follows. It solves the connectedness preserving problem for networked systems and, based on that, proposes a formation process. For heterogeneous networks, the leader-follower structure is studied and necessary and sufficient conditions are presented for the system to be controllable. A novel estimation strategy that can improve performance is proposed for distributed sensor networks; the observability problem is studied for this estimation strategy and a necessary condition is obtained. This work is among the first to provide graph-theoretical interpretations of the controllability and observability issues.
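As a hedged illustration of graph-based controllability (the dissertation's own conditions are not reproduced here), the standard leader-follower consensus model x' = -L_f x + b u, with L_f the follower-follower grounded Laplacian and b the leader-adjacency vector, can be tested numerically with the Kalman rank condition; the two example graphs are assumptions chosen to show a controllable and an uncontrollable case.

```python
# Kalman rank test for leader-follower consensus dynamics x' = -Lf x + b u.
# Pure-Python Gaussian elimination keeps the sketch dependency-free.

def rank(matrix, eps=1e-9):
    m = [row[:] for row in matrix]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > eps), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > eps:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def controllable(Lf, b):
    n = len(b)
    A = [[-x for x in row] for row in Lf]
    cols = [b]
    for _ in range(n - 1):
        prev = cols[-1]
        cols.append([sum(A[i][j] * prev[j] for j in range(n)) for i in range(n)])
    # Controllability matrix [b, Ab, ..., A^(n-1) b], assembled row-wise.
    return rank([[c[i] for c in cols] for i in range(n)]) == n

# Path leader - f1 - f2 (asymmetric): controllable.
print(controllable([[2, -1], [-1, 1]], [1, 0]))  # True
# Leader tied identically to two unconnected followers (symmetric): not.
print(controllable([[1, 0], [0, 1]], [1, 1]))    # False
```

The second example reflects the graph-theoretical intuition: followers that are indistinguishable from the leader's viewpoint (a graph symmetry) cannot be steered independently.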
APA, Harvard, Vancouver, ISO, and other styles
23

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Full text
Abstract:
Supply Chain Event Management (SCEM) denotes a subdiscipline of supply chain management and offers companies a means of optimising logistics performance and costs by reacting early to critical exceptional events in the value chain. Owing to conditions such as global logistics structures, a large variety of articles, and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruptive events. Against this background, after reviewing the essential fundamentals, this dissertation first examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, after presenting existing SCEM architecture concepts, it outlines design options for a system architecture based on the design principles of service orientation. In this context, SCEM-relevant business services are also identified. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the benefit potential of SCEM systems. After presenting approaches suitable for determining such benefits, the benefit is demonstrated using a practical example and, together with the results of a literature review, consolidated into an overview of SCEM benefit effects. This also examines what additional advantages a service-oriented architecture design offers companies. The conclusion summarises the main findings of the work and, in an outlook, discusses both the relevance of the results for mastering future challenges and the starting points they offer for subsequent research.
APA, Harvard, Vancouver, ISO, and other styles
24

O’Reilly, Sean Patrick. "Agent-based crowd simulation using GPU computing." Thesis, 2014. http://hdl.handle.net/10210/11428.

Full text
Abstract:
M.Sc. (Information Technology)
The purpose of the research is to investigate agent-based approaches to virtual crowd simulation. Crowds are ubiquitous and are becoming an increasingly common phenomenon in modern society, particularly in urban settings. As such, crowd simulation systems are becoming increasingly popular in training simulations, pedestrian modelling, emergency simulations, and multimedia. One of the primary challenges in crowd simulation is the ability to model realistic, large-scale crowd behaviours in real time. This is a challenging problem, as the size, visual fidelity, and complex behaviour models of the crowd all have an impact on the available computational resources. In the last few years, the graphics processing unit (GPU) has presented itself as a viable computational resource for general-purpose computation. Traditionally, GPUs were used solely for their ability to efficiently compute operations related to graphics applications. However, the modern GPU is a highly parallel programmable processor, with substantially higher peak arithmetic and memory bandwidth than its central processing unit (CPU) counterpart. The GPU’s architecture makes it a suitable processing resource for computations that are parallel or distributed in nature. One attribute of multi-agent systems (MASs) is that they are inherently decentralised. As such, a MAS that leverages advancements in GPU computing may provide a solution for crowd simulation. The research investigates techniques and methods for general-purpose crowd simulation, including topics in agent behavioural models, path-planning, collision avoidance and agent steering. The research also investigates how GPU computing has been utilised to address these computationally intensive problem domains. Based on the outcomes of the research, an agent-based model, Massively Parallel Crowds (MPCrowds), is proposed to address virtual crowd simulation, using the GPU as an additional resource for agent computation.
APA, Harvard, Vancouver, ISO, and other styles
25

Chou, Yu-Cheng. "Autonomic mobile agent-based parallel computing for distributed systems." Diss., 2009. http://proquest.umi.com/pqdweb?did=1983628981&sid=1&Fmt=2&clientId=48051&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Shih, Kai-Yao, and 施凱耀. "Agent-based Protocol for Fair Trading in Grid Computing." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/93580974024218306322.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Information Management
95
Due to the growing scale of the Grid economy, driven by widespread broadband Internet and high-performance personal computers, the exchange of valuable documents, such as computing results traded for payment between a service consumer and a service provider, has become increasingly important for today's Grid environments. This has motivated us to propose a new protocol for fair resource trading between a Grid service consumer and a Grid service provider, with the assistance of a Grid bank and an off-line trusted third party. The main contributions of our protocol are threefold. First, it offers not only strong security but also true fairness. Second, it supports secure and convenient payment by using a reinforced randomized RSA-based partially blind signature scheme for electronic cash. Third, it can be adapted to Grid middleware such as the Globus Toolkit at little integration cost. In this thesis we state the assumptions underlying our protocol design, define the protocol using an off-line recovery method, examine its security and fairness, measure its execution performance, and finally outline our conclusions along with some future work.
APA, Harvard, Vancouver, ISO, and other styles
27

Hsu, Chung-Min, and 徐崇閔. "Pricing Strategy Evolution of Cloud Computing– Agent-Based Model." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/2m4yd2.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Information Management
102
In recent years, cloud computing has been a major development in the IT industry. As the relevant application technologies have gradually matured, more and more vendors have entered the cloud industry, and competition in the cloud services market has been accompanied by price competition, making pricing strategy significantly more important. However, because there is no fixed pricing strategy model, selecting an appropriate pricing strategy is difficult. Pricing strategy affects a firm's profits and long-term development; this is especially evident in the emerging cloud computing industry, where it shapes the positioning of cloud service providers and operators. This study uses an agent-based model to simulate vendors' pricing behaviour step by step and thereby examine the pricing strategies of cloud computing service vendors. An agent-based model is constructed in which each agent's individual elements are encoded and evolved by a genetic algorithm, so that vendors learn from one another in the market, improve their pricing behaviour, and select the pricing strategy best suited to the current market environment. The model is designed to capture the interaction of vendors' pricing strategies and to simulate pricing behaviour in the market, thereby serving as a reference for cloud service vendors when setting pricing strategies.
APA, Harvard, Vancouver, ISO, and other styles
28

Ke, Hung-i., and 柯鴻儀. "A multi-agent-based distributed computing environment for bioinformatics applications." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/3pnre2.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Information Management
97
Bioinformatics computing consumes huge computing resources. Given the difficulty of improving the algorithms and the high cost of mainframes, many researchers choose distributed computing as an approach for reducing computing time. When using distributed computing for bioinformatics, finding a proper task-allocation strategy among different computing nodes to maintain load balancing is an important issue. By adopting a multi-agent system as a tool, a system developer can design task-allocation strategies from an intuitive viewpoint and keep the load balanced among computing nodes. The purpose of our research is to use a multi-agent system as an underlying tool to develop a distributed computing environment and assist researchers in solving bioinformatics computing problems. In comparison with public computing projects such as BOINC, our work focuses on utilizing computing nodes deployed inside an organization and connected by a local area network.
APA, Harvard, Vancouver, ISO, and other styles
29

HSU, Yu-Kuei, and 許育魁. "Constructing an Agent-based Single Sign-On Scheme for Cloud Computing Services." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/43257006465484933569.

Full text
Abstract:
Master's thesis
Da-Yeh University
Master's Program, Department of Information Management
100
Nowadays there are hundreds of cloud services, and users need only register one account and password to access services on each platform through a browser at any time. Although cloud systems bring many advantages, users still worry about how information security and confidentiality can be ensured. The international consortium OASIS has defined the Security Assertion Markup Language standard, which supports single sign-on by redirecting users to an authentication server for authentication. This approach, however, not only increases server load and consumes considerable bandwidth, but is also vulnerable to replay and man-in-the-middle attacks, and thus cannot protect confidential information and personal privacy. The main purpose of this thesis is to integrate a secure agent platform with cloud services to build a single sign-on system under a trust model. By having mobile agents carry users' information, the number of communication rounds between a user and the host is reduced, and the proposed scheme can avoid a variety of malicious attacks. In summary, the proposed scheme can enhance the security of single sign-on in cloud service environments, protect users' privacy, and reduce network traffic in distributed environments, thus improving overall service quality and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
30

Deng, Lawrence Y., and 鄧有光. "Using Food Web as an Evolution Computing Model for Internet-Based Mobile Agent." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/30141694748908747660.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Computer Science and Information Engineering
87
The ecosystem is an evolutionary result of natural laws, and a food web (or food chain) embeds a set of computation rules of natural balance. Based on the concept of the food web, one of the laws we may learn from nature besides neural networks and genetic algorithms, we propose a theoretical model of mobile agent evolution on the Internet. We define an agent niche overlap graph and agent evolution states. We also propose a set of algorithms, used in our multimedia search programs, to simulate agent evolution. Agents are cloned to live on remote host stations according to three different strategies: the brute-force strategy, the semi-brute-force strategy, and the selective strategy. The evolution of the different strategies is discussed, and guidelines for writing mobile agent programs are given. The proposed technique can be used in distributed information retrieval, allowing the computation load to be shifted to servers and significantly reducing network communication traffic. To date, it is still hard to find similar models in the software agent literature. The results of our research address only a small portion of this ice field, but we hope that the problem will be further studied in the communities of network communication, multimedia information retrieval, and intelligent systems on the Internet.
APA, Harvard, Vancouver, ISO, and other styles
31

Park, Anthony Sang-Bum [Verfasser]. "A service based agent system supporting mobile computing / vorgelegt von Anthony Sang-Bum Park." 2004. http://d-nb.info/971019843/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Chou, Yi-Hsiang, and 周宜興. "The Design and Implementation of a Mobile Agent-based Framework for Context-aware Computing." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/78678935013586465631.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
90
With the proliferation of all kinds of heterogeneous wireless networks, the scenarios for accessing networks have gradually become complicated. Context-aware computing has therefore emerged to give applications the ability to execute on heterogeneous mobile computing platforms and achieve the goal of pervasive computing. This thesis focuses on the current wireless network environment for pervasive computing and proposes a context-aware framework that is based on a mobile agent platform and integrates context awareness with dynamic reconfiguration of software components, so that applications possess adaptability and transparency across heterogeneous devices and can execute in diverse network environments.
APA, Harvard, Vancouver, ISO, and other styles
33

Chu, Wen-chen, and 朱文禎. "The Emergence of Software Component Electronic Marketplaces: Through An Agent-based Computing Economics Approach." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/03773946412548313716.

Full text
Abstract:
Doctoral dissertation
National Chengchi University
Graduate Institute of Information Management
91
Software reuse plays a vital role in responding to software crises in software evolution. The emergence of software component e-marketplaces is a great milestone, providing a core infrastructure for software reuse. This study, involving the features of software components, transaction costs, and satisfaction-trust relations, aims to understand why software component e-marketplaces emerge and to demonstrate how they do so. The model allows agents to develop their trust in the market as a function of the continuation of a satisfying relationship, through an agent-based computational economics approach with genetic programming. The findings show that, under market power, agents with prudent strategies tend to dominate the market as e-marketplaces evolve. In addition, the lower the functional particularity of a component, the higher the buying rate. When satisfaction attitude is taken into consideration, the buying rate of recall-satisfied agents lies between that of low-satisfied agents and that of high-satisfied agents. Moreover, comparing the three types of trust function, the buying rate of high-trust agents is higher than that of low-trust agents, and the buying rate of low-trust agents is higher than that of no-trust agents. Similarly, the ordering of buying rates is strongly influenced by the type of trust function in the catalog market and the loyal catalog market. Meanwhile, almost all high-trust agents exhibit continuous and loyal trade behaviour; either continuous or temporary trade behaviour is usually found among low-trust agents; and tentative trade behaviour is seen among almost all no-trust agents. In other words, it evidently takes more time for no-trust agents to accumulate trust from their possible trade partners.
Keywords: Software component electronic marketplaces; Transaction costs; Genetic programming (GP); Agent-based computational economics (ACE); Trust; Emergence
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Hao Lan. "Agent-based open connectivity for decision support systems." 2007. http://eprints.vu.edu.au/1453/1/zhang.pdf.

Full text
Abstract:
One of the major problems that discourages the development of Decision Support Systems (DSSs) is the un-standardised DSS environment. Computers that support modern business processes are no longer stand-alone systems, but have become tightly connected both with each other and their users. Therefore, having a standardised environment that allows different DSS applications to communicate and cooperate is crucial. The integration difficulty is the most crucial problem that affects the development of DSSs; an open and standardised environment for integrating various DSSs is therefore required. Despite the critical need for an open architecture in DSS designs, present DSS architectural designs are unable to provide a fundamental solution that enhances the flexibility, connectivity, compatibility, and intelligence of a DSS. The emergence of intelligent agent technology fulfils the requirements of developing innovative and efficient DSS applications, as intelligent agents offer various advantages, such as mobility, flexibility, and intelligence, to tackle the major problems in existing DSSs. Although various agent-based DSS applications have been suggested, most of these applications are unable to balance manageability with flexibility. Moreover, most existing agent-based DSSs are based on agent-coordinated design mechanisms, and often overlook the living environment for agents. This can cause difficulties in cooperating and upgrading agents, because agent-based coordination mechanisms have limited capabilities to provide agents with relatively comprehensive information about global system objectives. This thesis proposes a novel multi-agent-based architecture for DSSs, called Agent-based Open Connectivity for Decision support systems (AOCD). The AOCD architecture adopts a hybrid agent network topology that makes use of a unique feature called the Matrix-agent connection. The novel component, the Matrix, provides a living environment for agents; it allows agents to upgrade themselves through interacting with the Matrix. This architecture is able to overcome the difficulties in concurrency control and synchronous communication that plague many decentralised systems. Performance analysis has been carried out on this framework, and we find that it is able to provide a high degree of flexibility and efficiency compared with other frameworks. The thesis explores the detailed design of the AOCD framework and its major components, including the Matrix, agents, and the unified Matrices structure. The proposed framework is able to enhance system reusability and maximise system performance. By using a set of interoperable autonomous agents, more creative decision-making can be accomplished than with a hard-coded programmed approach. In this research, we systematically classified agent network topologies and developed an experimental program to evaluate system performance based on three different agent network topologies. The experimental results present evidence that the hybrid topology is efficient in the AOCD framework design. Furthermore, a novel topological description language for agent networks (TDLA) has been introduced in this research, which provides an efficient mechanism for agents to perceive information about their interconnected network. A new Agent-Rank algorithm is introduced in the thesis in order to provide an efficient matching mechanism for agent cooperation. The computational results, based on our recently developed program for agent matchmaking, demonstrate the efficiency and effectiveness of the Agent-Rank algorithm in the agent-matching and re-matching processes.
APA, Harvard, Vancouver, ISO, and other styles
35

Yan, Xue-Hui, and 顏學回. "The Design of An Interactive Scenario-Based Multi-Agent Architecture for Supporting Mobile Computing Environment." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/86163398799431065206.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Information Engineering
94
With the development of software agents, agent technology has been applied in many different areas; for example, many applications of software agent technology can be found in e-commerce and network management. Because agents have internal states and behaviours, they can autonomously help users complete tasks, which is why software agents have become an important information technology. A key challenge during system development is therefore the mechanism for controlling agent behaviours. Generally speaking, there are two such mechanisms: internal states and scenario mechanisms. Regarding scenario mechanisms, prior work has proposed an interactive scenario mechanism that controls the interactive behaviours between agents and humans, and it has been successfully applied to network marketing. From this research we find that software developers have to design extra software components to support an interactive scenario mechanism, and the mechanism will not work correctly if developers ignore the design of these components. However, no methodology for identifying these necessary software components can be found in past research. Consequently, the purpose of this research is to propose an interactive scenario-based multi-agent architecture as a reference architecture for software developers. In addition, because the topic of eldercare has become increasingly important, this research develops an eldercare interactive agent system based on the proposed architecture as a demonstration system.
APA, Harvard, Vancouver, ISO, and other styles
36

Sankaranarayanan, Suresh. "Studies in agent based IP traffic congestion management in diffserv networks." 2006. http://arrow.unisa.edu.au:8081/1959.8/46358.

Full text
Abstract:
The motivation for the research was to develop a rule-based traffic management scheme for DiffServ networks with a view to introducing QoS (Quality of Service). This required defining rules for congestion management and control based on the type and nature of the IP traffic encountered, and then constructing and storing these rules for future access, application and enforcement. We first developed the required rule base and then developed software-based mobile agents, using the Java (RMI) application package, for accessing these rules for application and enforcement. These mobile agents consequently act as smart traffic managers at nodes/routers in the computer-based communication network and manage congestion.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Hao Lan. "Agent-based open connectivity for decision support systems." Thesis, 2007. https://vuir.vu.edu.au/1453/.

Full text
Abstract:
One of the major problems that discourages the development of Decision Support Systems (DSSs) is the un-standardised DSS environment. Computers that support modern business processes are no longer stand-alone systems, but have become tightly connected both with each other and their users. Therefore, having a standardised environment that allows different DSS applications to communicate and cooperate is crucial. The integration difficulty is the most crucial problem that affects the development of DSSs; an open and standardised environment for integrating various DSSs is therefore required. Despite the critical need for an open architecture in DSS designs, present DSS architectural designs are unable to provide a fundamental solution that enhances the flexibility, connectivity, compatibility, and intelligence of a DSS. The emergence of intelligent agent technology fulfils the requirements of developing innovative and efficient DSS applications, as intelligent agents offer various advantages, such as mobility, flexibility, and intelligence, to tackle the major problems in existing DSSs. Although various agent-based DSS applications have been suggested, most of these applications are unable to balance manageability with flexibility. Moreover, most existing agent-based DSSs are based on agent-coordinated design mechanisms, and often overlook the living environment for agents. This can cause difficulties in cooperating and upgrading agents, because agent-based coordination mechanisms have limited capabilities to provide agents with relatively comprehensive information about global system objectives. This thesis proposes a novel multi-agent-based architecture for DSSs, called Agent-based Open Connectivity for Decision support systems (AOCD). The AOCD architecture adopts a hybrid agent network topology that makes use of a unique feature called the Matrix-agent connection. The novel component, the Matrix, provides a living environment for agents; it allows agents to upgrade themselves through interacting with the Matrix. This architecture is able to overcome the difficulties in concurrency control and synchronous communication that plague many decentralised systems. Performance analysis has been carried out on this framework, and we find that it is able to provide a high degree of flexibility and efficiency compared with other frameworks. The thesis explores the detailed design of the AOCD framework and its major components, including the Matrix, agents, and the unified Matrices structure. The proposed framework is able to enhance system reusability and maximise system performance. By using a set of interoperable autonomous agents, more creative decision-making can be accomplished than with a hard-coded programmed approach. In this research, we systematically classified agent network topologies and developed an experimental program to evaluate system performance based on three different agent network topologies. The experimental results present evidence that the hybrid topology is efficient in the AOCD framework design. Furthermore, a novel topological description language for agent networks (TDLA) has been introduced in this research, which provides an efficient mechanism for agents to perceive information about their interconnected network. A new Agent-Rank algorithm is introduced in the thesis in order to provide an efficient matching mechanism for agent cooperation. The computational results, based on our recently developed program for agent matchmaking, demonstrate the efficiency and effectiveness of the Agent-Rank algorithm in the agent-matching and re-matching processes.
APA, Harvard, Vancouver, ISO, and other styles
38

Konur, Savas, L. M. Mierla, F. Ipate, and Marian Gheorghe. "kPWorkbench: A software suit for membrane systems." 2020. http://hdl.handle.net/10454/18044.

Full text
Abstract:
Yes
Membrane computing is a new natural computing paradigm inspired by the functioning and structure of biological cells, and has been successfully applied to many different areas, from biology to engineering. In this paper, we present kPWorkbench, a software framework developed to support membrane computing and its applications. kPWorkbench offers unique features, including modelling, simulation, agent-based high performance simulation and verification, which allow modelling and computational analysis of membrane systems. The kPWorkbench formal verification component provides the opportunity to analyse the behaviour of a model and validate that important system requirements are met and certain behaviours are observed. The platform also features a property language based on natural language statements to facilitate property specification.
EPSRC
APA, Harvard, Vancouver, ISO, and other styles
39

Konur, Savas, L. Mierla, F. Ipate, and Marian Gheorghe. "kPWorkbench: A software suit for membrane systems." 2001. http://hdl.handle.net/10454/18044.

Full text
Abstract:
Yes
Membrane computing is a new natural computing paradigm inspired by the functioning and structure of biological cells, and has been successfully applied to many different areas, from biology to engineering. In this paper, we present kPWorkbench, a software framework developed to support membrane computing and its applications. kPWorkbench offers unique features, including modelling, simulation, agent-based high performance simulation and verification, which allow modelling and computational analysis of membrane systems. The kPWorkbench formal verification component provides the opportunity to analyse the behaviour of a model and validate that important system requirements are met and certain behaviours are observed. The platform also features a property language based on natural language statements to facilitate property specification.
EPSRC
APA, Harvard, Vancouver, ISO, and other styles
40

Kiran, Mariam. "Modelling Cities as a collection of TeraSystems - Computational challenges in Multi-Agent Approach." 2015. http://hdl.handle.net/10454/9056.

Full text
Abstract:
Yes
Agent-based modeling techniques are ideal for modeling massive complex systems such as insect colonies, biological cellular systems and even cities. However, these models are themselves extremely complex to code, test, simulate and analyze. This paper discusses the challenges in using agent-based models to model complete cities as a complex system. We argue that cities are actually collections of various complex models, each itself a massive system of millions of agents, working together to form one system consisting of on the order of a billion agents of different types, such as people, communities and technologies, interacting together. Because of the agent numbers and complexity challenges, present-day hardware architectures are unable to cope with the simulation and processing of these models. To accommodate these issues, this paper proposes a Tera (to denote the order of millions)-modeling framework, which utilizes current Cloud computing and Big Data processing technologies for modeling a city by allowing effectively unlimited resources and complex interactions. This paper also makes the case for bringing research communities together for interdisciplinary research to build a complete, reliable model of a city.
APA, Harvard, Vancouver, ISO, and other styles
41

Lall, Manoj. "Selection of mobile agent systems based on mobility, communication and security aspects." Diss., 2005. http://hdl.handle.net/10500/2397.

Full text
Abstract:
The availability of numerous mobile agent systems, each with its own strengths and weaknesses, poses a problem when deciding on a particular mobile agent system. In this dissertation, factors based on the mobility, communication and security of mobile agent systems are presented and used as a means to address this problem. To facilitate the selection process, a grouping scheme for agent systems is proposed. Based on this grouping scheme, mobile agent systems with common properties are grouped together and analysed against the above-mentioned factors. In addition, an application was developed using the Aglet Software Development Toolkit to demonstrate certain features of agent mobility, communication and security.
Theoretical Computing
M. Sc. (Computer Science)
APA, Harvard, Vancouver, ISO, and other styles
42

Konur, Savas, and Marian Gheorghe. "Proceedings of the Workshop on Membrane Computing, WMC 2016." 2016. http://hdl.handle.net/10454/8840.

Full text
Abstract:
yes
This Workshop on Membrane Computing, at the Conference on Unconventional Computation and Natural Computation (UCNC), 12th July 2016, Manchester, UK, is the second event of this type, after the Workshop at UCNC 2015 in Auckland, New Zealand*. Following the tradition of the 2015 Workshop, the proceedings are published as a technical report. The Workshop consisted of one invited talk and six contributed presentations (three full papers and three extended abstracts) covering a broad spectrum of topics in Membrane Computing, from computational and complexity theory to formal verification, simulation and applications in robotics. All these papers listed below, except the last extended abstract, are included in this volume. The invited talk given by Rudolf Freund, “P Systems Working in Set Modes”, presented a general overview of basic topics in the theory of Membrane Computing as well as new developments and future research directions in this area. Radu Nicolescu in “Distributed and Parallel Dynamic Programming Algorithms Modelled on cP Systems” presented an interesting dynamic programming algorithm in a distributed and parallel setting, based on P systems enriched with adequate data structures and programming concept representations. Omar Belingheri, Antonio E. Porreca and Claudio Zandron showed in “P Systems with Hybrid Sets” that P systems with negative multiplicities of objects are less powerful than Turing machines. Artiom Alhazov, Rudolf Freund and Sergiu Ivanov presented in “Extended Spiking Neural P Systems with States” new results regarding the newly introduced topic of spiking neural P systems in which states are considered. “Selection Criteria for Statistical Model Checker”, by Mehmet E. Bakir and Mike Stannett, presented some early experiments in selecting adequate statistical model checkers for biological systems modelled with P systems. In “Towards Agent-Based Simulation of Kernel P Systems using FLAME and FLAME GPU”, Raluca Lefticaru, Luis F. Macías-Ramos, Ionuţ M. Niculescu and Laurenţiu Mierlă presented some of the advantages of implementing kernel P systems simulations in FLAME. Andrei G. Florea and Cătălin Buiu, in “An Efficient Implementation and Integration of a P Colony Simulator for Swarm Robotics Applications”, presented an interesting and efficient implementation based on P colonies for swarms of Kilobot robots. *http://ucnc15.wordpress.fos.auckland.ac.nz/workshop-on-membrane-computingwmc-at-the-conference-on-unconventional-computation-natural-computation/
APA, Harvard, Vancouver, ISO, and other styles
43

Capurso, Giovanna. "Object (B)logging: a Semantic-Based Self-Description for Cyber-Physical Systems." Doctoral thesis, 2020. http://hdl.handle.net/11589/191893.

Full text
Abstract:
Pervasive computing deals with heterogeneous mobile agents attached to ubiquitous micro-devices. In such scenarios, what one agent knows about the environment depends on the perception components it uses or has access to, and it can differ significantly from another agent's knowledge. Furthermore, transient conditions and uncertainty affect perception and communication, aggravating the need to cope with the lack of complete and reliable information. Current solutions in the Internet of Things (IoT) are mostly based on centralized data collection and analysis and on top-down agent orchestration, with obvious limitations in latency, connection availability and data confidentiality. This thesis proposes a novel distributed knowledge-based framework, named object (b)logging, to tackle the above issues. The approach is conceived as a general-purpose evolution of the IoT, able to associate semantic annotations to real-world objects and events as well as to trigger complex object choreography through advanced resource discovery. It envisions several smart entities organized in social networks, interacting autonomously, sharing information, and cooperating and orchestrating resources through a published micro-blog. Ontology-referred context annotations produced and shared by individual smart objects in mobile ad-hoc networks are merged by means of novel Concept Fusion and enhanced Concept Integration reasoning services in Description Logics, specifically devised for context-aware multi-agent systems and tailored to resource-constrained devices. Management of incomplete information, reconciliation of inconsistencies in context descriptions, quick adaptation to changes and robustness against spurious or inaccurate information allow a node to progressively enrich its core knowledge in a private micro-log.
The node then becomes able to identify on-the-fly the task(s) needed to change its own configuration or the environment state, and to automatically infer what useful capabilities it can provide to, or needs from, other entities in order to enact them, in a decentralized and collaborative fashion. A novel semantic-enhanced blockchain infrastructure underlies the dissemination, discovery and selection of services and resources. These tasks have been recast as smart contracts with opportunistic and distributed execution, exploiting validation by consensus. The introduced paradigm ideally applies to pervasive cyber-physical systems, where several mobile heterogeneous micro-devices cooperate to connote and appropriately modify the environment they are immersed in, as demonstrated by relevant case studies and extensive experimental evaluations.
APA, Harvard, Vancouver, ISO, and other styles
