Academic literature on the topic 'ADMA [Autonomous Decentralized Management Architecture]'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'ADMA [Autonomous Decentralized Management Architecture].'


Journal articles on the topic "ADMA [Autonomous Decentralized Management Architecture]"

1

Scholz, Michael, Sven Kreitlein, and Jörg Franke. "E|Flow - Decentralized Computer Architecture and Simulation Models for Sustainable and Resource Efficient Intralogistics." Applied Mechanics and Materials 856 (November 2016): 117–22. http://dx.doi.org/10.4028/www.scientific.net/amm.856.117.

Abstract:
Material flow in factories today is realized by different transport concepts, each with its own pros and cons. In general, state-of-the-art transport systems offer little flexibility in path planning and are unsuited to dynamic transport requirements, since they are designed for a specific application. Common systems cover a specific transportation task and can fulfill a predefined maximum number of transport orders. Mass customization, however, increases product variance while reducing the number of units per variant, and customer demand is volatile. The next generations of production lines, and especially their intralogistics transport systems, therefore have to be designed to be more adaptable and flexible. The object of the research in this paper is a cyber-physical material flow system with flexible, autonomous, and collaborative vehicles combined with centralized sensors that digitalize the workspace. Furthermore, the number of vehicles in the system can be adjusted to the transport volume, making the system suitable for different intralogistics tasks. Thanks to the decentralized digitalization of the workspace on the one hand and the decentralized architecture of the path-planning and order-allocation system on the other, the concept leads to nearly unlimited scalability, restricted only by the maximum number of entities the communication system can support. The system can therefore adjust itself to the actual intralogistics demand as well as to the dimensions of the field of operation. This yields a self-adjusting intralogistics transport system that avoids a physical redesign of the whole system when the intralogistics demand changes.
To validate the approach, the decentralized intelligence of the transport entities and production units is implemented in a discrete-event simulation. In this simulation environment, different task-allocation methods, transport fleet sizes, lot-size management concepts, and site layout concepts can be compared and rated against each other.
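The decentralized order allocation described in this abstract can be illustrated with a small sketch. The cost model (Manhattan distance), vehicle names, and positions are illustrative assumptions, not the paper's actual discrete-event simulation:

```python
# Minimal sketch of decentralized order allocation among autonomous
# transport vehicles: each vehicle computes its own bid for an order
# (here: distance to the pickup point) and the lowest bidder wins,
# with no central dispatcher holding a global view.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

class Vehicle:
    def __init__(self, name, pos):
        self.name = name
        self.pos = pos
        self.orders = []

    def bid(self, pickup):
        # A vehicle's bid is simply its travel cost to the pickup point.
        return manhattan(self.pos, pickup)

def allocate(order_pickup, fleet):
    # Decentralized auction: every vehicle bids; cheapest bidder wins.
    winner = min(fleet, key=lambda v: v.bid(order_pickup))
    winner.orders.append(order_pickup)
    return winner

fleet = [Vehicle("AGV-1", (0, 0)), Vehicle("AGV-2", (5, 5))]
winner = allocate((1, 1), fleet)
```

Adding or removing vehicles only changes the set of bidders, which mirrors the scalability argument made in the abstract.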
2

Mastilak, Lukas, Marek Galinski, Pavol Helebrandt, Ivan Kotuliak, and Michal Ries. "Enhancing Border Gateway Protocol Security Using Public Blockchain." Sensors 20, no. 16 (August 11, 2020): 4482. http://dx.doi.org/10.3390/s20164482.

Abstract:
Communication on the Internet, consisting of a massive number of Autonomous Systems (AS), depends on routing based on the Border Gateway Protocol (BGP). Routers generally trust the veracity of information in BGP updates from their neighbors, as with many other routing protocols. However, this trust leaves the whole system vulnerable to multiple attacks, such as BGP hijacking. Several solutions have been proposed to increase the security of the BGP routing protocol, most based on centralized Public Key Infrastructure, but their adoption has been relatively slow. Additionally, these solutions are open to attack on the centralized system itself. Decentralized alternatives utilizing blockchain to validate BGP updates have recently been proposed. The distributed nature of blockchain and its trustless environment increase the overall system security and conform to the distributed character of BGP. All of the techniques based on blockchain concentrate on inspecting incoming BGP updates only. In this paper, we improve on these by modifying an existing architecture for the management of network devices. The original architecture adopted a private blockchain implementation, Hyperledger. In contrast, we use the public blockchain Ethereum, more specifically the Ropsten testing environment. Our solution provides a modular design for the management of AS border routers. It enables verification of prefixes even before any router sends BGP updates announcing them. Thus, we eliminate fraudulent BGP origin announcements from the AS deploying our solution. Furthermore, blockchain provides storage options for the configurations of edge routers and keeps an irrefutable history of all changes. We can analyze router settings history to detect whether a router advertised incorrect information, when, and for how long.
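The verification step the paper describes can be sketched in a few lines. The registry is modelled here as a plain dict with made-up prefixes and AS numbers; in the paper it lives in an Ethereum (Ropsten) contract, which is not reproduced here:

```python
# Sketch of blockchain-validated BGP origins: before accepting an
# announcement, look the prefix up in a registry mapping
# prefix -> authorized origin AS. A mismatch indicates a hijack.

registry = {
    "192.0.2.0/24": 64500,    # prefix authorized for AS64500
    "198.51.100.0/24": 64501, # prefix authorized for AS64501
}

def verify_origin(prefix, origin_as):
    """Return True only if the prefix is registered to this origin AS."""
    return registry.get(prefix) == origin_as

# A legitimate announcement, and a hijack attempt by AS64502
# announcing a prefix registered to AS64500.
legit = verify_origin("192.0.2.0/24", 64500)
hijack = verify_origin("192.0.2.0/24", 64502)
```

The same lookup can run on the announcing side, which is the paper's point: fraudulent origin announcements are filtered before they ever leave the deploying AS.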
3

Nayyar, Anand, Rudra Rameshwar, and Piyush Kanti Dutta. "Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies." Scalable Computing: Practice and Experience 20, no. 2 (May 2, 2019): iii–vi. http://dx.doi.org/10.12694/scpe.v20i2.1558.

Abstract:
Recent Trends and Future of Fog and Edge Computing, Services, and Enabling Technologies. Cloud computing is established as the most popular and suitable computing infrastructure, providing on-demand, scalable, pay-as-you-go computing resources and services for state-of-the-art ICT applications that generate massive amounts of data. Though the Cloud is certainly the most fitting solution for most applications with respect to processing capability and storage, it may not be so for real-time applications. The main problem with the Cloud is latency, as Cloud data centres are typically far from both the data sources and the data consumers. This latency is acceptable for application domains such as enterprise or web applications, but not for modern Internet of Things (IoT)-based pervasive and ubiquitous application domains such as autonomous vehicles, smart and pervasive healthcare, real-time traffic monitoring, unmanned aerial vehicles, smart buildings, smart cities, smart manufacturing, cognitive IoT, and so on. The prerequisite for these types of applications is that the latency between data generation and consumption be minimal. For that, the generated data need to be processed locally instead of being sent to the Cloud. This approach is known as Edge computing: data processing is done at the network edge in edge devices such as set-top boxes, access points, routers, switches, and base stations, which are typically located at the edge of the network. These devices are increasingly being equipped with significant computing and storage capacity to cater to the need for local Big Data processing. The enabling of Edge computing can be attributed to emerging network technologies, such as 4G and cognitive radios, high-speed wireless networks, and energy-efficient, sophisticated sensors. Different Edge computing architectures have been proposed (e.g., Fog computing, mobile edge computing (MEC), cloudlets, etc.).
All of these enable IoT and sensor data to be processed closer to the data sources. Among them, Fog computing, a Cisco initiative, has attracted the most attention from both academia and industry and has emerged as a new computing-infrastructure paradigm in recent years. Though Fog computing has been proposed as a computing architecture distinct from the Cloud, it is not meant to replace the Cloud. Rather, Fog computing extends Cloud services to the network edge, providing computation, networking, and storage services between end devices and data centres. Ideally, Fog nodes (edge devices) pre-process the data, serve the needs of the associated applications preliminarily, and forward the data to the Cloud if the data need to be stored and analysed further. Fog-enabled services can be deployed anywhere in the network, and with their provisioning and management there is huge potential to enhance intelligence within computing networks, realizing context-awareness, fast response times, and network traffic offloading. Several applications of Fog computing are already established, for example sustainable smart cities, smart grids, smart logistics, environment monitoring, and video surveillance. To design and implement Fog computing systems, various challenges need to be addressed concerning system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, efficient algorithms and protocols, availability and reliability, security and privacy, and energy efficiency and sustainability.
Also, to make Fog compatible with the Cloud, several factors such as Fog-Cloud system integration, service collaboration between Fog and Cloud, and workload balance between Fog and Cloud need to be taken care of. It is our great privilege to present before you Volume 20, Issue 2 of Scalable Computing: Practice and Experience. We received 20 research papers, of which 14 were selected for publication. The aim of this special issue is to highlight recent trends and the future of Fog and Edge computing, services, and enabling technologies, and to present new dimensions of research to researchers and industry professionals with regard to Fog, Cloud, and Edge computing. Sujata Dash et al. contributed a paper titled "Edge and Fog Computing in Healthcare- A Review", in which an in-depth review of fog and mist computing in the area of health care informatics is analysed, classified, and discussed. The review focuses on three main aspects: the requirements of an IoT-based healthcare model and the fog computing services that address them; the architecture of an IoT-based health care system embedding a fog computing layer; and the implementation of fog-layer services along with their performance and advantages. In addition, the researchers highlight the trade-offs in allocating computational tasks across network levels and elaborate on various challenges and security issues of fog and edge computing related to healthcare applications. Parminder Singh et al., in the paper titled "Triangulation Resource Provisioning for Web Applications in Cloud Computing: A Profit-Aware", proposed a novel triangulation resource provisioning (TRP) technique with a profit-aware surplus VM selection policy to ensure fair resource utilization within the hourly billing cycle while providing quality of service to end users.
The proposed technique uses time-series workload forecasting, CPU utilization, and response time in the analysis phase. It is tested using the CloudSim simulator, with R used to implement the prediction model on the ClarkNet web log, and is compared with two baseline approaches, i.e. cost-aware (LRM) and (ARMA). Response time, CPU utilization, and predicted requests are applied in the analysis and planning phases for scaling decisions, and the profit-aware surplus VM selection policy is used in the execution phase to select the appropriate VM to scale down. The results show that the proposed model for web applications provides fair utilization of resources at minimum cost, thus maximizing profit for the application provider and QoE for end users. Akshi Kumar and Abhilasha Sharma, in the paper titled "Ontology driven Social Big Data Analytics for Fog enabled Sentic-Social Governance", utilized a semantic knowledge model to investigate public opinion towards the adoption of fog-enabled services for governance and to comprehend the significance of the two s-components (sentic and social) in the aforesaid structure, which together visualize fog-enabled Sentic-Social Governance. Results using conventional TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction are empirically compared with ontology-driven TF-IDF feature extraction to find the opinion-mining model with optimal accuracy. The results show that ontology-driven feature extraction for polarity classification outperforms the traditional TF-IDF method, validated over baseline supervised learning algorithms, with an average improvement of 7.3% in accuracy and a reduction of approximately 38% in features.
Avinash Kaur and Pooja Gupta, in the paper titled "Hybrid Balanced Task Clustering Algorithm for Scientific workflows in Cloud Computing", proposed a novel hybrid balanced task-clustering algorithm using the impact factor of workflows along with the workflow structure; with this technique, tasks can be clustered either vertically or horizontally based on the value of the impact factor. The proposed algorithm was tested on WorkflowSim, an extension of CloudSim, executing a DAG model of the workflow. It was evaluated on workflow execution time and performance gain and compared with four clustering methods: Horizontal Runtime Balancing (HRB), Horizontal Clustering (HC), Horizontal Distance Balancing (HDB), and Horizontal Impact Factor Balancing (HIFB); the results show that the proposed algorithm improves the makespan of the workflow by roughly 5-10%, depending on the workflow used. Pijush Kanti Dutta Pramanik et al., in the paper titled "Green and Sustainable High-Performance Computing with Smartphone Crowd Computing: Benefits, Enablers and Challenges", presented a comprehensive statistical survey of commercial CPUs, GPUs, and SoCs for smartphones, confirming the capability of SCC as an alternative to HPC. An exhaustive survey is also presented on the present and future of continuous improvement and research on different aspects of smartphone batteries and other alternative power sources, which will allow users to use their smartphones for SCC without worrying about the battery running out. Dhanapal and P. Nithyanandam, in the paper titled "The Slow HTTP Distributed Denial of Service (DDOS) Attack Detection in Cloud", proposed a novel method to detect slow HTTP DDoS attacks in the cloud, countering attacks that consume all available server resources and make them unavailable to real users. The proposed method is implemented on the OpenStack cloud platform with the slowHTTPTest tool.
The results show that the proposed technique detects the attack efficiently. Mandeep Kaur and Rajni Mohana, in the paper titled "Static Load Balancing Technique for Geographically partitioned Public Cloud", proposed a novel approach to load balancing in a partitioned public cloud that combines centralized and decentralized approaches, assuming the presence of a fog layer. A load-balancer entity performs decentralized load balancing at the partitions, while a controller entity balances the overall load across partitions at the centralized level. Results are compared with the First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms; in this work, the researchers compared the waiting time, finish time, and actual run time of tasks under these algorithms. To reduce the number of unhandled jobs, a new load state is introduced that checks load beyond the conventional load states. The major objective of this approach is to reduce the need for runtime virtual machine migration and the wastage of resources that can occur due to predefined threshold values. Mukta and Neeraj Gupta, in the paper titled "Analytical Available Bandwidth Estimation in Wireless Ad-Hoc Networks considering Mobility in 3-Dimensional Space", proposed an analytical approach named Analytical Available Bandwidth Estimation Including Mobility (AABWM) to estimate available bandwidth (ABW) on a link. The major contributions of the proposed work are: i) mathematical models based on renewal theory to calculate the collision probability of data packets, which makes the process simple and accurate, and ii) consideration of mobility in 3-D space to predict link failures and provide accurate admission control. Using the NS-2 simulator, the researchers compared AABWM with AODV, ABE, IAB, and IBEM on throughput, packet loss ratio, and data delivery; the results show that AABWM performs better than the other approaches.
R. Sridharan and S. Domnic, in the paper titled "Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment", proposed a novel heuristic algorithm, IcAPER (Inter-communication Aware Placement for Elastic Requests). The proposed algorithm uses machines in the network neighborhood for placement once the current resource is fully utilized by the application. The performance of the IcAPER algorithm is compared with the First Come First Serve (FCFS), Random, and First Fit Decreasing (FFD) algorithms using the CloudSim simulator on (a) resource utilization, (b) resource fragmentation, and (c) the number of requests whose intercommunicating tasks are placed on the same PM. Simulation results show that IcAPER maps 34% more tasks onto the same PM and increases resource utilization by 13% while decreasing resource fragmentation by 37.8% compared to the other algorithms. Velliangiri S. et al., in the paper titled "Trust factor based key distribution protocol in Hybrid Cloud Environment", proposed a novel security protocol comprising two stages: in the first stage, groups are created using trust factors and a key-distribution security protocol is developed, which handles the communication process among virtual machine nodes, with several groups formed based on clustering and trust-factor methods; in the second stage, an ECC (Elliptic Curve Cryptography)-based distribution security protocol is developed. The performance of the trust-factor-based key distribution protocol is compared with the existing ECC and Diffie-Hellman key exchange techniques; the results show that the proposed security protocol offers more secure communication and better resource utilization than the ECC and Diffie-Hellman key exchange techniques in the hybrid cloud. Vivek Kumar Prasad et al.,
in the paper titled "Influence of Monitoring: Fog and Edge Computing", discussed various techniques for monitoring edge and fog computing and their advantages, in addition to a case study based on a healthcare monitoring system. Avinash Kaur et al. elaborated a comprehensive view of the data placement schemes proposed in the literature for cloud computing, classifying them by their capabilities and objectives and comparing them. Parminder Singh et al. presented a comprehensive review of auto-scaling techniques for web applications in cloud computing, with a complete taxonomy of the reviewed articles across parameters such as auto-scaling approach, resources, monitoring tool, experiment, workload, and metric. Simar Preet Singh et al., in the paper titled "Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platform", proposed a novel scheme to improve user contentment by improving the cost-to-operation-length ratio, reducing customer churn, and boosting operational revenue. The proposed scheme is shown to reduce queue size by effectively allocating resources, which results in quicker completion of user workflows. The results are evaluated against a state-of-the-art non-power-aware task scheduling mechanism and analysed using the parameters energy, SLA infringement, and workflow execution delay; the performance of the proposed scheme was analysed in various experiments particularly designed to examine various aspects of workflow processing on given fog resources. The LRR model (35.85 kWh) was found to be the most efficient on the basis of average energy consumption, in comparison to LR (34.86 kWh), THR (41.97 kWh), MAD (45.73 kWh), and IQR (47.87 kWh). The LRR model was also the leader when compared on the basis of the number of VM migrations.
The LRR model (2520 VMs) was the best contender on the basis of the mean number of VM migrations, in comparison with LR (2555 VMs), THR (4769 VMs), MAD (5138 VMs), and IQR (5352 VMs).
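The latency-driven placement choice underlying the fog/edge discussion in this editorial can be sketched as a tiny decision rule. All latency figures and tier names below are illustrative assumptions, not measurements from any of the surveyed papers:

```python
# Sketch of the edge-vs-cloud trade-off behind fog computing:
# run a task at the cheapest tier whose round-trip latency still
# meets the task's deadline. Latencies here are illustrative.

EDGE_RTT_MS = 5      # assumed round-trip to a nearby fog node
CLOUD_RTT_MS = 120   # assumed round-trip to a distant data centre

def place(task_deadline_ms):
    """Pick the cheapest tier that still meets the deadline."""
    if task_deadline_ms >= CLOUD_RTT_MS:
        return "cloud"   # loose deadline: use big, cheap resources
    if task_deadline_ms >= EDGE_RTT_MS:
        return "edge"    # tight deadline: stay near the data source
    return "local"       # even the edge is too far: run on-device

assignments = {d: place(d) for d in (2, 50, 500)}
```

The rule captures why enterprise workloads stay in the cloud while IoT workloads with millisecond deadlines push processing toward fog nodes.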
4

Herlihy, Maurice, Barbara Liskov, and Liuba Shrira. "Cross-chain deals and adversarial commerce." VLDB Journal, August 20, 2021. http://dx.doi.org/10.1007/s00778-021-00686-1.

Abstract:
Modern distributed data management systems face a new challenge: how can autonomous, mutually distrusting parties cooperate safely and effectively? Addressing this challenge brings up familiar questions from classical distributed systems: how to combine multiple steps into a single atomic action, how to recover from failures, and how to synchronize concurrent access to data. Nevertheless, each of these issues requires rethinking when participants are autonomous and potentially adversarial. We propose the notion of a cross-chain deal, a new way to structure complex distributed computations that manage assets in an adversarial setting. Deals are inspired by classical atomic transactions, but are necessarily different, in important ways, to accommodate the decentralized and untrusting nature of the exchange. We describe novel safety and liveness properties, along with two alternative protocols for implementing cross-chain deals in a system of independent blockchain ledgers. One protocol, based on synchronous communication, is fully decentralized, while the other, based on semi-synchronous communication, requires a globally shared ledger. We also prove that some degree of centralization is required in the semi-synchronous communication model.
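The all-or-nothing property at the heart of a cross-chain deal can be shown with a toy model: every party escrows its asset, and the transfers happen only if all parties commit, otherwise everyone is refunded. This illustrates the safety goal only; the paper's actual timelock- and ledger-based protocols are not reproduced here, and all party and asset names are made up:

```python
# Toy model of a cross-chain deal's atomicity: assets are escrowed,
# and ownership changes only if every party commits. On any abort,
# each asset returns to its original owner.

def settle_deal(escrows, commits):
    """escrows: party -> escrowed asset; commits: party -> bool.
    Returns a mapping from each asset to its final owner."""
    parties = list(escrows)
    if all(commits[p] for p in parties):
        # Commit: each asset moves to the next party around the ring.
        return {escrows[p]: parties[(i + 1) % len(parties)]
                for i, p in enumerate(parties)}
    # Abort: every asset goes back to its original owner.
    return {escrows[p]: p for p in parties}

ok = settle_deal({"alice": "coinA", "bob": "coinB"},
                 {"alice": True, "bob": True})
aborted = settle_deal({"alice": "coinA", "bob": "coinB"},
                      {"alice": True, "bob": False})
```

The adversarial setting the paper addresses is exactly the gap this toy model hides: no single trusted function sees all escrows, so the protocols must achieve the same outcome across independent ledgers.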
5

Sarangi, Lokanath, and Chittaranjan Panda. "A Review on Intelligent Agent Systems." International Journal of Computer and Communication Technology, July 2011, 200–215. http://dx.doi.org/10.47893/ijcct.2011.1092.

Abstract:
A multi-agent system (MAS) is a common way of exploiting the potential power of agents by combining many agents in one system. Each agent in a multi-agent system has incomplete information and is incapable of solving the entire problem on its own. Multi-agent systems offer modularity: if a problem domain is particularly complex, large, and uncertain, one way to address it is to develop a number of function-specific, modular agents that are specialized at solving various problems individually. Such a system may also consist of heterogeneous agents implemented with different tools and techniques. A MAS can be defined as a loosely coupled network of problem solvers that interact to solve problems beyond the individual capabilities or knowledge of each problem solver. These problem solvers, often called agents, are autonomous and can be heterogeneous in nature. This review covers MAS characteristics, future applications, needed changes, problem-solving agents, tools and techniques, various architectures, multi-agent applications, and finally future directions and conclusions. Characteristic properties include a limited viewpoint, decentralization, asynchronous computation, and the use of genetic algorithms; the paper also notes drawbacks that must be addressed to make MAS more effective. On problem solving in MAS, the agent performance measure involves many factors, such as problem formulation, task allocation, and organization. On planning in multi-agent systems, this paper covers self-interested multi-agent interactions, modeling of other agents, managing communication, and effective allocation of limited resources to multiple agents. The use of appropriate tools makes agents more efficient at frequently performed tasks. The architecture of the MAS discussed follows three layers: explore, wander, and avoid obstacles, respectively; different task decompositions can yield various architectures such as BDI (Belief-Desire-Intention) and RETSINA.
Various applications of multi-agent systems exist today to solve real-life problems, and new systems are being developed in two distinct categories as well as many others, including process control, telecommunications, air traffic control, transportation systems, commercial management, electronic commerce, entertainment, and medical applications. Future prospects of MAS include solving problems that are too large for a single agent and allowing the interconnection and interoperation of multiple existing legacy systems.

Dissertations / Theses on the topic "ADMA [Autonomous Decentralized Management Architecture]"

1

Ayari, Mouna. "Architecture de gestion décentralisée de la qualité de service par politiques dans les réseaux Ad Hoc." Paris 6, 2008. http://www.theses.fr/2009PA066006.

Abstract:
In this thesis, we propose an autonomic, policy-based quality-of-service management architecture for ad hoc networks: ADMA (Autonomous Decentralized Management Architecture). The goal of our approach is to build a self-managed, dynamic, and adaptive system. To that end, we combine the characteristics of policy-based management with the self-configuration property of autonomic networks. No ad hoc node has a complete or global view of the network, and decisions are made in a fully decentralized manner. These decisions respect policies predefined by the network administrator, which are replicated on every node of the ad hoc network. We distinguish four policy classes: configuration policies, reconfiguration policies, monitoring policies, and meta-policies. We also propose a new policy-based management protocol operating in a decentralized peer-to-peer mode: DPMP (Distributed Policy Management Protocol). Our protocol provides two services: distributing policies across the ad hoc network, and collecting state information about a node and its environment. It also supports interaction between the various components of our ADMA architecture. We evaluate our proposal in two ways: formal verification and simulation. Formal modelling and verification allowed us to validate the design of the DPMP protocol against its specification, and the simulation results showed that our solution scales and is not sensitive to mobility or topological changes.
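The core ADMA idea, replicated policies evaluated locally with no central decision point, can be sketched as follows. The policy conditions, thresholds, and action names are illustrative inventions; the thesis's actual policy classes and DPMP protocol are not reproduced:

```python
# Sketch of decentralized policy-based management: every ad hoc node
# holds the same replicated policy set and derives its reconfiguration
# actions purely from locally monitored state, with no central PDP.

# Replicated policies: (condition on local state) -> action name.
POLICIES = [
    (lambda s: s["load"] > 0.8, "throttle-best-effort-traffic"),
    (lambda s: s["battery"] < 0.2, "reduce-beacon-rate"),
]

class Node:
    def __init__(self, name, state):
        self.name = name
        self.state = state   # locally monitored state only

    def decide(self):
        """Evaluate the replicated policies against local state."""
        return [action for cond, action in POLICIES if cond(self.state)]

n1 = Node("n1", {"load": 0.9, "battery": 0.5})
n2 = Node("n2", {"load": 0.3, "battery": 0.1})
actions = {n.name: n.decide() for n in (n1, n2)}
```

Because every node carries the same policies, each reaches administrator-conformant decisions independently, which is what makes the scheme insensitive to mobility: no node depends on reaching a central policy server.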

Book chapters on the topic "ADMA [Autonomous Decentralized Management Architecture]"

1

Dong, Dapeng, Huanhuan Xiong, Gabriel G. Castañé, and John P. Morrison. "A Decentralized Cloud Management Architecture Based on Application Autonomous Systems." In Cloud Computing and Service Science, 102–14. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94959-8_6.

2

Kumari, P. Lalitha Surya. "Blockchain-Autonomous Driving Systems." In Advances in Data Mining and Database Management, 87–114. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3295-9.ch006.

Abstract:
Blockchain is an emerging information technology that could have quite a lot of significant future applications. In this chapter, the communication network for a reliable environment of intelligent vehicle systems is considered, along with how blockchain technology creates a trust network among intelligent vehicles. The chapter also discusses the different factors affecting or motivating the automotive industry, data-driven intelligent transportation systems (D2ITS), the structure of VANETs, a blockchain-based framework for intelligent-vehicle data sharing used for intelligent vehicle communication, and decentralized autonomous vehicle (DAV) networks. It also covers the different ways autonomous vehicles use blockchain. The Block-VN distributed architecture is discussed in detail, along with the research challenges and the privacy and security of vehicular networks.
3

Kral, Jaroslav, and Michal Zemlicka. "Software Confederation - An Architecture for Global Systems and Global Management." In Global Information Technologies, 823–45. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-939-7.ch064.

Abstract:
Many software systems, especially large ones, tend to be virtual peer-to-peer (P2P) networks of permanent autonomous services (e.g., e-government should be supported by a network of the information systems of individual offices). The services are loosely coupled, and a service can join or leave the system quite easily. We call such networks software confederations (SWC). The SWC paradigm is orthogonal to the paradigm of object-oriented methodology. The SWC architecture is an engineering necessity in the case of global or very large information systems (IS) and provides many software engineering advantages, such as incremental development, openness, modifiability, and maintainability; SWC is a necessity in many other cases as well. SWC supports the trend of large enterprises and modern states to be decentralized, dynamic, and able to operate in a time of globalization. Software confederations are a result of the tendency toward globalization and, at the same time, a tool enabling the implementation of IS for a globalized society. SWC changes basic features of a CEO's work as well as a CIO's; in both cases, it supports decentralization. This paper discusses the motivation for software confederations, the techniques of their design and implementation, including the use of XML (and SOAP-UDDI), their software engineering advantages, their relation to object-oriented technology, and the methodological consequences of their use. The main conclusion is that the concept of SWC is crucial for future software and information technologies and substantially changes the management tasks of the CIO and CEO.
4

Kral, Jaroslav, and Michal Zemlicka. "Software Confederation - An Architecture for Global Systems and Global Management." In Managing Globally with Information Technology, 57–81. IGI Global, 2003. http://dx.doi.org/10.4018/978-1-93177-742-1.ch006.

Abstract:
Many software systems, especially large ones, tend to be virtual peer-to-peer (P2P) networks of permanent autonomous services (e.g., e-government should be supported by a network of the information systems of individual offices). The services are loosely coupled, and a service can join or leave the system quite easily. We call such networks software confederations (SWC). The SWC paradigm is orthogonal to the paradigm of object-oriented methodology. The SWC architecture is an engineering necessity in the case of global or very large information systems (IS) and provides many software engineering advantages, such as incremental development, openness, modifiability, and maintainability; SWC is a necessity in many other cases as well. SWC supports the trend of large enterprises and modern states to be decentralized, dynamic, and able to operate in a time of globalization. Software confederations are a result of the tendency toward globalization and, at the same time, a tool enabling the implementation of IS for a globalized society. SWC changes basic features of a CEO's work as well as a CIO's; in both cases, it supports decentralization. This paper discusses the motivation for software confederations, the techniques of their design and implementation, including the use of XML (and SOAP-UDDI), their software engineering advantages, their relation to object-oriented technology, and the methodological consequences of their use. The main conclusion is that the concept of SWC is crucial for future software and information technologies and substantially changes the management tasks of the CIO and CEO.
APA, Harvard, Vancouver, ISO, and other styles
5

Virmani, Charu, Dimple Juneja Gupta, and Tanu Choudhary. "Blockchain 2.0." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 167–88. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-9257-0.ch009.

Full text
Abstract:
Blockchain is a shared and distributed ledger across an open or private processing system that expedites the recording of transactions and data management in a business network. It enables the design of decentralized transactions, smart contracts, and intelligent assets that can be managed over the internet. It gives rise to revolutionary decision-making governance systems with more egalitarian users and autonomous organizations that can operate over the internet without any third party involved. This disruptive technology offers tremendous opportunities to shift power away from centralized authorities in the spheres of communications, business, and even politics or law. This chapter provides an introduction to blockchain technologies and their decentralized architecture, especially from the perspective of challenges and limitations. The objective is to explore current research topics and the benefits and drawbacks of blockchain. The study explores its potential applications for business and the future directions that are set to transform the digital world.
APA, Harvard, Vancouver, ISO, and other styles
6

Virmani, Charu, Dimple Juneja Gupta, and Tanu Choudhary. "Blockchain 2.0." In Research Anthology on Blockchain Technology in Business, Healthcare, Education, and Government, 1–22. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5351-0.ch001.

Full text
Abstract:
Blockchain is a shared and distributed ledger across an open or private processing system that expedites the recording of transactions and data management in a business network. It enables the design of decentralized transactions, smart contracts, and intelligent assets that can be managed over the internet. It gives rise to revolutionary decision-making governance systems with more egalitarian users and autonomous organizations that can operate over the internet without any third party involved. This disruptive technology offers tremendous opportunities to shift power away from centralized authorities in the spheres of communications, business, and even politics or law. This chapter provides an introduction to blockchain technologies and their decentralized architecture, especially from the perspective of challenges and limitations. The objective is to explore current research topics and the benefits and drawbacks of blockchain. The study explores its potential applications for business and the future directions that are set to transform the digital world.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "ADMA [Autonomous Decentralized Management Architecture]"

1

Ayari, Mouna, Zeinab Movahedi, Guy Pujolle, and Farouk Kamoun. "ADMA: autonomous decentralized management architecture for MANETs." In the 2009 International Conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1582379.1582409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jaimez-Gonzalez, Carlos R., and Wulfrano A. Luna-Ramirez. "An agent-based architecture for supply chain management." In 2013 IEEE Eleventh International Symposium on Autonomous Decentralized Systems (ISADS). IEEE, 2013. http://dx.doi.org/10.1109/isads.2013.6513405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wei, Fan, Liumei Zhang, Tianshi Liu, Xiaodong Lu, and Kinji Mori. "Autonomous Community Architecture and Construction Technology for City Petrol Supply Management System." In 2015 IEEE Twelfth International Symposium on Autonomous Decentralized System (ISADS). IEEE, 2015. http://dx.doi.org/10.1109/isads.2015.31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
