
Journal articles on the topic 'Dynamic Capacity Provisioning'


Consult the top 50 journal articles for your research on the topic 'Dynamic Capacity Provisioning.'


1

Ramamurthy, R., Z. Bogdanowicz, S. Samieian, et al. "Capacity performance of dynamic provisioning in optical networks." Journal of Lightwave Technology 19, no. 1 (2001): 40–48. http://dx.doi.org/10.1109/50.914483.

2

M., G. Saravanan, and Natarajan M. "FUZZY ALGORITHM USING VIRTUAL MACHINES SCHEDULING IN DISTRIBUTER SYSTEM AUTOMATIC OVERLOADED IN DISTRIBUTE DATABASE." International Journal of Engineering Research and Modern Education 3, no. 2 (2018): 4–7. https://doi.org/10.5281/zenodo.1402725.

Abstract:
In present-day virtualization-based compute clouds, applications share the underlying hardware by running in isolated Virtual Machines (VMs). Each VM, at its initial creation, is configured with a certain amount of computing resources (such as CPU, memory, and I/O). A key factor in achieving economies of scale in a compute cloud is resource provisioning, which refers to allocating resources to VMs to match their workload. Typically, effective provisioning is achieved by two operations: (1) static resource provisioning, in which VMs are created with a specified size and then consolidated onto a set of physical servers, and the VM capacity does not change; and (2) dynamic resource provisioning, in which VM capacity is dynamically adjusted to match workload fluctuations. In both static and dynamic provisioning, VM sizing is perhaps the most fundamental step. VM sizing refers to estimating the amount of resources that should be allotted to a VM. The objective of VM sizing is to ensure that VM capacity is commensurate with the workload. While over-provisioning wastes costly resources, under-provisioning degrades application performance and may lose customers. This project proposes a grouped VM provisioning approach in which multiple VMs are consolidated and provisioned based on an estimate of their aggregate capacity needs.
3

Pham-Nguyen, Hoang-Nam, and Quang Tran-Minh. "Dynamic Resource Provisioning on Fog Landscapes." Security and Communication Networks 2019 (May 2, 2019): 1–15. http://dx.doi.org/10.1155/2019/1798391.

Abstract:
A huge number of smart devices with the capacity for computing, storage, and communication with one another has brought forth the fog computing paradigm. Fog computing is a model in which the system tries to push data processing from cloud servers to “near” IoT devices in order to reduce latency. The execution ordering and deployment locations of services have a significant effect on the overall response time of an application. Besides new research directions in fog computing, e.g., fog-cloud collaboration, service scalability, fog scalability, mobile fog computing, fog federation, the trade-off between energy consumption and communication efficiency, duration of storing data locally, storage security and communication security, and semantic-aware fog computing, the service deployment problem is one of the attractive research fields of fog computing. Service deployment is a multiobjective optimization problem, and many solutions have been proposed for various targets, such as response time, communication cost, and energy consumption. In this paper, we focus on the optimization problem of minimizing the overall response time of an application with awareness of network usage and server usage. We then conduct experiments on two service deployment strategies, called the cloudy and foggy strategies, and numerically analyze the overall response time, network usage, and server usage of the two strategies in order to demonstrate the effectiveness of our proposed foggy service deployment strategy.
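
The cloudy-versus-foggy comparison lends itself to a toy response-time model: fog placement trades slower, busier edge nodes for far fewer network hops. The sketch below is a minimal illustration under assumed numbers (hop counts, per-hop latency, utilisations), not the paper's formulation, which optimizes response time with explicit network- and server-usage awareness.

```python
def response_time(proc_ms, hops, per_hop_ms, server_util):
    # Overall response time = network latency + processing time inflated
    # by server load (a simple 1/(1 - utilisation) congestion factor).
    return hops * per_hop_ms + proc_ms / (1.0 - server_util)

# Assumed figures: a distant, lightly loaded cloud server versus a nearby,
# more heavily loaded fog node.
cloudy = response_time(proc_ms=20, hops=8, per_hop_ms=10, server_util=0.3)
foggy = response_time(proc_ms=20, hops=2, per_hop_ms=10, server_util=0.6)
print(f"cloudy: {cloudy:.1f} ms, foggy: {foggy:.1f} ms")  # fog wins here
```
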
4

Sood, Sandeep K. "A Value Based Dynamic Resource Provisioning Model in Cloud." International Journal of Cloud Applications and Computing 3, no. 1 (2013): 1–12. http://dx.doi.org/10.4018/ijcac.2013010101.

Abstract:
Cloud computing has become an innovative computing paradigm, which aims at providing reliable, customized, and Quality of Service (QoS) guaranteed computing infrastructures for users. Efficient resource provisioning is required in the cloud for effective resource utilization. For resource provisioning, the cloud provides virtualized computing resources that are dynamically scalable. This property differentiates cloud computing from the traditional computing paradigm. However, the initialization of a new virtual instance causes a delay of several minutes in hardware resource allocation. Furthermore, the cloud provides a fault-tolerant service to its clients through virtualization. To attain higher resource utilization over this technology, a technique or strategy is needed by which virtual machines can be deployed over physical machines by predicting their need in advance, so that the delay can be avoided. To address these issues, this paper proposes a value-based prediction model for resource provisioning in which a resource manager is used to dynamically allocate or release a virtual machine depending upon the resource usage rate. To track the recent resource usage rate, the resource manager uses a sliding window to analyze resource usage and to predict system behavior in advance. By predicting resource requirements in advance, a lot of processing time can be saved. Previously, a server had to perform all the calculations regarding resource usage, which wasted a lot of processing power and decreased its overall capacity to handle incoming requests. The main feature of the proposed model is that much of the load is shifted from the individual server to the resource manager, which performs all the calculations, leaving the server free to handle incoming requests at its full capacity.
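
A minimal sketch of the sliding-window mechanism described above: keep a fixed-length window of recent utilisation samples, extrapolate the trend, and allocate or release a VM when the prediction crosses a threshold. The window size, thresholds, and trend heuristic are illustrative assumptions, not the paper's model.

```python
from collections import deque

class ResourceManager:
    """Predicts near-term usage from a sliding window of samples and
    decides whether to allocate or release a VM (illustrative only)."""

    def __init__(self, window_size=12, scale_up=0.80, scale_down=0.30):
        self.window = deque(maxlen=window_size)  # utilisation samples in [0, 1]
        self.scale_up = scale_up      # predicted usage above this -> add a VM
        self.scale_down = scale_down  # predicted usage below this -> release a VM

    def observe(self, utilisation):
        self.window.append(utilisation)

    def predict(self):
        # Simple trend-aware estimate: window mean plus half a window of slope.
        if len(self.window) < 2:
            return self.window[-1] if self.window else 0.0
        mean = sum(self.window) / len(self.window)
        slope = (self.window[-1] - self.window[0]) / (len(self.window) - 1)
        return mean + slope * len(self.window) / 2

    def decide(self):
        p = self.predict()
        if p > self.scale_up:
            return "allocate VM"
        if p < self.scale_down:
            return "release VM"
        return "hold"

mgr = ResourceManager()
for u in [0.42, 0.48, 0.55, 0.61, 0.70, 0.78]:
    mgr.observe(u)
print(mgr.predict(), mgr.decide())  # rising trend -> "allocate VM"
```
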
5

Sood, Sandeep K. "A Value Based Dynamic Resource Provisioning Model in Cloud." International Journal of Cloud Applications and Computing 3, no. 2 (2013): 35–46. http://dx.doi.org/10.4018/ijcac.2013040104.

Abstract:
Cloud computing has become an innovative computing paradigm, which aims at providing reliable, customized, and Quality of Service (QoS) guaranteed computing infrastructures for users. Efficient resource provisioning is required in the cloud for effective resource utilization. For resource provisioning, the cloud provides virtualized computing resources that are dynamically scalable. This property differentiates cloud computing from the traditional computing paradigm. However, the initialization of a new virtual instance causes a delay of several minutes in hardware resource allocation. Furthermore, the cloud provides a fault-tolerant service to its clients through virtualization. To attain higher resource utilization over this technology, a technique or strategy is needed by which virtual machines can be deployed over physical machines by predicting their need in advance, so that the delay can be avoided. To address these issues, this paper proposes a value-based prediction model for resource provisioning in which a resource manager is used to dynamically allocate or release a virtual machine depending upon the resource usage rate. To track the recent resource usage rate, the resource manager uses a sliding window to analyze resource usage and to predict system behavior in advance. By predicting resource requirements in advance, a lot of processing time can be saved. Previously, a server had to perform all the calculations regarding resource usage, which wasted a lot of processing power and decreased its overall capacity to handle incoming requests. The main feature of the proposed model is that much of the load is shifted from the individual server to the resource manager, which performs all the calculations, leaving the server free to handle incoming requests at its full capacity.
6

Jeyarani, R., N. Nagaveni, and R. Vasanth Ram. "Self Adaptive Particle Swarm Optimization for Efficient Virtual Machine Provisioning in Cloud." International Journal of Intelligent Information Technologies 7, no. 2 (2011): 25–44. http://dx.doi.org/10.4018/jiit.2011040102.

Abstract:
Cloud Computing provides dynamic leasing of server capabilities as a scalable, virtualized service to end users. The discussed work focuses on the Infrastructure as a Service (IaaS) model, where custom Virtual Machines (VMs) are launched in appropriate servers available in a data center. The context of the environment is a large-scale, heterogeneous and dynamic resource pool. Nonlinear variation in the availability of processing elements, memory size, storage capacity, and bandwidth causes resource dynamics, apart from the sporadic nature of workload. The major challenge is to map a set of VM instances onto a set of servers from a dynamic resource pool so that the total incremental power drawn upon the mapping is minimal and does not compromise the performance objectives. This paper proposes a novel Self Adaptive Particle Swarm Optimization (SAPSO) algorithm to address the intractable nature of the above challenge. The proposed approach promptly detects and efficiently tracks the changing optimum that represents target servers for VM placement. The experimental results of SAPSO were compared with Multi-Strategy Ensemble Particle Swarm Optimization (MEPSO), and the results show that SAPSO outperforms the latter for power-aware adaptive VM provisioning in a large-scale, heterogeneous and dynamic cloud environment.
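
The particle swarm update at the core of SAPSO can be sketched as follows: each particle tracks its personal best while the swarm shares a global best, and velocities blend inertia, cognitive, and social pulls. This is the canonical PSO on a stand-in objective; the paper's encoding of VM-to-server mappings, its self-adaptive mechanism for tracking a changing optimum, and its power model are not reproduced here.

```python
import random

def incremental_power(x):
    # Toy stand-in for the incremental power drawn by a candidate mapping;
    # the real objective scores VM-to-server placements.
    return sum((xi - 0.5) ** 2 for xi in x)

def pso(dim=4, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.random() for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=incremental_power)
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if incremental_power(pos[i]) < incremental_power(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=incremental_power)
    return gbest

print(incremental_power(pso()))  # approaches 0 as the swarm converges
```
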
7

M., Rakesh Chowdary, Yashwanth Reddy A., and Abhishek N. "RESOURCE MANAGEMENT IN DEALING WITH SECURITY CHALLENGES IN CLOUD BASED ENVIRONMENT." International Journal of Applied and Advanced Scientific Research 1, no. 2 (2016): 152–55. https://doi.org/10.5281/zenodo.1034457.

Abstract:
Cloud Service Brokers include technology consultants, business professional service organizations, registered brokers and agents, and influencers that help guide consumers in the selection of cloud computing solutions. Service brokers concentrate on negotiating the relationships between consumers and providers without owning or managing the whole cloud infrastructure. Moreover, they add extra services on top of a cloud provider’s infrastructure to make up the user’s cloud environment. With the emergence of many new data centers around the globe, energy consumption by those data centers has increased tremendously. Dynamic capacity provisioning is a promising approach for reducing energy consumption by dynamically adjusting the number of active machines to match resource demands.
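
The closing sentence states the core mechanism of dynamic capacity provisioning; the following sketch makes it concrete by sizing an active machine pool against demand so idle machines can be powered down. The headroom factor and per-machine capacity are assumed values.

```python
import math

def machines_needed(demand, capacity_per_machine, headroom=0.2, min_machines=1):
    """Number of active machines to match demand with some safety headroom.
    All parameters are illustrative assumptions."""
    required = demand * (1 + headroom) / capacity_per_machine
    return max(min_machines, math.ceil(required))

# Adjust the active pool as demand varies; the rest can be powered down.
for demand in [120, 340, 90, 610]:
    print(demand, "->", machines_needed(demand, capacity_per_machine=100))
```
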
8

Goswami, Veena, S. S. Patra, and G. B. Mund. "Dynamic Provisioning and Resource Management for Multi-Tier Cloud Based Applications." Foundations of Computing and Decision Sciences 38, no. 3 (2013): 175–91. http://dx.doi.org/10.2478/fcds-2013-0008.

Abstract:
Dynamic capacity provisioning is a useful technique for handling the workload variations seen in cloud environments. In this paper, we propose a dynamic provisioning technique for multi-tier applications to allocate resources efficiently using a queueing model. It dynamically increases the mean service rate of the virtual machines to avoid congestion in multi-tier environments. An optimization model to minimize the total number of virtual machines for computing resources in each tier is presented. Using the supplementary variable and recursive techniques, we obtain the system-length distributions at pre-arrival and arbitrary epochs. Some important performance indicators, such as blocking probability, request waiting time, and the number of tasks in the system and in the queue, have also been investigated. Finally, computational results showing the effect of model parameters on key performance indicators are presented.
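
Blocking probability, one of the indicators listed above, is easy to illustrate with the classical Erlang B recursion for a loss system, and the same quantity can drive a per-tier VM count. This is a simpler stand-in for the paper's finite-buffer queueing model, with assumed load and target values.

```python
def erlang_b(servers, offered_load):
    """Blocking probability of an M/M/c/c loss system via the standard
    Erlang B recursion (offered_load = arrival rate / service rate)."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def min_vms(offered_load, target_blocking=0.01):
    """Smallest number of VMs in a tier keeping blocking under the target."""
    c = 1
    while erlang_b(c, offered_load) > target_blocking:
        c += 1
    return c

print(min_vms(offered_load=8.0))  # e.g. 8 requests arrive per mean service time
```
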
9

Guerin, Roch, Kartik Hosanagar, Xinxin Li, and Soumya Sen. "Shared or Dedicated Infrastructures: On the Impact of Reprovisioning Ability." MIS Quarterly 43, no. 4 (2019): 1059–79. http://dx.doi.org/10.25300/misq/2019/14857.

Abstract:
New technologies, such as virtualization, are transforming the way in which software and services are deployed and delivered to their users. They are behind the emergence of IT offerings such as cloud computing and converged networks, and manifest themselves through two important trends: (1) lowering the cost of sharing a common infrastructure across multiple services with disparate resource requirements, and (2) dynamically provisioning capacity in response to demand. Conventional wisdom is that both of these capabilities are synergistic, with greater provisioning flexibility improving the benefits derived from sharing computing or network resources. Consequently, a service operator should now always favor the use of a shared infrastructure over dedicated solutions when hosting multiple services. In this paper, we ask whether this is indeed the case, and investigate the dual impact of lower costs of sharing and provisioning flexibility on shared and dedicated infrastructures. The investigation reveals that while lower costs are always expected to favor infrastructure sharing, dynamic provisioning plays an ambiguous role. Reprovisioning improves both shared and dedicated solutions, but can do so differently and can sometimes favor a dedicated infrastructure. Our findings help illustrate that the technology trends, such as virtualization, behind cloud computing need not always favor the deployment of services on a shared infrastructure.
10

S, Suriya, Madhvesh V S, and Mrudhhula V S. "Cost Efficient Resource Provisioning using ACO." Journal of Soft Computing Paradigm 6, no. 4 (2025): 365–77. https://doi.org/10.36548/jscp.2024.4.003.

Abstract:
Cloud computing has revolutionized the way computational resources are provisioned and managed, offering scalable and flexible services to meet diverse user demands. However, cost-effective resource management is very challenging because cloud environments are dynamic and heterogeneous, with loads and resources that change over time. Traditional resource-acquisition approaches cannot deliver the expected cost savings without negatively impacting performance. This work describes a new approach that utilizes ACO for resource management in cloud computing. The proposed method incorporates pheromone-based heuristics to control the resource allocation process so that operational costs are reduced while performance is kept at an optimal level. In ACO, task allocation decisions during the search process are made based on pheromone trail values and heuristic information. An ACO model that includes dynamic measurements for diverse cloud environments and several adaptive mechanisms for creating additional tasks and virtual machines (VMs) can be considered a helpful solution for real cloud applications. The experimental results show high cost-effectiveness compared to other approaches and reflect ACO’s ability to function in dynamic cloud environments.
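
A compact sketch of the pheromone-plus-heuristic loop the abstract describes: each ant assigns tasks to VMs with probability proportional to pheromone (weighted by alpha) times inverse cost (weighted by beta); trails then evaporate and good solutions are reinforced. The cost matrix and parameter values are assumptions, and the paper's adaptive mechanisms are not reproduced.

```python
import random

def aco_allocate(costs, ants=20, iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    """Assign each task to a VM using pheromone trails plus a cost heuristic.
    costs[t][v] is the (assumed) cost of running task t on VM v."""
    n_tasks, n_vms = len(costs), len(costs[0])
    tau = [[1.0] * n_vms for _ in range(n_tasks)]  # pheromone per (task, VM)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        solutions = []
        for _ in range(ants):
            assign, total = [], 0.0
            for t in range(n_tasks):
                weights = [tau[t][v] ** alpha * (1.0 / costs[t][v]) ** beta
                           for v in range(n_vms)]
                v = random.choices(range(n_vms), weights=weights)[0]
                assign.append(v)
                total += costs[t][v]
            solutions.append((assign, total))
            if total < best_cost:
                best, best_cost = assign, total
        for t in range(n_tasks):                   # evaporate all trails
            for v in range(n_vms):
                tau[t][v] *= (1 - rho)
        for assign, total in solutions:            # reinforce used edges
            for t, v in enumerate(assign):
                tau[t][v] += Q / total
    return best, best_cost

costs = [[random.uniform(1, 10) for _ in range(3)] for _ in range(5)]
print(aco_allocate(costs))
```
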
11

S, Aarthee, and Venkatesan R. "Secure Dynamic Resource Provisioning Cost by Optimized Placement of Virtual Machines in Cloud Computing." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 4, no. 1 (2013): 88–93. http://dx.doi.org/10.24297/ijct.v4i1b.3063.

Abstract:
Cloud computing provides pay-as-you-go computing resources, and services are offered from data centers all over the world as the cloud. Consumers may find that cloud computing allows them to reduce the cost of information management, as they are not required to own their servers and can use capacity leased from third parties or cloud service providers. Cloud consumers can successfully reduce the total cost of resource provisioning using the Optimal Cloud Resource Provisioning (OCRP) algorithm in a cloud computing environment. The two provisioning plans offered by cloud providers to cloud consumers for computing resources are reservation and on-demand. The cost of utilizing computing resources provisioned under the reservation plan is cheaper than under the on-demand plan, since a cloud consumer pays the provider in advance. This project proposes the OCRP algorithm combined with a rule-based resource manager technique to increase the scalability of cloud on-demand services through dynamic placement of virtual machines, reducing cost while providing secure access to resources from data centers; parameters such as virtualized platforms and data or service management are monitored in the cloud environment.
12

Yu, Xiaosong, Jiye Wang, Kaixin Zhang, et al. "Brown-Field Migration Aware Routing and Spectrum Assignment in Backbone Optical Networks." Applied Sciences 12, no. 1 (2022): 438. http://dx.doi.org/10.3390/app12010438.

Abstract:
With the development of optical network technology, broad attention has been paid to flexible grid technology in optical networks due to its ability to carry large-capacity information as well as provide flexible and fine-grained services through on-demand spectrum resource allocation. However, a one-time green-field deployment of a flexible grid network may not be practical. The transitional technology, called fixed/flex-grid optical networks, is more applicable and highly pragmatic. In such networks, many nodes would likely be upgraded from fixed-grid to flex-grid. In fact, dynamic service provisioning during a node upgrade in fixed/flex-grid optical networks has become a challenge, because the service connection can be easily interrupted, leading to considerable data loss. To overcome this challenge, we propose a brown-field migration aware routing and spectrum assignment (BMA-RSA) algorithm for fixed/flex-grid optical networks. The aim is to construct a probabilistic migration label (PML) model. The well-designed label setting of PML can balance the relationship between distance and node-upgrade probability. Dynamic service provisioning operations are undertaken based on the PML model to achieve migration-aware dynamic connections before network migration occurs. We also evaluate the performance of different service provisioning strategies under different traffic models. The simulation results show that the BMA-RSA algorithm can achieve: (1) a tradeoff between distance and node-upgrade probability during service provisioning; and (2) lower service interruption compared with the traditional non-migration-aware K-shortest-path routing and spectrum assignment algorithm.
13

Pg. Ali Kumar, Dk Siti Nur Khadhijah, S. H. Shah Newaz, Fatin Hamadah Rahman, Gyu Myoung Lee, Gour Karmakar, and Thien-Wan Au. "Green Demand Aware Fog Computing: A Prediction-Based Dynamic Resource Provisioning Approach." Electronics 11, no. 4 (2022): 608. http://dx.doi.org/10.3390/electronics11040608.

Abstract:
Fog computing could potentially cause the next paradigm shift by extending cloud services to the edge of the network, bringing resources closer to the end-user. With its close proximity to end-users and its distributed nature, fog computing can significantly reduce latency. With the appearance of more and more latency-stringent applications, in the near future we will witness an unprecedented amount of demand for fog computing. Undoubtedly, this will lead to an increase in the energy footprint of the network edge and access segments. To reduce energy consumption in fog computing without compromising performance, in this paper we propose the Green-Demand-Aware Fog Computing (GDAFC) solution. Our solution uses a prediction technique to identify the working fog nodes (nodes that serve arriving requests), standby fog nodes (nodes that take over when the computational capacity of the working fog nodes is no longer sufficient), and idle fog nodes in a fog computing infrastructure. Additionally, it assigns an appropriate sleep interval to the fog nodes, taking into account the delay requirements of the applications. Results obtained from the mathematical formulation show that our solution can save up to 65% of energy without compromising delay-requirement performance.
14

Andre, Jean-Marc, Ulf Behrens, James Branson, et al. "Experience with dynamic resource provisioning of the CMS online cluster using a cloud overlay." EPJ Web of Conferences 214 (2019): 07017. http://dx.doi.org/10.1051/epjconf/201921407017.

Abstract:
The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent computing capacity similar to that of all the CMS Tier1 Grid sites together. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and hence can access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also makes it possible to abstract from the different types of machines and their underlying segmented networks. During LHC technical stop periods, the HLT cloud is set to its static mode of operation, where it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, once or more a day. For that, it dynamically follows LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
15

Omarov, Abdulla, Assel Sarsembayeva, Askar Zhussupbekov, et al. "Bearing Capacity of Precast Concrete Joint Micropile Foundations in Embedded Layers: Predictions from Dynamic and Static Load Tests according to ASTM Standards." Infrastructures 9, no. 7 (2024): 104. http://dx.doi.org/10.3390/infrastructures9070104.

Abstract:
In this paper, joint precast piles with a cross-section of 400 × 400 mm and a pin-joined connection were considered, and their interaction with the soil of Western Kazakhstan has been analyzed. The following methods were used: assessment of the bearing capacity using the static compression load test (SCLT by ASTM) method, interpretation of the field test data, and the dynamic loading test (DLT) method for driving precast concrete joint piles, including Pile Driving Analyzer (PDA by ASTM) and Control and Provisioning of Wireless Access Points (CAPWAP) methods. According to the results, the composite piles tested by the PDA (by ASTM) method differ by 15 percent compared to the static load method, while the difference between the dynamic DLT (by ASTM) method and the static load (by ASTM) method was only 7 percent. So, according to the results, the alternative dynamic method DLT (by ASTM) is very effective and more accurate compared to other existing methods.
16

Canosa-Reyes, Rewer M., Andrei Tchernykh, Jorge M. Cortés-Mendoza, et al. "Dynamic performance–Energy tradeoff consolidation with contention-aware resource provisioning in containerized clouds." PLOS ONE 17, no. 1 (2022): e0261856. http://dx.doi.org/10.1371/journal.pone.0261856.

Abstract:
Containers have emerged as a more portable and efficient solution than virtual machines for cloud infrastructure, providing a flexible way to build and deploy applications. Quality of service, security, performance, and energy consumption, among others, are essential aspects of their deployment, management, and orchestration. Inappropriate resource allocation can lead to resource contention, entailing reduced performance, poor energy efficiency, and other potentially damaging effects. In this paper, we present a set of online job allocation strategies to optimize quality of service, energy savings, and completion time, considering contention for shared on-chip resources. We treat job allocation as a multilevel dynamic bin-packing problem, which provides a lightweight runtime solution that minimizes contention and energy consumption while maximizing utilization. The proposed strategies are based on two and three levels of scheduling policies with container selection, capacity distribution, and contention-aware allocation. The energy model considers the joint execution of applications of different types on shared resources, generalized by the job concentration paradigm. We provide an experimental analysis of eighty-six scheduling heuristics with scientific workloads of memory- and CPU-intensive jobs. The proposed techniques outperform classical solutions in terms of quality of service, energy savings, and completion time by 21.73–43.44%, 44.06–92.11%, and 16.38–24.17%, respectively, leading to cost-efficient resource allocation for cloud infrastructures.
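
Bin packing is the combinatorial core of the consolidation problem described above. The classical first-fit-decreasing baseline below packs job demands onto as few hosts as possible; it is a reference point only, not the paper's multilevel, contention- and energy-aware policy.

```python
def first_fit_decreasing(jobs, capacity):
    """Pack normalized job demands onto the fewest hosts of given capacity."""
    hosts = []  # each host is [remaining_capacity, [assigned jobs]]
    for job in sorted(jobs, reverse=True):  # largest demands first
        for host in hosts:
            if host[0] >= job:              # first host where the job fits
                host[0] -= job
                host[1].append(job)
                break
        else:                               # no fit: open a new host
            hosts.append([capacity - job, [job]])
    return [h[1] for h in hosts]

print(first_fit_decreasing([0.5, 0.7, 0.2, 0.4, 0.3, 0.6], capacity=1.0))
# -> three hosts, e.g. [[0.7, 0.3], [0.6, 0.4], [0.5, 0.2]]
```
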
17

Sandeep, Kaipu. "AI-Powered Dynamic Optimization of Cloud Resource Allocation." European Journal of Advances in Engineering and Technology 9, no. 9 (2022): 100–106. https://doi.org/10.5281/zenodo.14059458.

Abstract:
Due to the exponential expansion of cloud computing and related applications, effective resource allocation has become essential for cloud service providers to ensure performance, cost-efficiency, and scalability. Conventional resource allocation techniques frequently fail to keep up with fluctuating and dynamic workloads, resulting in over- or under-provisioning of resources. To optimize cloud resource allocation, this research investigates the integration of artificial intelligence (AI) algorithms, addressing the difficulties of variable demand, performance trade-offs, and cost minimization. The study's main objective is to forecast future workloads and dynamically modify resource allocation in real time by utilizing AI-driven techniques, such as reinforcement learning, neural networks, and evolutionary algorithms. Specifically, reinforcement learning is used to develop intelligent agents that can learn from and adjust to changing cloud environments by making decisions based on historical data and continuous feedback. Because of its capacity for self-learning, the system can adjust to changing workloads and increase efficiency by continuously optimizing the distribution of resources. Additionally, the study looks into using neural networks to forecast workload patterns, which would allow the cloud platform to forecast demand and plan resource provisioning ahead of time. Neural networks can precisely predict times of high demand or low activity by evaluating past data, ensuring that resources are distributed as efficiently as possible. Additionally, resource allocation tactics are evolved and optimized through the use of genetic algorithms, which mimic natural selection to find the most effective configurations for different cloud workloads. This AI-driven method of allocating resources is put to the test in machine learning projects, web apps, and IoT systems that have varying workloads in simulated cloud settings. Comparing the results to conventional allocation techniques, it is clear that the new approach significantly improves system performance, cost savings, and resource usage. By utilizing AI approaches, cloud platforms can dynamically modify resources and circumvent the drawbacks associated with manual or static provisioning. This theoretical research has ramifications for a wide range of cloud computing-dependent industries, including data analytics, artificial intelligence, healthcare, and e-commerce. Cloud service providers may guarantee scalability, lower operating costs, and provide higher service quality while upholding strict performance criteria by employing AI to optimize resource allocation. Subsequent research endeavors will center on augmenting the applicability of these artificial intelligence models and tackling obstacles like latency and security in authentic cloud settings. In the end, this study shows how AI may revolutionize the management of intricate cloud infrastructures, opening the door to more intelligent and flexible cloud computing.
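
Of the techniques surveyed, the reinforcement-learning loop is the most mechanical, and a toy Q-learning autoscaler shows its shape: states are coarse utilisation buckets, actions add or remove a VM, and the reward trades utilisation against saturation. The state encoding, reward, and synthetic workload here are invented for illustration and are not from the study.

```python
import random
from collections import defaultdict

ACTIONS = (-1, 0, 1)  # remove a VM, hold, add a VM

def bucket(util):
    return min(9, int(util * 10))  # utilisation in [0, 1] -> 10 coarse states

Q = defaultdict(float)
vms, alpha, gamma, eps = 4, 0.1, 0.9, 0.2
for _ in range(5000):
    demand = 2.0 + random.random() * 4.0       # toy fluctuating demand
    s = bucket(min(1.0, demand / vms))
    if random.random() < eps:                  # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    vms = max(1, vms + a)
    new_util = min(1.0, demand / vms)
    # reward high utilisation but penalise saturation (QoS risk)
    reward = new_util - (2.0 if new_util >= 1.0 else 0.0)
    s2 = bucket(new_util)
    Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, x)] for x in ACTIONS)
                          - Q[(s, a)])
print("pool size after learning:", vms)
```
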
18

Hampshire, Robert C., William A. Massey, and Qiong Wang. "DYNAMIC PRICING TO CONTROL LOSS SYSTEMS WITH QUALITY OF SERVICE TARGETS." Probability in the Engineering and Informational Sciences 23, no. 2 (2009): 357–83. http://dx.doi.org/10.1017/s0269964809000205.

Abstract:
Numerous examples of real-time services arise in the service industry that can be modeled as loss systems. These include agent staffing for call centers, provisioning bandwidth for private line services, making rooms available for hotel reservations, and congestion pricing for parking spaces. Given that arriving customers make their decision to join the system based on the current service price, the manager can use price as a mechanism to control the utilization of the system. A major objective for the manager is then to find a pricing policy that maximizes total revenue while meeting the quality of service targets desired by the customers. For systems with growing demand and service capacity, we provide a dynamic pricing algorithm. A key feature of our solution is congestion pricing. We use demand forecasts to anticipate future service congestion and set the present price accordingly.
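
A crude sketch of the congestion-pricing feedback the abstract describes: when forecast utilisation runs above the quality-of-service target, the price rises to damp arrivals, and vice versa. The multiplicative update rule and all numbers are assumptions; the paper derives its policy from demand forecasts for a system with growing demand and capacity.

```python
def price_update(forecast_util, target_util, price, sensitivity=0.5, floor=1.0):
    """Raise price when forecast utilisation exceeds the QoS target,
    lower it otherwise (illustrative rule, not the paper's algorithm)."""
    adjusted = price * (1 + sensitivity * (forecast_util - target_util))
    return max(floor, adjusted)

price = 10.0
for util in [0.70, 0.85, 0.95, 0.80, 0.60]:  # forecast utilisation path
    price = price_update(util, target_util=0.80, price=price)
    print(round(price, 2))
```
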
19

Shen, Gangxiang, and Wayne D. Grover. "Design and performance of protected working capacity envelopes based on p-cycles for dynamic provisioning of survivable services." Journal of Optical Networking 4, no. 7 (2005): 361. http://dx.doi.org/10.1364/jon.4.000361.

20

Kumar, Akkrabani Bharani Pradeep, and P. Venkata Nageswara Rao. "Energy Efficient, Resource-Aware, Prediction Based VM Provisioning Approach for Cloud Environment." International Journal of Ambient Computing and Intelligence 11, no. 3 (2020): 22–41. http://dx.doi.org/10.4018/ijaci.2020070102.

Abstract:
Over the past few decades, computing environments have progressed from a single-user milieu to highly parallel supercomputing environments, networks of workstations (NoWs) and distributed systems, and, more recently, popular systems like grids and clouds. Due to its great advantage of providing large computational capacity at low cost, cloud infrastructure can be employed as a very effective tool; but owing to its dynamic nature and heterogeneity, cloud resources consume an enormous amount of electrical power, and energy consumption control becomes a major issue in cloud datacenters. This article proposes a comprehensive prediction-based virtual machine management approach that aims to reduce energy consumption by reducing the number of active physical servers in cloud data centers. The proposed model focuses on three key aspects of resource management, namely prediction-based delay provisioning, prediction-based migration, and resource-aware live migration. The comprehensive model minimizes energy consumption without violating the service level agreement and provides the required quality of service. Experiments to validate the efficacy of the proposed model are carried out in a simulated environment with varying server and user applications and parameter sizes.
21

Siddhart Kumar Choudhary. "AI-Powered Predictive Analytics for Dynamic Cloud Resource Optimization: A Technical Implementation Framework." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 1267–75. https://doi.org/10.32628/cseit251112122.

Abstract:
This article explores the transformative impact of AI-driven predictive analytics on cloud resource optimization, presenting a comprehensive technical implementation framework. The article shows how artificial intelligence and machine learning architectures revolutionize traditional cloud management approaches through advanced predictive capabilities and dynamic resource allocation. It examines the evolution from static threshold-based systems to sophisticated AI-driven solutions, analyzing their implementation strategies across various organizational contexts. The article delves into multiple optimization domains, including capacity provisioning, cost management, performance enhancement, and energy efficiency, while presenting real-world applications and impact analyses across different industries. Through extensive case studies and empirical evidence, the article demonstrates how organizations leverage AI-powered solutions to address complex cloud resource management challenges, achieve operational efficiencies, and maintain competitive advantages in the digital marketplace. The article also explores future developments and provides strategic recommendations for organizations implementing cloud optimization frameworks, emphasizing the importance of standardized approaches, stakeholder engagement, and sustainable practices in cloud resource management.
22

Waghmode, Santosh T., and Bankat M. Patil. "Adaptive Load Balancing Using RR and ALB: Resource Provisioning in Cloud." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 7 (2023): 302–14. http://dx.doi.org/10.17762/ijritcc.v11i7.7940.

Abstract:
In the cloud computing context, load balancing is an issue. With a rise in the number of cloud-based technology users and their need for a broad range of services, load balancing, which refers to utilizing resources effectively in a cloud environment, has become a significant obstacle. Load balancing is crucial in storage systems to increase network capacity and speed up response times. The main goal is to present a new load-balancing mechanism that can balance incoming requests from users all over the globe who are in different regions requesting data from remote data sources. This method combines effective scheduling and cloud-based techniques. A dynamic load-balancing method was developed to ensure that cloud environments have the ability to respond rapidly, in addition to running cloud resources efficiently and speeding up job processing times. Applications' incoming traffic is automatically split across a number of targets, including Amazon EC2 instances, network addresses, and other entities, by elastic load balancing. Elastic load balancing offers three distinct classes of load balancer, each providing high availability, intelligent scalability, and robust security to guarantee the error-free functioning of your applications. Application load balancing and round robin, the two load-balancing mechanisms in the database cloud, are the focus of this research study.
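
The two policies under comparison can be contrasted in a few lines: round robin cycles through targets in fixed order, while an adaptive balancer tracks load and picks the least-loaded target. This is a minimal sketch with made-up target names and cost units; in practice AWS Elastic Load Balancing is configured rather than hand-coded.

```python
import itertools

class RoundRobin:
    """Cycle requests across targets in fixed order."""
    def __init__(self, targets):
        self._cycle = itertools.cycle(targets)
    def pick(self):
        return next(self._cycle)

class AdaptiveLeastLoad:
    """Send each request to the currently least-loaded target; a simple
    stand-in for the adaptive (ALB) side of the comparison."""
    def __init__(self, targets):
        self.load = {t: 0 for t in targets}
    def pick(self, cost=1):
        target = min(self.load, key=self.load.get)
        self.load[target] += cost
        return target

rr = RoundRobin(["ec2-a", "ec2-b", "ec2-c"])
alb = AdaptiveLeastLoad(["ec2-a", "ec2-b", "ec2-c"])
print([rr.pick() for _ in range(5)])              # a, b, c, a, b
print([alb.pick(cost) for cost in (3, 1, 1, 2, 1)])  # follows the load
```
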
23

Thenmozhi, M., and S. Kiruthika. "NETWORK CAPACITY BASED GEOGRAPHICAL QOS ROUTING FOR MULTIMEDIA STREAMS OVER MANET." INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY 7, no. 4 (2018): 540–47. https://doi.org/10.5281/zenodo.1219665.

Abstract:
The Mobile Ad hoc Network (MANET) support for anywhere, anytime wireless communication expands the scope of multimedia applications. An ever-demanding task for multimedia applications is to determine the quality of routes among a huge number of portable devices so as to improve Quality of Service (QoS). To support real-time multimedia streams, several geography-based QoS routing protocols have been proposed; however, the expansion of portable devices leads to stringent QoS demands in MANETs. The main reason behind the QoS demand is the dynamic network capacity, which relies on network size, node mobility, density, node energy, and communication range. The existing protocols are inefficient for delay-sensitive multimedia applications over MANETs due to their lack of attention to dynamic network capacity. This work presents a new Network Capacity based Geographical routing (NCG) scheme along with several routing schemes, such as Radio Range Regulation (RRR) routing, Virtual Destination Routing (VDR), and Virtual Void Routing (VVR), together called NCG++. The NCG++ routing estimates the network capacity in terms of QoS, facilitating minimum energy for every node to provide QoS under varying network size, node mobility, and node density. Finally, this work simulates and compares the proposed NCG++ routing protocol's performance with the existing QoS-GPSR. The performance evaluation illustrates that the NCG++ routing protocol outperforms the existing protocols.
24

Qiao, Wenxin, Hao Lu, Yu Lu, Lijie Meng, and Yicen Liu. "A Dynamic Service Reconfiguration Method for Satellite–Terrestrial Integrated Networks." Future Internet 13, no. 10 (2021): 260. http://dx.doi.org/10.3390/fi13100260.

Abstract:
Satellite–terrestrial integrated networks (STINs) are regarded as a promising solution to meeting the demands of global high-speed seamless network access in the future. Software-defined networking and network function virtualization (SDN/NFV) are two complementary technologies that can be used to ensure that the heterogeneous resources in STINs can be easily managed and deployed. Considering the dual mobility of satellites and ubiquitous users, along with the dynamic requirements of user requests and network resource states, it is challenging to maintain service continuity and high QoE performance in STINs. Thus, we investigate service migration and reconfiguration schemes, which are of great significance for guaranteeing continuous service provisioning. Specifically, this paper proposes a dynamic service reconfiguration method that can support flexible service configurations on integrated networks, including LEO satellites and ground nodes. We first model the migration cost as an extra delay incurred by service migration and reconfiguration and then formulate the selection processes of the location and migration paths of virtual network functions (VNFs) as an integer linear programming (ILP) optimization problem. Then, we propose a fuzzy logic and quantum genetic algorithm (FQGA) to obtain an approximate optimal solution that can accelerate the solving process efficiently with the benefits of the high-performance computing capacity of QGA. The simulation results validate the effectiveness and improved performance of the scheme proposed in this paper.
25

Sokratis Barmpounakis, Nancy Alonistioti, George C. Alexandropoulos, and Alexandros Kaloxylos. "Dynamic infrastructure-as-a-service: A key paradigm for 6G networks and application to maritime communications." ITU Journal on Future and Evolving Technologies 3, no. 1 (2022): 55–70. http://dx.doi.org/10.52953/wqwx1975.

Abstract:
Although the advent of the fifth generation (5G) of communication networks has introduced the technology enablers and system capability to support a large number of devices and highly demanding services, as well as ultra-low latency and high reliability, enhancements to the current systems will not suffice for future generation networks. Instead, a disruptive communication paradigm needs to be introduced that will ultimately enable the radical evolution of the current systems into new ones, capable of self-aggregating extraordinary connectivity and computing capabilities in a seamless manner. To this end, sixth generation (6G) networks should be able to seamlessly integrate and release, in a dynamic manner, heterogeneous types of resources, such as diverse types of network entities/nodes with nomadic, relaying, and multi-tenancy capabilities, which can enable demand-driven service provisioning, coverage extension, increased network capacity, and reduced energy consumption. This paper presents a novel networking paradigm towards Infrastructure-as-a-Service (IaaS), which introduces a disruptive and dynamic network infrastructure management and service orchestration mechanism, including nomadic networks and Artificial Intelligence (AI)-aware networking approaches for seamless and dynamic management of diverse network resources. In order to emphasize the compelling potential of the proposed paradigm, we detail its application to maritime communication networks, identifying this use case as a key driver for the proposed dynamic IaaS concept.
26

Prakash Kodali. "Cloud-native AI Ecosystems: Advancing real-time personalization in E-commerce customer experiences." Global Journal of Engineering and Technology Advances 23, no. 1 (2025): 217–25. https://doi.org/10.30574/gjeta.2025.23.1.0088.

Abstract:
This article examines the convergence of advanced artificial intelligence methodologies and cloud-native environments in revolutionizing e-commerce personalization. The article presents a comprehensive framework for implementing dynamic, real-time personalization systems that leverage transformer-based models, reinforcement learning, and adaptive neural networks to process extensive customer interaction datasets instantaneously. The article addresses critical implementation challenges through serverless computing architectures, containerization strategies, and elastic resource provisioning while emphasizing the importance of explainable AI for maintaining transparency and customer trust. The article demonstrates that cloud-native AI deployments significantly enhance the capacity to deliver highly personalized customer experiences at scale, enabling e-commerce platforms to adapt continuously to individual customer behaviors and preferences. The proposed approaches not only improve computational efficiency and reduce latency but also provide sustainable solutions for ethical compliance in an increasingly regulated digital marketplace, establishing a foundation for the next generation of intelligent e-commerce systems.
27

Bogunovic, Igor, Antonio Viduka, Ivan Magdic, Leon Josip Telak, Marcos Francos, and Paulo Pereira. "Agricultural and Forest Land-Use Impact on Soil Properties in Zagreb Periurban Area (Croatia)." Agronomy 10, no. 9 (2020): 1331. http://dx.doi.org/10.3390/agronomy10091331.

Abstract:
In urban areas, land use usually increases soil degradation. However, there are areas occupied by agriculture and woodlands with an essential role in provisioning food and other services such as water and climate regulation. The objective of this work was to assess the effect of long-term land use and soil management practices on peri-urban soils in Zagreb (Croatia). Samples were collected at depth 0–10 cm within intensively tilled cropland (CROP) and vineyard (VINE), traditional grass-covered orchard (ORCH), and forest (FOR). The results showed that bulk density was significantly higher in VINE and CROP than in ORCH and FOR. The opposite dynamic was observed in water-holding capacity, air-filled porosity, aggregate stability, organic matter, and soil organic matter stocks (SOCS). Soil water infiltration was higher in FOR plot compared to the other plots. Overall, land-use change had a substantial impact on soil properties and SOCS, especially in CROP and VINE soils. Tillage, pesticides, and fertilizer applications were presumably the reasons for altered soil quality properties. Intensively used areas (VINE and CROPS) may reduce soil ecosystems services such as the capacity for flood retention and C sequestration.
28

Ayoub, Omran, Davide Andreoletti, Francesco Musumeci, Massimo Tornatore, and Achille Pattavina. "Optimal Cache Deployment for Video-On-Demand in Optical Metro Edge Nodes under Limited Storage Capacity." Applied Sciences 10, no. 6 (2020): 1984. http://dx.doi.org/10.3390/app10061984.

Abstract:
Network operators must continuously explore new network architectures to satisfy increasing traffic demand due to bandwidth-hungry services, such as video-on-demand (VoD). A promising solution which enables offloading traffic consists of terminating VoD requests locally through deploying caches at the network edge. However, deciding the number of caches to deploy, their locations in the network and their dimensions in terms of storage capacity is not trivial and must be jointly optimized, to reduce costs and utilize network resources efficiently. In this paper, we aim to find the optimal deployment of caches in a hierarchical metro network, which minimizes the overall network resource occupation for VoD services, in terms of number of caches deployed across the various network levels, their locations and their dimensions (i.e., storage capacity), under limited storage capacity. We first propose an analytical model which serves as a tool to find the optimal deployment as a function of various parameters, such as popularity distribution and location of metro cache. Then, we present a discrete-event simulator for dynamic VoD provisioning to verify the correctness of the analytical model and to measure the performance of different cache deployment strategies in terms of overall network resource occupation. We prove that, to minimize resource occupation given a fixed budget in terms of storage capacity, storage capacity must be distributed among caches at different layers of the metro network. Moreover, we provide guidelines for the optimal cache deployment strategy when the available storage capacity is limited. We further show how the optimal deployment of caches across the various metro network levels varies depending on the popularity distribution, the metro network topology and the amount of storage capacity available (i.e., the budget invested in terms of storage capacity).
29

Khan, Hassan Mahmood, Fang-Fang Chua, and Timothy Tzen Vun Yap. "ReSQoV: A Scalable Resource Allocation Model for QoS-Satisfied Cloud Services." Future Internet 14, no. 5 (2022): 131. http://dx.doi.org/10.3390/fi14050131.

Abstract:
Dynamic resource provisioning is made more accessible with cloud computing. Monitoring a running service is critical, and modifications are performed when specific criteria are exceeded. It is standard practice to add or delete resources in such situations. We investigate a method to ensure the Quality of Service (QoS), estimate the required resources, and modify allotted resources depending on workload, serialization, and parallelism due to resources. This article focuses on cloud QoS violation remediation using resource planning and scaling. A Resource Quantified Scaling for QoS Violation (ReSQoV) model is proposed based on the Universal Scalability Law (USL), which provides cloud service capacity for specific workloads and generates a capacity model. ReSQoV considers system overheads while allocating resources to maintain the agreed QoS. When the QoS violation detection decision is Probably Violation or Definitely Violation, remedial action is triggered and the required resources are added to the virtual machine as vertical scaling. The scenarios emulate QoS parameters and their respective resource utilization for ReSQoV compared to policy-based resource allocation. The results show that after USL-based quantified resource allocation, QoS is regained, and validation of ReSQoV is performed through the statistical test ANOVA, which shows a significant difference before and after implementation.
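
The USL capacity model at the heart of ReSQoV has a closed form, X(N) = λN / (1 + σ(N−1) + κN(N−1)), and inverting it numerically yields the resource count needed to meet a throughput target. The sketch below does exactly that with assumed coefficients; real values of λ, σ, and κ are fitted from load-test measurements.

```python
def usl_throughput(n, lam, sigma, kappa):
    """Universal Scalability Law: X(N) = lam*N / (1 + sigma*(N-1) + kappa*N*(N-1)).
    lam: ideal per-node rate; sigma: contention; kappa: coherency cost."""
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

def nodes_for(target_rate, lam, sigma, kappa, max_n=256):
    """Smallest N whose USL-predicted throughput meets the target, or None
    if the target lies beyond the USL peak."""
    for n in range(1, max_n + 1):
        if usl_throughput(n, lam, sigma, kappa) >= target_rate:
            return n
    return None

# Assumed coefficients for illustration only.
print(nodes_for(target_rate=900, lam=120, sigma=0.02, kappa=0.0005))  # -> 9
```
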
30

Chergui, Hatim, and Christos Verikoukis. "Offline SLA-Constrained Deep Learning for 5G Networks Reliable and Dynamic End-to-End Slicing." IEEE Journal on Selected Areas in Communications 38, no. 2 (2020): 350–60. https://doi.org/10.1109/JSAC.2019.2959186.

Abstract:
In this paper, we address the issue of resource provisioning as an enabler for end-to-end dynamic slicing in software defined networking/network function virtualization (SDN/NFV)-based fifth generation (5G) networks. The different slices' tenants (i.e., logical operators) are dynamically allocated isolated portions of physical resource blocks (PRBs), baseband processing resources, backhaul capacity, as well as data forwarding elements (DFE) and SDN controller connections. By invoking massive key performance indicator (KPI) datasets stemming from a live cellular network endowed with traffic probes, we first introduce a low-complexity slice traffic predictor based on a soft gated recurrent unit (GRU). We then build, at each virtual network function, joint multi-slice deep neural networks (DNNs) and train them to estimate the required resources based on the traffic per slice, while not violating two service level agreements (SLAs), namely, violation rate-based SLA and resource bounds-based SLA. This is achieved by integrating dataset-dependent generalized non-convex constraints into the DNN offline optimization tasks, which are solved via a non-zero-sum two-player game strategy. In this respect, we highlight the role of the underlying hyperparameters in the trade-off between overprovisioning and slice isolation. Finally, using reliability theory, we provide a closed-form analysis of the lower bound of the so-called reliable convergence probability and showcase the effect of the violation rate on it.
31

Saikiran Rallabandi. "AI-Driven Capacity Planning in Large-Scale Infrastructure: A Comparative Analysis of LSTM Networks and Traditional Forecasting Methods." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 6 (2024): 950–62. http://dx.doi.org/10.32628/cseit241061139.

Abstract:
Infrastructure capacity planning presents significant challenges in modern distributed systems, where traditional forecasting methods often fail to accommodate dynamic resource demands effectively. This article examines the implementation of artificial intelligence-driven capacity planning systems, specifically focusing on Long Short-Term Memory (LSTM) networks and advanced machine learning algorithms, across large-scale infrastructure deployments. Through a mixed-methods approach combining quantitative analysis of historical performance data from multiple enterprise deployments and detailed case studies of implementations at Netflix and Microsoft Azure, we demonstrate that AI-driven planning systems achieve a 47% improvement in prediction accuracy compared to traditional methods, while reducing resource over-provisioning by 31%. The findings indicate that LSTM-based models excel particularly in environments with irregular usage patterns, achieving 93% prediction accuracy in highly variable workloads compared to 76% for conventional statistical methods. The integration of cross-layer monitoring with 5G technologies introduces transformative capabilities, enabling more sophisticated resource optimization with a 45% improvement in overall resource utilization. The article also reveals significant improvements in educational environments, with Azure's implementation demonstrating a 35% reduction in per-student costs and 64% increase in system utilization. Additionally, our investigation addresses critical security and privacy considerations in AI-driven systems, providing a comprehensive framework for privacy-preserving technology implementation and industry adoption projections. These results suggest that AI-driven capacity planning represents a significant advancement in infrastructure management, offering improved operational efficiency, substantial cost benefits, and enhanced security measures across diverse operational contexts.
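
As a concrete counterpart to the LSTM-based forecasting discussed above, the sketch below trains a one-layer Keras LSTM on a synthetic utilisation series and predicts next-step demand. The window length, layer size, and synthetic data are assumptions; the article's models, datasets, and accuracy figures are not reproduced by this toy.

```python
import numpy as np
import tensorflow as tf

# Toy utilisation series; a real deployment would use monitored demand data.
series = (50 + 20 * np.sin(np.arange(500) / 10)
          + np.random.normal(0, 2, 500)).astype("float32")

window = 24  # look-back length (an assumption, tuned in practice)
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]  # shape (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_load = model.predict(series[-window:].reshape(1, window, 1), verbose=0)
print("forecast next-step demand:", float(next_load[0, 0]))
```
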
32

Billhardt, Holger, Alberto Fernández, Pasqual Martí, Javier Prieto Tejedor, and Sascha Ossowski. "Towards the Prioritised Use of Transportation Infrastructures: The Case of Vehicle-Specific Dynamic Access Restrictions in City Centres." Electronics 11, no. 4 (2022): 576. http://dx.doi.org/10.3390/electronics11040576.

Abstract:
One of the main problems that local authorities of large cities have to face is the regulation of urban mobility. They need to provide the means to allow for the efficient movement of people and distribution of goods. However, the provisioning of transportation services needs to take into account general global objectives, like reducing emissions and having more healthy living environments, which may not always be aligned with individual interests. Urban mobility is usually provided through a transport infrastructure that includes all the elements that support mobility. On many occasions, the capacity of the elements of this infrastructure is lower than the actual demand and thus different transportation activities compete for their use. In this paper, we argue that scarce transport infrastructure elements should be assigned dynamically and in a prioritised manner to transport activities that have a higher utility from the point of view of society; for example, activities that produce less pollution and provide more value to society. In this paper, we define a general model for prioritizing the use of a particular type of transportation infrastructure element called time-unlimited elements, whose usage time is unknown a priori, and illustrate its dynamics through two use cases: vehicle-specific dynamic access restriction in city centres (i) based on the usage levels of available parking spaces and (ii) to assure sustained admissible air quality levels in the city centre. We carry out several experiments using the SUMO traffic simulation tool to evaluate our proposal.
33

Barbierato, Enrico, Marco Gribaudo, and Giuseppe Serazzi. "Multi-formalism Models for Performance Engineering." Future Internet 12, no. 3 (2020): 50. http://dx.doi.org/10.3390/fi12030050.

Abstract:
Nowadays, the necessity to predict the performance of cloud and edge computing-based architectures has become paramount, in order to respond to the pressure of data growth and more aggressive service level agreements. In this respect, the problem can be analyzed by creating a model of a given system and studying the performance index values generated by the model's simulation. This process requires considering a set of paradigms, carefully balancing the benefits and disadvantages of each one. While queuing networks are particularly suited to modeling cloud and edge computing architectures, particular occurrences, such as autoscaling, require different techniques to be analyzed. This work presents a review of paradigms designed to model specific events in different scenarios, such as timeout with quorum-based join, approximate computing with finite capacity region, MapReduce with class switch, dynamic provisioning in hybrid clouds, and batching of requests in e-Health applications. The case studies are investigated by implementing models based on the above-mentioned paradigms and analyzed with discrete event simulation techniques.
APA, Harvard, Vancouver, ISO, and other styles
34

Nura Yusuf, Muhammad, Kamalrulnizam bin Abu Bakar, Babangida Isyaku, and Ajibade Lukuman Saheed. "Review of Path Selection Algorithms with Link Quality and Critical Switch Aware for Heterogeneous Traffic in SDN." International journal of electrical and computer engineering systems 14, no. 3 (2023): 345–70. http://dx.doi.org/10.32985/ijeces.14.3.12.

Full text
Abstract:
Software Defined Networking (SDN) introduced a degree of network management flexibility that eludes traditional network architectures. Nevertheless, the pervasive demand for various cloud computing services with different levels of Quality of Service requirements has made network service provisioning challenging. One of these challenges is path selection (PS) for routing heterogeneous traffic with end-to-end quality of service support specific to each traffic class. This challenge has attracted so much attention from the research community that many path selection algorithms (PSAs) have been proposed. However, a gap still exists that calls for further study. This paper reviews the existing PSAs and the Baseline Shortest Path Algorithms (BSPAs) upon which many relevant PSAs are built, to help identify these gaps. The paper categorizes the PSAs into four groups based on their path selection decision (PSD) criteria: (1) PSAs that use static or dynamic link quality to guide the PSD, (2) PSAs that consider the criticality of a switch in terms of update operations, FlowTable limitations, or port capacity, (3) PSAs that consider flow variabilities, and (4) PSAs that use ML optimization in their PSD. We then review and compare the design of the techniques in each category against the identified SDN PSA design objectives, solution approaches, BSPAs, and validation approaches. Finally, the paper recommends directions for further research.
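As a minimal sketch of the link-quality-aware path selection the review categorizes, the code below runs Dijkstra over a composite cost blending delay and loss. The blend weight and topology are assumptions, not any surveyed algorithm's exact design.

```python
# Sketch of a quality-aware shortest-path selection: Dijkstra over a cost
# blending delay and loss. The blend weight alpha is illustrative only.
import heapq

def best_path(graph, src, dst, alpha=0.7):
    """graph[u][v] = (delay_ms, loss_ratio); cost = alpha*delay + (1-alpha)*loss*100.
    Assumes dst is reachable from src."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, (delay, loss) in graph.get(u, {}).items():
            nd = d + alpha * delay + (1 - alpha) * loss * 100
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

g = {"s1": {"s2": (5, 0.01), "s3": (2, 0.10)},
     "s2": {"s4": (3, 0.00)},
     "s3": {"s4": (2, 0.20)}}
print(best_path(g, "s1", "s4"))  # favours the low-loss route via s2
```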
APA, Harvard, Vancouver, ISO, and other styles
35

Rengarajan, Guruprasath, Nagarajan Ramalingam, and Kannadhasan Suriyan. "Performance enhancement of mobile ad hoc network life time using energy efficient techniques." Bulletin of Electrical Engineering and Informatics 12, no. 5 (2023): 2870–77. http://dx.doi.org/10.11591/eei.v12i5.5184.

Full text
Abstract:
Due to the dynamic topology and limited resources in mobile ad hoc networks (MANETs), multicast routing and quality of service (QoS) provisioning are difficult issues. This study introduces an agent-based QoS routing method that uses fuzzy logic to choose the best route while taking into account a variety of independent QoS indicators, including buffer occupancy rate, remaining mobile node battery capacity, and hop count. Finding such paths, however, is demanding in terms of both efficiency and security; the study also tests the security of weak models, showing that they can struggle to withstand various sorts of attacks. A distributed approach is presented that determines the best resource allocation at each node. Additionally, the least energy-intensive directed acyclic network flow is selected from a candidate set using an embedded sleep-scheduling algorithm. The process of choosing the flow and allocating the resources for each video frame is adapted to the characteristics of the network connection channel. Results show that the suggested resource allocation and flow selection algorithms provide considerable performance benefits with minimal optimality gaps at a reasonable computational cost when applied to various network topologies.
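A toy version of the fuzzy route-selection idea might look like the sketch below: each QoS indicator is mapped to a [0, 1] membership and combined with a fuzzy AND. The membership thresholds are assumptions, not the paper's rule base.

```python
# Toy fuzzy-style route scorer: map each metric to a [0, 1] membership and
# combine with min (a common fuzzy AND). Thresholds are assumptions.
def membership_low(x, lo, hi):
    """1 when x <= lo, 0 when x >= hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def route_score(buffer_occupancy, battery_pct, hop_count):
    good_buffer = membership_low(buffer_occupancy, 0.3, 0.9)   # prefer empty buffers
    good_energy = membership_low(100 - battery_pct, 20, 80)    # prefer charged nodes
    good_hops = membership_low(hop_count, 2, 10)               # prefer short routes
    # Min-combination: the weakest criterion dominates.
    return min(good_buffer, good_energy, good_hops)

# route -> (buffer occupancy, battery %, hop count)
routes = {"A": (0.2, 90, 4), "B": (0.5, 60, 3), "C": (0.1, 95, 9)}
best = max(routes, key=lambda r: route_score(*routes[r]))
print("selected route:", best)  # -> A
```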
APA, Harvard, Vancouver, ISO, and other styles
36

Yakubu, Ismail Zaharaddeen, Lele Muhammed, Zainab Aliyu Musa, Zakari Idris Matinja, and Ilya Musa Adamu. "A Multi Agent Based Dynamic Resource Allocation in Fog-Cloud Computing Environment." Trends in Sciences 18, no. 22 (2021): 413. http://dx.doi.org/10.48048/tis.2021.413.

Full text
Abstract:
The high-latency limitation of the cloud has necessitated the introduction of the fog computing paradigm, which extends the computing infrastructure of cloud data centers to the edge network. Extended cloud resources provide processing, storage, and network services to time-sensitive requests associated with Internet of Things (IoT) services at the network edge. The rapid increase in the adoption of IoT devices, variations in user requirements, the limited processing and storage capacity of fog resources, and the risk of fog resource oversaturation have made the provisioning and allotment of computing resources in a fog environment a formidable task. Satisfying application and request deadlines is the most substantial challenge among the dynamically varying parameters of client requirements. To curtail these issues, an integrated fog-cloud computing environment and an efficient resource selection method are highly required. This paper proposes an agent-based dynamic resource allocation scheme that employs a host agent to analyze the QoS requirements of an application request and select a suitable execution layer. The host agent forwards the application request to a layer agent, which is responsible for allocating the best resource that satisfies the requirements of the request. The host agent and layer agents maintain resource information tables for matching tasks to computing resources. CloudSim toolkit functionalities were extended to simulate a realistic fog environment in which the proposed method is evaluated. The experimental results show that the proposed method performs better in terms of processing time, latency, and percentage of QoS delivery.
 HIGHLIGHTS
 
 The distance between the cloud infrastructure and edge IoT devices makes the cloud ill-suited to some IoT applications, especially time-sensitive ones
 To minimize cloud latency and ensure prompt responses to user requests, fog computing, which extends cloud services to the edge network, was introduced
 The proliferation of IoT devices and the limitations of fog resources have made resource scheduling in fog computing a tedious task
 
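A minimal sketch of the host agent's layer-selection logic, under the assumptions that fog access is fast, cloud access is slow, and fog capacity is scarce; all latency and capacity figures are invented for illustration.

```python
# Minimal host-agent sketch: route a request to the fog layer if its deadline
# is tight and a fog node has capacity, otherwise to the cloud.
def select_layer(deadline_ms, cpu_demand, fog_nodes,
                 fog_latency_ms=10, cloud_latency_ms=120):
    if deadline_ms < cloud_latency_ms:  # cloud round-trip alone would miss it
        for node, free_cpu in fog_nodes.items():
            if free_cpu >= cpu_demand:
                fog_nodes[node] = free_cpu - cpu_demand  # reserve capacity
                return f"fog:{node}"
        return "reject"  # no fog capacity and the cloud is too slow
    return "cloud"

fog = {"f1": 2.0, "f2": 0.5}  # node -> free CPU units (assumed)
print(select_layer(deadline_ms=50, cpu_demand=1.0, fog_nodes=fog))   # fog:f1
print(select_layer(deadline_ms=500, cpu_demand=1.0, fog_nodes=fog))  # cloud
```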
APA, Harvard, Vancouver, ISO, and other styles
37

Panayiotou, Tania, Sotirios P. Chatzis, and Georgios Ellinas. "Performance Analysis of a Data-Driven QoT Decision Approach on a Dynamic Multicast-Capable Metro Optical Network." IEEE/OSA Journal of Optical Communications and Networking 9, no. 1 (2017): 98–108. https://doi.org/10.5281/zenodo.1036413.

Full text
Abstract:
The performance of a data-driven quality-of-transmission (QoT) model is investigated on a dynamic metro optical network capable of supporting both unicast and multicast connections. The data-driven QoT technique analyzes data of previous connection requests and, through a training procedure that is performed on a neural network, returns a data-driven QoT model that near-accurately decides the QoT of the newly arriving requests. The advantages of the data-driven QoT approach over the existing Q-factor techniques are that it is self-adaptive, it is a function of data that are independent from the physical layer impairments (PLIs), eliminating the requirement of specific measurement equipment, and it does not assume the existence of a system with extensive processing and storage capabilities. Further, it is fast in processing new data and fast in finding a near-accurate QoT model provided that such a model exists. On the contrary, existing Q-factor models lack self-adaptiveness; they are a function of the PLIs, and their evaluation requires time-consuming simulations, lab experiments, specific measurement equipment, and considerable human effort. It is shown that the data-driven QoT model exhibits a high accuracy (close to 92%–95%) in determining, during the provisioning phase, whether a connection to be established has a sufficient (or insufficient) QoT, when compared with the QoT decisions performed by the Q-factor model. It is also shown that, when sufficient wavelength capacity is available in the network, the network performance is not significantly affected when the data-driven QoT model is used for the dynamic system instead of the Q-factor model, which is an indicator that the proposed approach can efficiently replace the existing Q-factor model.
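As a loose illustration of training a neural-network QoT classifier on past connection records, the sketch below uses scikit-learn on synthetic data; the features, labels, and architecture are stand-ins, not the paper's dataset or model.

```python
# Sketch of a data-driven QoT decision model: train a small neural network
# on features of past connections and classify new requests. Synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Assumed features per request: [path length (km), hop count, wavelength index].
X = rng.uniform([10, 1, 0], [500, 10, 79], size=(2000, 3))
# Synthetic ground truth: long, many-hop paths tend to fail the QoT threshold.
y = (X[:, 0] / 500 + X[:, 1] / 10 + rng.normal(0, 0.1, 2000) < 1.0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)
print("acceptable QoT?", bool(clf.predict([[120.0, 3, 40]])[0]))
```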
APA, Harvard, Vancouver, ISO, and other styles
38

Weng, Qiyan, Yijing Tang, and Hangguan Shan. "Multiuser Access Control for 360° VR Video Service Systems Exploiting Proactive Caching and Mobile Edge Computing." Applied Sciences 15, no. 8 (2025): 4201. https://doi.org/10.3390/app15084201.

Full text
Abstract:
Mobile virtual reality (VR) is considered a killer application for future mobile broadband networks. However, for cloud VR, the long content delivery path and time-varying transmission rate from the content provider’s cloud VR server to the users make the quality-of-service (QoS) provisioning for VR users very challenging. To this end, in this paper, we design a 360° VR video service system that leverages proactive caching and mobile edge computing (MEC) technologies. Furthermore, we propose a multiuser access control algorithm tailored to the system, based on analytical results of the delay violation probability, which is derived considering the impact of both the multi-hop wired network from the cloud VR server to the MEC server and the wireless network from the MEC server-connected base station (BS) to the users. The proposed access control algorithm aims to maximize the number of served users by exploiting real-time and dynamic network resources, while ensuring that the end-to-end delay violation probability for each accessed user remains within an acceptable limit. Simulation results are presented to analyze the impact of diverse system parameters on both the user access probability and the delay violation probability of the accessed users, demonstrating the effectiveness of the proposed multiuser access control algorithm. It is observed in the simulation that increasing the computing capacity of the MEC server or the communication bandwidth of the BS is one of the most effective methods to accommodate more users for the system. In the tested scenarios, when the MEC server’s computing capacity (the BS’s bandwidth) increases from 0.8 Tbps (50 MHz) to 3.2 Tbps (150 MHz), the user access probability improves on average by 92.53% (85.49%).
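The admission logic can be pictured as a greedy loop that admits users while an estimated delay-violation probability stays below a target. The violation model below is a placeholder monotone function, not the paper's queueing analysis; the capacity coefficients are invented.

```python
# Greedy admission-control sketch echoing the paper's goal: admit as many
# users as possible while each user's delay-violation probability stays
# under a target epsilon. The load model is a placeholder.
def violation_prob(n_users, bandwidth_mhz, compute_tbps):
    # Assumed monotone load model: more users -> higher violation probability.
    load = n_users / (0.5 * bandwidth_mhz + 20 * compute_tbps)
    return min(1.0, load ** 2)

def max_admitted(candidates, bandwidth_mhz, compute_tbps, epsilon=0.05):
    admitted = 0
    for _ in range(candidates):
        if violation_prob(admitted + 1, bandwidth_mhz, compute_tbps) <= epsilon:
            admitted += 1
        else:
            break
    return admitted

# More bandwidth/compute admits more users, mirroring the simulation trend.
print(max_admitted(candidates=100, bandwidth_mhz=50, compute_tbps=0.8))   # 9
print(max_admitted(candidates=100, bandwidth_mhz=150, compute_tbps=3.2))  # 31
```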
APA, Harvard, Vancouver, ISO, and other styles
39

George, A. Shaji. "Democratizing Compute Power: The Rise of Computation as a Commodity and its Impacts." Partners Universal Innovative Research Publication (PUIRP) 02, no. 03 (2024): 57–74. https://doi.org/10.5281/zenodo.11654354.

Full text
Abstract:
This paper investigates the emerging concept of Compute as a Commodity (CaaC), which promises to revolutionize business innovation by providing easy access to vast compute resources, unlocked by cloud computing. CaaC aims to treat compute like electricity or water: conveniently available for consumption on demand. The pay-as-you-go cloud model enables click-button provisioning of processing capacity, without major capital investments. Our research defines CaaC, its objectives of ubiquitous, low-cost compute, and its self-service consumption vision. We analyze the CaaC technical model, which comprises a code/data repository, automated resource discovery, and a dynamic deployment engine. Innovations like spot pricing, provider federation, and deployment automation are highlighted. Numerous CaaC benefits are studied, including heightened business agility from scalable compute, lowered costs from utilizing surplus capacity, and boosted creativity from removing innovation barriers. Despite its advantages, CaaC poses infrastructural intricacies around seamless management across environments. Our work then elucidates CaaC's transformative capacity across verticals like healthcare, banking, media, and retail. For instance, healthcare workloads around genomic sequencing, drug discovery datasets, clinical trial analytics, personalized medicine, and more can leverage CaaC's elastic resources. Financial sectors can tap scalable computing to enable real-time fraud analysis, trade insights, and security evaluations. Media production houses can parallelize rendering and animation via CaaC instead of investing in high-performance computing farms. Further CaaC innovations are on the horizon, such as edge computing for reduced-latency analytics, quantum computing for tackling complex optimizations, and serverless architectures for simplified access. In conclusion, CaaC represents an important shift in democratizing compute power, unlocking a new wave of innovation by making high-performance computing affordable and accessible. As CaaC matures, widespread adoption can transform businesses, industries, and society by accelerating digital transformation and fueling new data-driven competition. This paper serves as a primer on CaaC capabilities and provides both technological and strategic recommendations for its adoption. Further research can evaluate the societal impacts of democratized computing and guide policy decisions around data regulation, algorithmic accountability, and technology leadership in the age of CaaC.
APA, Harvard, Vancouver, ISO, and other styles
40

Kolla, Sudheer. "Serverless Computing: Transforming Application Development with Serverless Databases: Benefits, Challenges, and Future Trends." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 1 (2019): 810–19. https://doi.org/10.61841/turcomat.v10i1.15043.

Full text
Abstract:
Serverless computing has revolutionized cloud services by abstracting infrastructure management, providing developers with an environment that automatically scales to meet demand. Initially popular for compute workloads, the serverless model has since expanded into the database realm with services such as Amazon Aurora Serverless and Google Cloud Firestore. These databases offer dynamic scaling of storage and compute capacity without the need for developers to manage the underlying infrastructure. Serverless databases have transformed application development by providing a pay-per-use pricing model, which is particularly cost-effective for workloads with unpredictable or fluctuating demand. The serverless model is especially well-suited for microservices, Internet of Things (IoT) applications, and event-driven workloads. With the serverless approach, developers can focus on writing business logic, while the cloud service provider manages the infrastructure. Serverless databases eliminate the need for provisioning, scaling, or patching servers, reducing operational overhead significantly. Furthermore, the model encourages agility and cost efficiency in modern software architectures. This research explores the evolution of serverless computing into the database space, examining its benefits, challenges, and practical applications. By analysing current state-of-the-art serverless databases, we highlight the key features and functionalities of these services and explore their potential for supporting scalable, resilient, and cost-effective applications. Additionally, we evaluate the performance characteristics and limitations of serverless databases compared to traditional database management systems.
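As a hedged sketch of the developer experience the abstract describes, the snippet below queries Amazon Aurora Serverless through the RDS Data API with boto3; the cluster and secret ARNs, database, and SQL are placeholders you would supply.

```python
# Hedged sketch: query Aurora Serverless via the RDS Data API with boto3.
# The ARNs, database name, and SQL below are placeholders, not real values.
import boto3

client = boto3.client("rds-data", region_name="us-east-1")

response = client.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",          # placeholder
    secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret",   # placeholder
    database="orders",
    sql="SELECT id, total FROM orders WHERE created_at > :since",
    parameters=[{"name": "since", "value": {"stringValue": "2019-01-01"}}],
)
for record in response["records"]:
    print(record)
```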
APA, Harvard, Vancouver, ISO, and other styles
41

Runsewe, Oluwayemisi, Olajide Soji Osundare, Samuel Olaoluwa Folorunsho, and Lucy Anthony Akwawa. "Challenges and Solutions in Monitoring and Managing Cloud Infrastructure: A Site Reliability Perspective." Information Management and Computer Science 7, no. 1 (2023): 47–55. https://doi.org/10.26480/imcs.01.2024.47.55.

Full text
Abstract:
Monitoring and managing cloud infrastructure presents unique challenges from a site reliability engineering (SRE) perspective, requiring a balance between operational efficiency and system stability. As cloud environments grow in complexity, traditional monitoring tools and practices often fall short, leading to issues such as inadequate visibility, alert fatigue, and the inability to predict failures. Additionally, the dynamic nature of cloud resources complicates capacity planning and cost management. Key challenges include ensuring consistent performance across distributed systems, managing multi-cloud and hybrid cloud environments, and maintaining security and compliance in real-time. To address these challenges, SRE teams must adopt innovative solutions that leverage automation, artificial intelligence (AI), and machine learning (ML) to enhance monitoring capabilities and enable proactive incident management. Implementing observability frameworks that provide end-to-end visibility, combined with AI-driven anomaly detection, can significantly reduce response times and prevent outages. Additionally, adopting Infrastructure as Code (IaC) practices can streamline cloud management by automating provisioning, scaling, and configuration of cloud resources. By integrating these advanced tools and practices, SRE teams can better manage the complexities of modern cloud infrastructures, ensuring reliability, performance, and cost-efficiency. This review provides an overview of the key challenges faced in monitoring and managing cloud infrastructure from an SRE perspective and outlines effective solutions that can be implemented to overcome these hurdles. The discussion is crucial for organizations looking to enhance their cloud operations while maintaining a high level of reliability and efficiency.
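One concrete building block of the AI/ML-assisted monitoring discussed above is a rolling z-score detector over a metric stream, which flags outliers before they page anyone. A minimal sketch follows; the window size and threshold are assumptions.

```python
# Simple rolling z-score anomaly detector of the kind SRE teams layer onto
# metric streams to cut alert fatigue. Window and threshold are assumptions.
from collections import deque
from statistics import mean, stdev

def detect(stream, window=60, threshold=3.0):
    recent = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value  # candidate incident, not yet an alert page
        recent.append(value)

# Synthetic latency series with one injected spike at sample 200.
latency_ms = [20 + (i % 5) for i in range(200)] + [250] + [20 + (i % 5) for i in range(50)]
for idx, v in detect(latency_ms):
    print(f"anomaly at sample {idx}: {v} ms")
```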
APA, Harvard, Vancouver, ISO, and other styles
42

Ali, Qutaiba I. "Performance Optimization and Architectural Advancements in Cloud Radio Access Networks (C-RAN) for 5G and Beyond." Journal of Electronic & Information Systems 7, no. 1 (2025): 22–38. https://doi.org/10.30564/jeis.v7i1.9960.

Full text
Abstract:
As the demand for high-speed, low-latency, and energy-efficient mobile communications continues to surge with the proliferation of IoT, AR/VR, and ultra-reliable applications, traditional Distributed Radio Access Network (D-RAN) architectures face critical limitations. Cloud Radio Access Network (C-RAN) emerges as a promising alternative that centralizes baseband processing to improve scalability, resource utilization, and operational flexibility. This paper presents a comprehensive evaluation of C-RAN architecture, focusing on structural models, fronthaul technologies, and cloud-based service logic. A detailed mathematical modeling framework is developed to assess key performance indicators, including latency, spectral efficiency, energy efficiency, and fronthaul capacity. Extensive results demonstrate that C-RAN achieves up to 45% gains in energy efficiency, a 35% improvement in spectral efficiency, and latency reductions of over 40% compared to D-RAN. Additional results reveal enhanced handover success rates, better BBU pool utilization, and increased reliability, with packet loss rates reduced to under 0.5%. Despite increased fronthaul bandwidth requirements, optical solutions such as DWDM and PON mitigate the bottleneck effectively. The findings confirm that C-RAN offers a robust, scalable, and cost-efficient solution for 5G and future mobile networks, enabling dynamic resource allocation, advanced interference management, and centralized network intelligence. The paper also addresses implementation challenges, including fronthaul provisioning and security, and outlines future research directions such as virtualization, AI-driven orchestration, and edge-cloud integration to fully harness the potential of C-RAN in ultra-dense and heterogeneous network environments.
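Two of the KPIs in the paper's modeling framework can be illustrated with back-of-envelope formulas: Shannon spectral efficiency and a throughput-per-watt energy-efficiency ratio. The numbers below are illustrative stand-ins, not the paper's results.

```python
# Back-of-envelope sketch of two C-RAN KPIs; all figures are illustrative.
import math

def spectral_efficiency(snr_db: float) -> float:
    """Shannon bound in bit/s/Hz for a given SNR in dB."""
    return math.log2(1 + 10 ** (snr_db / 10))

def energy_efficiency(throughput_mbps: float, power_w: float) -> float:
    """Delivered megabits per joule."""
    return throughput_mbps / power_w

print(f"spectral efficiency @ 20 dB: {spectral_efficiency(20):.2f} bit/s/Hz")

# Centralised BBU pooling lets C-RAN serve the same load with less power,
# which shows up directly in the energy-efficiency ratio (assumed wattages).
print(f"D-RAN: {energy_efficiency(300, power_w=1000):.3f} Mb/J")
print(f"C-RAN: {energy_efficiency(300, power_w=690):.3f} Mb/J")  # ~45% better
```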
APA, Harvard, Vancouver, ISO, and other styles
43

G.S., Hari Priya, Janani P.R., and J. Nagavardhan Reddy. "Exploring AWS S3: In-Depth Analysis of Bucket Management." Journal of Sensor and Cloud Computing 1, no. 2 (2024): 1–5. http://dx.doi.org/10.46610/joscc.2024.v01i02.001.

Full text
Abstract:
Amazon Simple Storage Service (S3) is a cornerstone of modern data management strategies within Amazon Web Services (AWS). It is pivotal in providing scalable, durable, and accessible object storage infrastructure. A key advantage of S3 is its scalability, enabling seamless storage capacity expansion without upfront investments or provisioning delays. This flexibility empowers organizations to accommodate dynamic data growth while maintaining operational agility, which is crucial in today's rapidly evolving digital landscape. S3's exceptional durability, boasting an impressive 99.999999999% reliability, ensures steadfast data integrity and resilience against potential failures, mitigating concerns about data loss and bolstering confidence in managing mission-critical information effectively. Accessibility is another crucial aspect, with robust access controls and APIs facilitating secure data retrieval, storage, and management from any location. This accessibility fosters collaboration and innovation by enabling seamless data interaction and utilization across geographical boundaries. S3's integration capabilities with other AWS services enhance operational efficiency and optimize data analytics workflows. It is a foundational element in data analytics, facilitating data ingestion, storage, and processing to derive actionable insights from diverse datasets. Furthermore, S3's versatility extends beyond storage, supporting applications such as content delivery, backup and archiving, disaster recovery, and machine learning model training. This versatility drives innovation across industries and underscores S3's role in driving digital transformation and fostering a culture rooted in data-driven decision-making. In conclusion, Amazon S3's unparalleled scalability, durability, and accessibility make it indispensable for organizations navigating the data-centric landscape of modern business environments, underlining the imperative of harnessing its full potential for organizational success.
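A minimal boto3 sketch of routine S3 bucket management (create a bucket, upload an object, list contents); the bucket name and region are placeholders.

```python
# Minimal boto3 sketch of everyday S3 bucket management; the bucket name and
# region below are placeholders.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

s3.create_bucket(
    Bucket="example-analytics-bucket",  # bucket names must be globally unique
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_object(Bucket="example-analytics-bucket",
              Key="raw/2024/events.json",
              Body=b'{"event": "signup"}')

# List what the bucket now holds.
for obj in s3.list_objects_v2(Bucket="example-analytics-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```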
APA, Harvard, Vancouver, ISO, and other styles
44

Río, Alberto del, Javier Serrano, David Jimenez, Luis M. Contreras, and Federico Alvarez. "A Deep Reinforcement Learning Quality Optimization Framework for Multimedia Streaming over 5G Networks." Applied Sciences 12, no. 20 (2022): 10343. http://dx.doi.org/10.3390/app122010343.

Full text
Abstract:
Media applications are amongst the most demanding services. They require high amounts of network capacity as well as computational resources for synchronous high-quality audio–visual streaming. Recent technological advances in the domain of new generation networks, specifically network virtualization and Multiaccess Edge Computing (MEC) have unlocked the potential of the media industry. They enable high-quality media services through dynamic and efficient resource allocation taking advantage of the flexibility of the layered architecture offered by 5G. The presented work demonstrates the potential application of Artificial Intelligence (AI) capabilities for multimedia services deployment. The goal was targeted to optimize the Quality of Experience (QoE) of real-time video using dynamic predictions by means of Deep Reinforcement Learning (DRL) algorithms. Specifically, it contains the initial design and test of a self-optimized cloud streaming proof-of-concept. The environment is implemented through a virtualized end-to-end architecture for multimedia transmission, capable of adapting streaming bitrate based on a set of actions. A prediction algorithm is trained through different state conditions (QoE, bitrate, encoding quality, and RAM usage) that serves the optimizer as the encoding values of the environment for action prediction. Optimization is applied by selecting the most suitable option from a set of actions. These consist of a collection of predefined network profiles with associated bitrates, which are validated by a list of reward functions. The optimizer is built employing the most prominent algorithms in the DRL family, with the use of two Neural Networks (NN), named Advantage Actor–Critic (A2C). As a result of its application, the ratio of good quality video segments increased from 65% to 90%. Furthermore, the number of image artifacts is reduced compared to standard sessions without applying intelligent optimization. From these achievements, the global QoE obtained is clearly better. These results, based on a simulated scenario, increase the interest in further research on the potential of applying intelligence to enhance the provisioning of media services under real conditions.
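The paper's optimizer is an A2C deep reinforcement learner; as a deliberately simplified stand-in, the sketch below uses a tabular, bandit-style Q update to pick a bitrate profile per buffer state. The states, actions, and reward function are invented for illustration.

```python
# Simplified stand-in for the paper's A2C optimizer: a tabular, bandit-style
# Q update that learns a bitrate profile per buffer state. Values are invented.
import random

ACTIONS = [1.0, 2.5, 5.0]                     # Mbps profiles (assumed)
STATES = ["low_buf", "mid_buf", "high_buf"]   # coarse buffer states (assumed)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, bitrate):
    quality = bitrate                                   # higher bitrate, higher QoE
    stall_penalty = 6.0 if (state == "low_buf" and bitrate == 5.0) else 0.0
    return quality - stall_penalty                      # stalls dominate QoE loss

rng = random.Random(0)
for episode in range(5000):
    s = rng.choice(STATES)
    # Epsilon-greedy action selection (20% exploration).
    a = rng.choice(ACTIONS) if rng.random() < 0.2 else max(ACTIONS, key=lambda x: Q[(s, x)])
    Q[(s, a)] += 0.1 * (reward(s, a) - Q[(s, a)])       # one-step bandit update

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]), "Mbps")
```

The learned policy throttles bitrate only when the buffer is low, which is the same trade-off the A2C agent navigates with its reward functions.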
APA, Harvard, Vancouver, ISO, and other styles
45

Mazhar, Tehseen, Muhammad Amir Malik, Syed Agha Hassnain Mohsan, et al. "Quality of Service (QoS) Performance Analysis in a Traffic Engineering Model for Next-Generation Wireless Sensor Networks." Symmetry 15, no. 2 (2023): 513. http://dx.doi.org/10.3390/sym15020513.

Full text
Abstract:
Quality of Service (QoS) refers to techniques that enable a network to dependably run high-priority applications and traffic even when the network's capacity is limited. It is expected that data transmission over next-generation Wireless Sensor Networks (WSNs), 5G (5th generation) and beyond, will increase significantly, especially for multimedia content such as video. Installing multiple IoT (Internet of Things) nodes on top of 5G networks makes the design more challenging. Maintaining a minimal level of service quality becomes more challenging as data volume and network density rise. QoS is critical in modern networks because it ensures critical performance metrics and improves the end-user experience. Every client attempts to fulfill QoS access needs by selecting the optimal access device(s). Controllers then identify optimal routes to meet clients' core QoS needs in the core network. QoS-aware delivery is one of the most important aspects of wireless communications. Various models have been proposed in the literature; however, an adaptive buffer size according to service type, priority, and incoming communication requests is required to ensure QoS-aware wireless communication. This article offers a hybrid end-to-end QoS delivery method involving clients and controllers and proposes a QoS-aware service delivery model for various types of communication with an adaptive buffer size according to the priority of the incoming service requests. For this purpose, this paper evaluates various QoS delivery models devised for real-time service delivery over IP networks. Multiple vulnerabilities are outlined that weaken QoS delivery in different models. Performance optimization is needed to ensure QoS delivery in next-generation WSN networks. This paper addresses the shortcomings of the existing service delivery models for real-time communication. An efficient queuing mechanism is adopted that assigns priorities based on input data type and queue length. This queuing mechanism ensures QoS efficiency in limited-bandwidth networks with real-time traffic. The model reduces the over-provisioning of resources, delay, and packet loss ratio. The paper contributes a symmetrically designed traffic engineering model for QoS-ensured service delivery in next-generation WSNs. A dynamic queuing mechanism that assigns priorities based on input data type and queue length is proposed to ensure QoS for wireless next-generation networks. The proposed queuing mechanism exploits topological symmetry to ensure QoS efficiency in limited-bandwidth networks with real-time communication. The experimental results show that the proposed model reduces the over-provisioning of resources, delay, and packet loss ratio.
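The adaptive priority queuing described above can be sketched with a heap keyed on traffic class and queue depth, so that a class's effective priority degrades as its backlog grows and bulk traffic cannot starve interactive flows. The class weights and degradation rule are assumptions, not the paper's exact mechanism.

```python
# Sketch of priority queuing keyed on traffic class and queue depth; the
# class weights and the depth-degradation rule are assumptions.
import heapq
import itertools

CLASS_PRIORITY = {"video": 0, "voice": 0, "web": 1, "bulk": 2}  # lower = served first

class QosQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a priority
        self._depth = {c: 0 for c in CLASS_PRIORITY}

    def enqueue(self, traffic_class, packet):
        # Effective priority worsens by one level per 100 queued packets,
        # so a flooding class gradually loses its advantage.
        effective = CLASS_PRIORITY[traffic_class] + self._depth[traffic_class] // 100
        self._depth[traffic_class] += 1
        heapq.heappush(self._heap, (effective, next(self._counter), traffic_class, packet))

    def dequeue(self):
        _, _, traffic_class, packet = heapq.heappop(self._heap)
        self._depth[traffic_class] -= 1
        return traffic_class, packet

q = QosQueue()
for i in range(3):
    q.enqueue("bulk", f"b{i}")
q.enqueue("voice", "v0")
print(q.dequeue())  # ('voice', 'v0') jumps ahead of the queued bulk packets
```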
APA, Harvard, Vancouver, ISO, and other styles
46

Serôdio, Carlos, José Cunha, Guillermo Candela, Santiago Rodriguez, Xosé Ramón Sousa, and Frederico Branco. "The 6G Ecosystem as Support for IoE and Private Networks: Vision, Requirements, and Challenges." Future Internet 15, no. 11 (2023): 348. http://dx.doi.org/10.3390/fi15110348.

Full text
Abstract:
The emergence of the sixth generation of cellular systems (6G) signals a transformative era and ecosystem for mobile communications, driven by demands from technologies like the internet of everything (IoE), V2X communications, and factory automation. To support this connectivity, mission-critical applications are emerging with challenging network requirements. The primary goals of 6G include providing sophisticated and high-quality services, extremely reliable and further-enhanced mobile broadband (feMBB), low-latency communication (ERLLC), long-distance and high-mobility communications (LDHMC), ultra-massive machine-type communications (umMTC), extremely low-power communications (ELPC), holographic communications, and quality of experience (QoE), grounded in incorporating massive broad-bandwidth machine-type (mBBMT), mobile broad-bandwidth and low-latency (MBBLL), and massive low-latency machine-type (mLLMT) communications. In attaining its objectives, 6G faces challenges that demand inventive solutions, incorporating AI, softwarization, cloudification, virtualization, and slicing features. Technologies like network function virtualization (NFV), network slicing, and software-defined networking (SDN) play pivotal roles in this integration, which facilitates efficient resource utilization, responsive service provisioning, expanded coverage, enhanced network reliability, increased capacity, densification, heightened availability, safety, security, and reduced energy consumption. It presents innovative network infrastructure concepts, such as resource-as-a-service (RaaS) and infrastructure-as-a-service (IaaS), featuring management and service orchestration mechanisms. This includes nomadic networks, AI-aware networking strategies, and dynamic management of diverse network resources. This paper provides an in-depth survey of the wireless evolution leading to 6G networks, addressing future issues and challenges associated with 6G technology to support V2X environments, considering the challenges these present in architecture, spectrum, air interface, reliability, availability, density, flexibility, mobility, and security.
APA, Harvard, Vancouver, ISO, and other styles
47

Reddy, Anurag, Anil Naik, and Sandeep Reddy. "Optimizing Expansions at Data Center." European Journal of Advances in Engineering and Technology 9, no. 11 (2022): 52–58. https://doi.org/10.5281/zenodo.10889988.

Full text
Abstract:
This paper proposes a novel approach to data center expansion planning and execution, aiming to streamline the process and enhance responsiveness to dynamic compute demands. The traditional model, involving Commercial Readiness, Cage Readiness, Network HW Readiness, and Compute Provisioning, often leads to long lead times and high variability in short-term planning. The suggested paradigm introduces an intermediate "Medium Term Planning" phase to bridge the gap between site preparation and short-term demand, dissociating Cage/Network Readiness from compute expansions. Key components of the proposed model include a Colocation Model for cage readiness forecasting, a Compute Model for prioritizing Network Readiness work orders, and a Heat Map to trigger server execution based on CPU utilization. The Compute Execution Trigger identifies three factors influencing server deployment lead time, facilitating a Just-in-Time approach. Additionally, a Network Readiness Heat Map is proposed to monitor the readiness headroom for server deployments. The benefits of this approach include mitigating cage/network bottlenecks ahead of compute demands, reducing server deployment lead time, and enabling a reactive capacity expansion strategy. A Total Cost of Ownership (TCO) analysis highlights the advantages of a Just-in-Time strategy over traditional batch-based deployments. To implement this model, a recommended ticket structure is proposed, emphasizing a dedicated BOM for Cage/Network Readiness after Commercial closure. The BOM Scope in Network Readiness is designed to achieve a zero-touch BOM review for server expansions, further reducing lead time for the first compute batch. Overall, this paradigm shift in data center expansion planning and execution promises increased agility, reduced deployment lead times, and improved responsiveness to evolving compute demands.
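The Heat Map trigger can be pictured as a simple rule: order servers just in time when utilisation runs hot and the remaining network-ready headroom would be consumed within the order lead time. All thresholds and figures below are assumptions, not the paper's calibrated values.

```python
# Sketch of a Heat Map-style just-in-time trigger: order servers when
# sustained CPU utilisation erodes headroom faster than the lead time allows.
def should_order_servers(cpu_utilisation_history, headroom_racks,
                         threshold=0.75, lead_time_weeks=6, growth_per_week=0.5):
    """Order when current hot usage plus projected growth outruns ready racks."""
    hot = sum(u > threshold for u in cpu_utilisation_history[-4:]) >= 3  # 3 of last 4 weeks hot
    projected_need = growth_per_week * lead_time_weeks                   # racks consumed during lead time
    return hot and headroom_racks < projected_need

history = [0.62, 0.71, 0.78, 0.81, 0.79]  # weekly average CPU utilisation
print(should_order_servers(history, headroom_racks=2.0))  # True: hot and low headroom
```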
APA, Harvard, Vancouver, ISO, and other styles
48

Mikhailova, Elena A., Hamdi A. Zurqani, Christopher J. Post, Mark A. Schlautman, and Gregory C. Post. "Soil Diversity (Pedodiversity) and Ecosystem Services." Land 10, no. 3 (2021): 288. http://dx.doi.org/10.3390/land10030288.

Full text
Abstract:
Soil ecosystem services (ES) (e.g., provisioning, regulation/maintenance, and cultural) and ecosystem disservices (ED) are dependent on soil diversity/pedodiversity (variability of soils), which needs to be accounted for in the economic analysis and business decision-making. The concept of pedodiversity (biotic + abiotic) is highly complex and can be broadly interpreted because it is formed from the interaction of atmospheric diversity (abiotic + biotic), biodiversity (biotic), hydrodiversity (abiotic + biotic), and lithodiversity (abiotic) within ecosphere and anthroposphere. Pedodiversity is influenced by intrinsic (within the soil) and extrinsic (outside soil) factors, which are also relevant to ES/ED. Pedodiversity concepts and measures may need to be adapted to the ES framework and business applications. Currently, there are four main approaches to analyze pedodiversity: taxonomic (diversity of soil classes), genetic (diversity of genetic horizons), parametric (diversity of soil properties), and functional (soil behavior under different uses). The objective of this article is to illustrate the application of pedodiversity concepts and measures to value ES/ED with examples based on the contiguous United States (U.S.), its administrative units, and the systems of soil classification (e.g., U.S. Department of Agriculture (USDA) Soil Taxonomy, Soil Survey Geographic (SSURGO) Database). This study is based on a combination of original research and literature review examples. Taxonomic pedodiversity in the contiguous U.S. exhibits high soil diversity, with 11 soil orders, 65 suborders, 317 great groups, 2026 subgroups, and 19,602 series. The ranking of “soil order abundance” (area of each soil order within the U.S.) expressed as the proportion of the total area is: (1) Mollisols (27%), (2) Alfisols (17%), (3) Entisols (14%), (4) Inceptisols and Aridisols (11% each), (5) Spodosols (3%), (6) Vertisols (2%), and (7) Histosols and Andisols (1% each). Taxonomic, genetic, parametric, and functional pedodiversity are an essential context for analyzing, interpreting, and reporting ES/ED within the ES framework. Although each approach can be used separately, three of these approaches (genetic, parametric, and functional) fall within the “umbrella” of taxonomic pedodiversity, which separates soils based on properties important to potential use. Extrinsic factors play a major role in pedodiversity and should be accounted for in ES/ED valuation based on various databases (e.g., National Atmospheric Deposition Program (NADP) databases). Pedodiversity is crucial in identifying soil capacity (pedocapacity) and “hotspots” of ES/ED as part of business decision making to provide more sustainable use of soil resources. Pedodiversity is not a static construct but is highly dynamic, and various human activities (e.g., agriculture, urbanization) can lead to soil degradation and even soil extinction.
APA, Harvard, Vancouver, ISO, and other styles
49

Pahl-Wostl, Claudia, and James J. Patterson. "Commentary: Transformative Change in Governance Systems: A conceptual framework for analysing adaptive capacity and multi-level learning processes in resource governance regimes." Global Environmental Change 71 (November 3, 2021): 102405. https://doi.org/10.1016/j.gloenvcha.2021.102405.

Full text
Abstract:
Most environmental crises have their origin in governance failures. Resource management paradigms that focus on exploitation of provisioning ecosystem services are ill suited to deal with the complexity of human-environment interactions. They often have rendered human-environment systems vulnerable to shocks and crises. The need for profound paradigm shifts and transformative change of governance systems was thus advocated during the first decade of this millennium, when prospects of global and climate change became increasingly core matters of concern. Environmental governance was still a young research field. Comprehensive frameworks capturing the complex dynamics and structures of governance systems were lacking. The paper "A conceptual framework for analysing adaptive capacity and multi-level learning processes in resource governance regimes" (Pahl-Wostl, 2009) proved to be a major step in addressing these gaps.
APA, Harvard, Vancouver, ISO, and other styles
50

Borowiecka, Malgorzata, Mar Alcaraz, and Marisol Manzano. "An Application of the Ecosystem Services Assessment Approach to the Provision of Groundwater for Human Supply and Aquifer Management Support." Hydrology 12, no. 6 (2025): 137. https://doi.org/10.3390/hydrology12060137.

Full text
Abstract:
Increasing pressures on groundwater in recent decades have led to a deterioration in the quality of groundwater for human consumption around the world. Beyond the essential evaluation of groundwater dynamics and quality, analyzing the situation from the perspective of the Ecosystem Services Assessment (ESA) approach can be useful to support aquifer management plans aiming to recover aquifers' capacity to provide good quality water. This work illustrates how to implement the ESA using groundwater flow and nitrate transport modelling for evaluating future trends of the provisioning service Groundwater of Good Quality for Human Supply. It has been applied to the Medina del Campo Groundwater Body (Spain), where the intensification of agricultural activities and groundwater exploitation since the 1970s caused severe nitrate pollution. Nitrate status and future trends under different fertilizer and aquifer exploitation scenarios were modelled with MT3DMS coupled to a MODFLOW model calibrated with piezometric time series. Historical land use and fertilizer data were compiled to assess nitrogen loadings. Notwithstanding the model's uncertainties, the results clearly show that: (i) managing fertilizer loads is more effective than managing aquifer exploitation; and (ii) only the cessation of nitrogen application by the year 2030 would improve the evaluated provisioning service in the long term. The study illustrates how the ESA can be incorporated to evaluate the expected relative impact of different management actions aimed at improving significant groundwater services to humans.
APA, Harvard, Vancouver, ISO, and other styles