Journal articles on the topic 'Datacenter modell'


Consult the top 27 journal articles for your research on the topic 'Datacenter modell.'


1

Zhao, Mengmeng, and Xiaoying Wang. "A Synthetic Approach for Datacenter Power Consumption Regulation towards Specific Targets in Smart Grid Environment." Energies 14, no. 9 (May 2, 2021): 2602. http://dx.doi.org/10.3390/en14092602.

Abstract:
With the large-scale grid connection of renewable energy sources, the frequency stability problem of the power system has become increasingly prominent. At the same time, the development of cloud computing and its applications has drawn attention to the high energy consumption of datacenters. The high power consumption and high flexibility of datacenters can therefore be exploited to respond to demand response signals from the smart grid and help maintain the stability of the power system. Specifically, this paper establishes a synthetic model that integrates multiple methods to precisely control and regulate the power consumption of the datacenter while minimizing the total adjustment cost. First, according to the overall characteristics of the datacenter, power consumption models of the servers and cooling systems are established. Second, by controlling the temperature, different kinds of energy storage devices, load characteristics and server characteristics, the working processes of the various regulation methods and the corresponding adjustment cost models are obtained. Then, the cost and penalty of each power regulation method are incorporated. Finally, the proposed dynamic synthetic approach is used to adjust the power consumption of the datacenter accurately at the least adjustment cost. Comparative analysis of the evaluation experiments shows that the proposed approach regulates the power consumption of the datacenter at a lower adjustment cost than alternative methods.
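The abstract above combines a server power model with tracking of a grid-side target. As a rough illustration of the idea (not the paper's synthetic model), the sketch below uses a common linear server power model and picks the number of active servers whose consolidated power lands closest to a demand-response target; all parameter values are invented.

```python
def server_power(p_idle, p_peak, utilization):
    """Linear server power model: idle power plus a utilization-proportional part."""
    return p_idle + (p_peak - p_idle) * utilization

def regulate_to_target(n_servers, p_idle, p_peak, utilization, target):
    """Pick how many servers stay active so that total power, after consolidating
    the same total load onto them, lands closest to the grid target.
    Returns (active_servers, total_power)."""
    best = None
    for active in range(n_servers + 1):
        if active == 0:
            power = 0.0
        else:
            u = min(1.0, utilization * n_servers / active)  # consolidated load per server
            power = active * server_power(p_idle, p_peak, u)
        if best is None or abs(power - target) < abs(best[1] - target):
            best = (active, power)
    return best
```

For example, 10 servers at 30% load with a 1200 W target settle on 7 active servers at about 1150 W; a real regulator would additionally weigh the adjustment cost of each knob, as the paper does.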
2

Da Silva, Luana Barreto, Leonardo Henrique da Silva Bomfim, George Leite Junior, and Marcelino Nascimento De Oliveira. "TI Verde: Uma Proposta de Economia Energética usando Virtualização." Interfaces Científicas - Exatas e Tecnológicas 1, no. 2 (May 28, 2015): 57–74. http://dx.doi.org/10.17564/2359-4942.2015v1n2p57-74.

Abstract:
Information Technology (IT) is one of the main contributors to environmental problems, and reducing the cost and maintenance of datacenters is a challenge IT managers must overcome. To promote the use of computing resources in an efficient and less environmentally harmful way, Green IT proposes sustainable ways to support a datacenter. One of these is datacenter virtualization, in which one physical server hosts several virtual ones, each working as a single server. It is important to analyze the viability of keeping a virtualized datacenter through an analysis of server availability: while virtualization enables a cost reduction, it can also make the system more susceptible to downtime. This work analyzes the availability of two environments, one with a virtualized server and the other with non-virtualized servers. The services offered are e-mail, DNS, Web Server and File Server, a typical scenario in many companies. A case study is developed using analytical modeling with Fault Trees and Markov Chains: the Fault Tree models the servers, and Markov Chains model the behavior of each hardware and software component. The non-virtualized environment is composed of four servers, each providing specific services, while the virtualized one consists of a single server with four virtual machines, each providing a service. The results show that although the non-virtualized system has less downtime, because it has less dependence between the services, the difference, 0.06% annually, becomes irrelevant when compared to the benefits brought by virtualization.
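The availability comparison described here rests on standard repairable-system results. A minimal sketch of the underlying arithmetic, assuming a two-state (up/down) Markov model per component and a series structure (the system fails if any part fails, a fault-tree OR over failures); the MTTF/MTTR figures in the usage example are illustrative, not the paper's.

```python
def steady_state_availability(mttf, mttr):
    """Two-state (up/down) Markov chain in steady state: A = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

def series_availability(availabilities):
    """Series structure: all components must be up, so availabilities multiply."""
    a = 1.0
    for x in availabilities:
        a *= x
    return a

def annual_downtime_hours(availability):
    """Expected downtime over a year of 8760 hours."""
    return (1.0 - availability) * 8760.0
```

With MTTF = 1000 h and MTTR = 10 h per server, four independent servers in series are less available than one, which is the kind of trade-off the paper quantifies for the virtualized case.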
3

Osman, Ahmed, Assim Sagahyroon, Raafat Aburukba, and Fadi Aloul. "Optimization of energy consumption in cloud computing datacenters." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 1 (February 1, 2021): 686. http://dx.doi.org/10.11591/ijece.v11i1.pp686-698.

Abstract:
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters that have substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenters using energy-efficient allocation of tasks to resources. It develops formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers' CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of generic ILP and 0-1 Boolean satisfiability (SAT)-based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models save up to 38% in energy consumption compared to common techniques such as round robin. Furthermore, results also show that generic ILP solvers outperform SAT-based ILP solvers, especially as the number of tasks and resources grows.
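The formulation described above can be illustrated with a toy brute-force search over 0-1 assignments, standing in for the ILP solver. The objective mirrors the abstract (active-core power plus idle-core power, one core per task, a per-core capacity limit), but the numbers are invented and exhaustive search only works at toy scale.

```python
from itertools import product

def min_power_schedule(task_loads, core_capacity, n_cores, p_active, p_idle):
    """Exhaustive stand-in for the ILP: try every 0-1 task-to-core assignment and
    keep the feasible one that minimizes active-core plus idle-core power."""
    best_cost, best_assign = None, None
    for assign in product(range(n_cores), repeat=len(task_loads)):
        load = [0] * n_cores
        for t, c in zip(task_loads, assign):
            load[c] += t
        if any(l > core_capacity for l in load):  # capacity constraint
            continue
        active = sum(1 for l in load if l > 0)
        cost = active * p_active + (n_cores - active) * p_idle
        if best_cost is None or cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign
```

Three tasks of load 4, 4 and 2 on three cores of capacity 10 all fit on one core, so only one core pays the active price; a real ILP solver explores the same space implicitly via branch-and-bound.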
4

Lu, Yao, John Panneerselvam, Lu Liu, and Yan Wu. "RVLBPNN: A Workload Forecasting Model for Smart Cloud Computing." Scientific Programming 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/5635673.

Abstract:
Given the increasing deployments of Cloud datacentres and the excessive usage of server resources, their associated energy and environmental implications are increasing at an alarming rate. Cloud service providers are under immense pressure to reduce both significantly in order to promote green computing. Maintaining the desired level of Quality of Service (QoS) without violating the Service Level Agreement (SLA), whilst attempting to reduce the usage of datacentre resources, is an obvious challenge for Cloud service providers. Scaling the level of active server resources in accordance with the predicted incoming workloads is one possible way of reducing the undesirable energy consumption of the active resources without affecting performance quality. To this end, this paper analyzes the dynamic characteristics of Cloud workloads and defines a hierarchy for their latency sensitivity levels. Further, a novel workload prediction model for energy-efficient Cloud Computing is proposed, named RVLBPNN (Rand Variable Learning Rate Backpropagation Neural Network), based on the BPNN (Backpropagation Neural Network) algorithm. Experiments evaluating the prediction accuracy of the proposed model demonstrate that RVLBPNN achieves a considerably better prediction accuracy than the HMM and Naïve Bayes Classifier models.
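A minimal sketch of the "random variable learning rate" idea behind RVLBPNN, assuming a plain one-hidden-layer backpropagation network in pure Python with the learning rate redrawn at random each epoch. This is a didactic toy on a synthetic series, not the authors' architecture or hyperparameters.

```python
import math
import random

def train_rvlbpnn(series, window=3, hidden=4, epochs=500, seed=1):
    """One-hidden-layer sigmoid network trained by backpropagation, with the
    learning rate redrawn each epoch. Input: a series scaled to [0, 1];
    returns a predictor for the next value given the last `window` values."""
    rng = random.Random(seed)
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(window)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    samples = [(series[i:i + window], series[i + window])
               for i in range(len(series) - window)]

    def forward(x):
        h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
             for j in range(hidden)]
        return h, sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)

    for _ in range(epochs):
        lr = rng.uniform(0.05, 0.5)           # the 'random variable' learning rate
        for x, y in samples:
            h, out = forward(x)
            d_out = (out - y) * out * (1.0 - out)
            for j in range(hidden):
                d_h = d_out * w2[j] * h[j] * (1.0 - h[j])
                w2[j] -= lr * d_out * h[j]
                for i in range(window):
                    w1[j][i] -= lr * d_h * x[i]
                b1[j] -= lr * d_h
            b2 -= lr * d_out
    return lambda x: forward(x)[1]
```

On an alternating workload series the trained toy net predicts a higher next value after a low reading than after a high one, which is the qualitative behavior a workload forecaster needs.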
5

Alouf, Sara, and Alain Jean-Marie. "Short-Scale Stochastic Solar Energy Models: A Datacenter Use Case." Mathematics 8, no. 12 (November 27, 2020): 2127. http://dx.doi.org/10.3390/math8122127.

Abstract:
Modeling the amount of solar energy received by a photovoltaic panel is an essential part of green IT research. The specific motivation of this work is the management of the energy consumption of large datacenters. We propose a new stochastic model for the solar irradiance that features minute-scale variations and is therefore suitable for short-term control of performances. Departing from previous models, we use a weather-oriented classification of days obtained from past observations to parameterize the solar source. We demonstrate through extensive simulations, using real workloads, that our model outperforms the existing ones in predicting performance metrics related to energy storage.
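The paper's model is parameterized by weather-based day classes. The toy below imitates only that structure (it is not the published model): a clear-sky sine envelope modulated by a mean-reverting attenuation process whose mean and volatility depend on a hypothetical day class; every constant is an assumption for illustration.

```python
import math
import random

def simulate_irradiance(day_class, minutes=1440, seed=7):
    """Minute-scale irradiance (W/m^2): a clear-sky sine envelope times an AR(1)
    cloud-attenuation factor whose parameters depend on the day class."""
    params = {'clear': (0.95, 0.02), 'mixed': (0.60, 0.15), 'overcast': (0.25, 0.05)}
    mean_att, sigma = params[day_class]
    rng = random.Random(seed)
    att, out = mean_att, []
    for m in range(minutes):
        # Daylight between 6:00 (minute 360) and 18:00 (minute 1080).
        env = max(0.0, math.sin(math.pi * (m - 360) / 720.0)) * 1000.0
        att += 0.1 * (mean_att - att) + rng.gauss(0.0, sigma)  # mean-reverting clouds
        att = min(1.0, max(0.0, att))
        out.append(env * att)
    return out
```

A short-term energy-storage controller would consume such a minute-scale trace directly, which is why the minute granularity matters in the paper's use case.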
6

Kumar, Jitendra, and Ashutosh Kumar Singh. "Cloud datacenter workload estimation using error preventive time series forecasting models." Cluster Computing 23, no. 2 (October 26, 2019): 1363–79. http://dx.doi.org/10.1007/s10586-019-03003-2.

7

Bliedy, Doaa, Sherif Mazen, and Ehab Ezzat. "Datacentre Total Cost of Ownership (TCO) Models: A Survey." International Journal of Computer Science, Engineering and Applications 8, no. 2/3/4 (August 30, 2018): 47–62. http://dx.doi.org/10.5121/ijcsea.2018.8404.

8

Berenjian, Golnaz, Homayun Motameni, Mehdi Golsorkhtabaramiri, and Ali Ebrahimnejad. "Distribution slack allocation algorithm for energy aware task scheduling in cloud datacenters." Journal of Intelligent & Fuzzy Systems 41, no. 1 (August 11, 2021): 251–72. http://dx.doi.org/10.3233/jifs-201696.

Abstract:
Regarding the ever-increasing development of data and computational centers, driven by the contribution of high-performance computing systems to such sectors, energy consumption has always been of great importance due to CO2 emissions that can have adverse effects on the environment. In recent years, notions such as "energy" and "Green Computing" have played crucial roles in scheduling parallel tasks in datacenters. Duplication and clustering strategies, as well as Dynamic Voltage and Frequency Scaling (DVFS) techniques, have focused on reducing energy consumption and optimizing performance parameters. Concerning the scheduling of a Directed Acyclic Graph (DAG) on datacenter processors equipped with DVFS, this paper proposes an energy- and time-aware algorithm based on dual-phase scheduling, called EATSDCDD, which combines duplication and clustering strategies with the distribution of slack time among the tasks of a cluster. DVFS and the control procedures of the proposed green system are mapped into Petri net-based models, which contribute to designing a multiple decision process. In the first phase, an intelligent combination of the duplication and clustering strategies runs the immediate tasks of the DAG while monitoring throughput, concentrating on reducing the makespan and the energy consumed in the processors. The main idea of the proposed algorithm is to achieve the maximum reduction in energy consumption in the second phase, where slack time is distributed among non-critical dependent tasks. Additionally, we cover negotiation between consumers and service providers at the rate of μ based on a Green Service Level Agreement (GSLA) to achieve higher energy savings. Finally, a set of data established for the experiments and different parameters of the constructed random DAGs are assessed to examine the efficiency of the proposed algorithm. The obtained results confirm that our algorithm outperforms the other algorithms considered in this study.
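The second-phase idea of spreading slack and lowering frequency can be sketched with the usual DVFS approximation that dynamic power scales with f². Slack shares proportional to execution time are an assumption for illustration, not the EATSDCDD distribution rule.

```python
def distribute_slack(exec_times, slack):
    """Give each non-critical task a slack share proportional to its execution
    time, then slow it so it exactly fills its widened window. With dynamic
    power ~ f^2, energy per task at relative frequency f is f^2 * (t / f) = f * t.
    Returns (relative_frequencies, energy_before, energy_after)."""
    total = sum(exec_times)
    shares = [slack * t / total for t in exec_times]
    freqs = [t / (t + s) for t, s in zip(exec_times, shares)]
    energy_before = sum(exec_times)                      # f = 1 for every task
    energy_after = sum(f * t for f, t in zip(freqs, exec_times))
    return freqs, energy_before, energy_after
```

Two tasks of 4 and 6 time units sharing 5 units of slack both drop to two-thirds frequency, cutting the modeled dynamic energy by a third; deadlines are met because each task still finishes inside its extended window.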
9

Jiang, Lili, Xiaolin Chang, Runkai Yang, Jelena Mišić, and Vojislav B. Mišić. "Model-Based Comparison of Cloud-Edge Computing Resource Allocation Policies." Computer Journal 63, no. 10 (July 1, 2020): 1564–83. http://dx.doi.org/10.1093/comjnl/bxaa062.

Abstract:
Abstract The rapid and widespread adoption of internet of things-related services advances the development of the cloud-edge framework, including multiple cloud datacenters (CDCs) and edge micro-datacenters (EDCs). This paper aims to apply analytical modeling techniques to assess the effectiveness of cloud-edge computing resource allocation policies from the perspective of improving the performance of cloud-edge service. We focus on two types of physical device (PD)-allocation policies that define how to select a PD from a CDC/EDC for service provision. The first is randomly selecting a PD, denoted as RandAvail. The other is denoted as SEQ, in which an available idle PD is selected to serve client requests only after the waiting queues of all busy PDs are full. We first present the models in the case of an On–Off request arrival process and verify the approximate accuracy of the proposed models through simulations. Then, we apply analytical models for comparing RandAvail and SEQ policies, in terms of request rejection probability and mean response time, under various system parameter settings.
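A toy slotted simulation of the two PD-allocation policies named above, under assumptions not taken from the paper (one arrival per slot, Bernoulli service, finite per-PD queues), just to make the RandAvail-versus-SEQ distinction concrete.

```python
import random

def simulate(policy, n_pd=4, queue_cap=2, slots=2000, service_p=0.5, seed=3):
    """One request arrives per slot; each non-empty PD completes its head request
    with probability service_p. 'rand' sends the arrival to a random PD with
    room; 'seq' keeps feeding busy PDs until all their queues are full before
    waking an idle PD. Returns the fraction of rejected requests."""
    rng = random.Random(seed)
    queues = [0] * n_pd
    rejected = 0
    for _ in range(slots):
        for i in range(n_pd):                         # service completions
            if queues[i] > 0 and rng.random() < service_p:
                queues[i] -= 1
        if policy == 'rand':
            options = [i for i in range(n_pd) if queues[i] < queue_cap]
        else:                                         # 'seq'
            busy = [i for i in range(n_pd) if 0 < queues[i] < queue_cap]
            options = busy or [i for i in range(n_pd) if queues[i] == 0]
        if options:
            queues[rng.choice(options)] += 1          # admit the arrival
        else:
            rejected += 1
    return rejected / slots
```

The paper derives the same rejection-probability and response-time comparison analytically; a simulation like this is what one would use to sanity-check the analytical models.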
10

Kang, Dong-Ki, Fawaz Al-Hazemi, Seong-Hwan Kim, Min Chen, Limei Peng, and Chan-Hyun Youn. "Adaptive VM Management with Two Phase Power Consumption Cost Models in Cloud Datacenter." Mobile Networks and Applications 21, no. 5 (March 31, 2016): 793–805. http://dx.doi.org/10.1007/s11036-016-0690-z.

11

Buchaca, David, Joan Marcual, Josep LLuis Berral, and David Carrera. "Sequence-to-sequence models for workload interference prediction on batch processing datacenters." Future Generation Computer Systems 110 (September 2020): 155–66. http://dx.doi.org/10.1016/j.future.2020.03.058.

12

Gupta, Shaifu, A. D. Dileep, and Timothy A. Gonsalves. "Online Sparse BLSTM Models for Resource Usage Prediction in Cloud Datacentres." IEEE Transactions on Network and Service Management 17, no. 4 (December 2020): 2335–49. http://dx.doi.org/10.1109/tnsm.2020.3013922.

13

Zhou, Zhi, Fangming Liu, and Zongpeng Li. "Bilateral Electricity Trade Between Smart Grids and Green Datacenters: Pricing Models and Performance Evaluation." IEEE Journal on Selected Areas in Communications 34, no. 12 (December 2016): 3993–4007. http://dx.doi.org/10.1109/jsac.2016.2611898.

14

Kumar, Sumit, Himani Sharma, and Gurpreet Singh. "Green Cloud Computing: Energy Efficiency and Management." CGC International Journal of Contemporary Technology and Research 2, no. 2 (June 26, 2020): 110–15. http://dx.doi.org/10.46860/cgcijctr.2020.06.26.110.

Abstract:
As cloud computing services become popular, it is important for cloud service providers (CSPs) to fulfil their duty to society by reducing the environmental impact of their operations. CSPs use large amounts of energy because of the power demand of running datacenters (DCs). The rapid development of cloud computing models has led to the establishment of multiple DCs worldwide. The energy efficiency and CO2 emissions of ICT are major issues in cloud computing, and scientists are constantly working on solutions. Among other options, virtualization is a well-known technique accepted by IT organizations to reduce CO2 emissions and power usage. The basic goal of this paper is to present important VM placement methods used for measuring PUE in distributed DCs with varying carbon emission rates. Finally, we present an analysis of several open stack techniques with ACO, DVFS, ECE, and Two-phase Carbon Aware techniques.
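Since the survey centers on PUE across distributed DCs with varying carbon rates, here is the basic arithmetic being measured; the wattages and grid emission rate in the usage example are illustrative.

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT power (>= 1 in practice)."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_kw

def co2_grams(it_kw, pue_value, hours, grid_gco2_per_kwh):
    """CO2 for one site: IT energy scaled up by PUE, times the local grid's emission rate."""
    return it_kw * pue_value * hours * grid_gco2_per_kwh
```

A site drawing 1500 kW total for 1000 kW of IT load has a PUE of 1.5; carbon-aware placement compares `co2_grams` across sites, since the same PUE yields very different emissions on different grids.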
15

Butt, Umer Ahmed, Muhammad Mehmood, Syed Bilal Hussain Shah, Rashid Amin, M. Waqas Shaukat, Syed Mohsan Raza, Doug Young Suh, and Md Jalil Piran. "A Review of Machine Learning Algorithms for Cloud Computing Security." Electronics 9, no. 9 (August 26, 2020): 1379. http://dx.doi.org/10.3390/electronics9091379.

Abstract:
Cloud computing (CC) is on-demand accessibility of network resources, especially data storage and processing power, without special and direct management by the users. CC recently has emerged as a set of public and private datacenters that offers the client a single platform across the Internet. Edge computing is an evolving computing paradigm that brings computation and information storage nearer to the end-users to improve response times and spare transmission capacity. Mobile CC (MCC) uses distributed computing to convey applications to cell phones. However, CC and edge computing have security challenges, including vulnerability for clients and association acknowledgment, that delay the rapid adoption of computing models. Machine learning (ML) is the investigation of computer algorithms that improve naturally through experience. In this review paper, we present an analysis of CC security threats, issues, and solutions that utilized one or several ML algorithms. We review different ML algorithms that are used to overcome the cloud security issues including supervised, unsupervised, semi-supervised, and reinforcement learning. Then, we compare the performance of each technique based on their features, advantages, and disadvantages. Moreover, we enlist future research directions to secure CC models.
16

Ikken, Sonia, Eric Renault, Abdelkamel Tari, and Tahar Kechadi. "Exact and Heuristic Data Workflow Placement Algorithms for Big Data Computing in Cloud Datacenters." Scalable Computing: Practice and Experience 19, no. 3 (September 14, 2018): 223–44. http://dx.doi.org/10.12694/scpe.v19i3.1365.

Abstract:
Several big data-driven applications are currently carried out in collaboration using distributed infrastructure. These data-driven applications usually deal with experiments at massive scale. Data generated by such experiments are huge and stored at multiple geographic locations for reuse. Workflow systems, composed of jobs using collaborative task-based models, present new dependency and data exchange needs. This gives rise to new issues when selecting distributed data and storage resources so that the execution of applications is on time and resource usage is cost-efficient. In this paper, we present an efficient data placement approach to improve the performance of workflow processing in distributed data centres. The proposed approach involves two types of data: splittable and unsplittable intermediate data. Moreover, we place intermediate data by considering not only their source location but also their dependencies. The main objective is to minimise the total storage cost, including the effort for transferring, storing, and moving that data according to the applications' needs. We first propose an exact algorithm which takes into account the intra-job dependencies, and we show that the optimal fractional intermediate data placement problem is NP-hard. To solve the problem of unsplittable intermediate data placement, we propose a greedy heuristic algorithm based on a network flow optimisation framework. The experimental results show that the performance of our approach is very promising. We also show that even under divergent conditions, the cost ratio of the heuristic approach is close to the optimal solution.
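A compact greedy sketch in the spirit of the unsplittable-data heuristic described above, assuming a simple per-unit cost of transfer plus storage and per-site capacities; the network-flow machinery and dependency handling of the actual algorithm are omitted.

```python
def greedy_placement(items, sites):
    """items: list of (size, {site: transfer_cost_per_unit});
    sites: {site: (capacity, storage_cost_per_unit)}.
    Places each unsplittable dataset, largest first, at the cheapest site with
    room. Returns ({item_index: site}, total_cost)."""
    free = {s: cap for s, (cap, _) in sites.items()}
    placement, total = {}, 0.0
    for idx, (size, transfer) in sorted(enumerate(items), key=lambda kv: -kv[1][0]):
        best, best_cost = None, None
        for s, (_, store) in sites.items():
            if free[s] >= size:
                cost = size * (transfer[s] + store)
                if best_cost is None or cost < best_cost:
                    best, best_cost = s, cost
        if best is None:
            raise RuntimeError("no site has capacity for item %d" % idx)
        placement[idx] = best
        free[best] -= size
        total += best_cost
    return placement, total
```

Placing the largest datasets first mirrors the usual bin-packing heuristic: big items claim the cheap capacity before it fragments.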
17

Da Silva, Luana Barreto, and Leonardo Henrique Silva Bomfim. "Análise de Disponibilidade de Servidores Virtualizados com Cadeias de Markov." Interfaces Científicas - Exatas e Tecnológicas 1, no. 2 (May 28, 2015): 21–34. http://dx.doi.org/10.17564/2359-4942.2015v1n2p21-34.

Abstract:
The analysis of availability of virtualized servers is an important tool for managers in information technology and communication, especially when it comes to planning and designing datacenters that provide many services for companies. While virtualization enables a cost reduction, it can also make the system more susceptible to downtime. This work analyzes the availability of two environments, one with a virtualized server and the other with non-virtualized servers. The services offered are e-mail, Domain Name System (DNS), Web Server and File Server, a typical scenario in many companies. A case study is developed using analytical modeling with Fault Trees and Markov Chains: the Fault Tree models the servers, and Markov Chains model the behavior of each hardware and software component. The non-virtualized environment is composed of four servers, each providing specific services, while the virtualized one consists of a single server with four virtual machines, each providing a service. The results show that although the non-virtualized system has less downtime, because it has less dependence between the services, the difference, 0.06% annually, becomes irrelevant when compared to the benefits that virtualization brings to companies.
18

Junaid, Muhammad, Asadullah Shaikh, Mahmood Ul Hassan, Abdullah Alghamdi, Khairan Rajab, Mana Saleh Al Reshan, and Monagi Alkinani. "Smart Agriculture Cloud Using AI Based Techniques." Energies 14, no. 16 (August 19, 2021): 5129. http://dx.doi.org/10.3390/en14165129.

Abstract:
This research proposes a generic smart cloud-based system to accommodate multiple scenarios in which agriculture farms using the Internet of Things (IoT) need to be monitored remotely. The real-time and stored data are analyzed by specialists and farmers. The cloud acts as a central digital data store where information is collected from diverse sources in huge volume and variety, such as audio, video, image, text, and digital maps. Artificial Intelligence (AI)-based machine learning models such as the Support Vector Machine (SVM), one of many classification types, are used to accurately classify the data. The classified data are assigned to virtual machines, where they are processed and finally made available to end-users via the underlying datacenters. This processed digital information is then used by farmers to improve their farming skills and as pre-disaster recovery for smart agri-food. Furthermore, it provides general and specific information about international markets relating to their crops. The proposed system demonstrates the feasibility of the developed digital agri-farm using an IoT-based cloud and provides solutions to its problems. Overall, the approach works well and improved execution time by 14%, throughput by 5%, overhead by 9%, and energy efficiency by 13.2% against competing smart farming baselines.
19

Kim, Minseong, Seon Wook Kim, and Youngsun Han. "EPSim-C: A Parallel Epoch-Based Cycle-Accurate Microarchitecture Simulator Using Cloud Computing." Electronics 8, no. 6 (June 24, 2019): 716. http://dx.doi.org/10.3390/electronics8060716.

Abstract:
Recently, computing platforms have been configured on a large scale to satisfy the diverse requirements of emerging applications such as big data and graph processing, neural networks, and speech recognition. In these platforms, each computing node consists of a multicore, an accelerator, and a complex memory hierarchy, connected to other nodes through a variety of high-performance networks. Until now, researchers have used cycle-accurate simulators to evaluate the performance of computer systems in detail. However, executing simulators that model modern computing architectures (multi-core, multi-node, datacenter, memory hierarchies, new memories, and new interconnects) is too slow to be feasible; as architectures grow more complex, the complexity of the simulators increases rapidly. It is therefore seriously challenging to employ them in the research and development of next-generation computer systems. To solve this problem, we previously presented EPSim (Epoch-based Simulator), which divides a simulation run into sections called epochs that can be run independently and executes them in parallel on a multicore platform, resulting in only limited simulation speedup. In this paper, to overcome the computing resource limitations of multi-core platforms, we propose a novel EPSim-C (EPSim on Cloud) simulator that extends EPSim and achieves higher performance on a cloud computing platform. EPSim-C performs the epoch-based executions in a massively parallel fashion using MapReduce on Hadoop-based systems. In our experiments, we achieved a maximum speedup of 87.0× and an average speedup of 46.1× using 256 cores. As far as we know, EPSim-C is the only existing way to accelerate a cycle-accurate simulator on cloud platforms; this significant performance enhancement allows researchers to model and study current and future cutting-edge computing platforms using real workloads.
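The epoch-based map/reduce idea can be miniaturized as follows, with a thread pool standing in for Hadoop and a toy per-instruction stall model standing in for cycle-accurate simulation; nothing here reflects EPSim-C's internals.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_epoch(bounds):
    """Toy stand-in for cycle-accurate simulation of one epoch: every third
    instruction 'stalls' for an extra cycle. Returns (cycles, instructions)."""
    start, end = bounds
    cycles = sum(2 if i % 3 == 0 else 1 for i in range(start, end))
    return cycles, end - start

def epoch_parallel_sim(n_instructions, n_epochs=8, workers=4):
    """Split the run into independent epochs, map them across workers, then
    reduce the per-epoch statistics into an aggregate IPC."""
    bounds = [(i * n_instructions // n_epochs, (i + 1) * n_instructions // n_epochs)
              for i in range(n_epochs)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulate_epoch, bounds))
    total_cycles = sum(c for c, _ in results)
    total_insts = sum(n for _, n in results)
    return total_insts / total_cycles
```

The key property, which EPSim-C exploits at datacenter scale, is that each epoch's statistics can be computed independently, so the map phase parallelizes perfectly and only the cheap reduce is serial.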
20

Muraña, Jonathan, and Sergio Nesmachnow. "Simulation and evaluation of multicriteria planning heuristics for demand response in datacenters." SIMULATION, June 14, 2021, 003754972110200. http://dx.doi.org/10.1177/00375497211020083.

Abstract:
This article presents the evaluation of multicriteria planning heuristics for demand response in datacenters and supercomputing facilities. This is a relevant problem today, as the growing application of cutting-edge technologies (numerical methods, big data processing, artificial intelligence, smart systems, etc.) has raised the energy demands of datacenters. The proposed approach involves a negotiation mechanism for colocation datacenters, where the datacenter operator agrees on prices and quality of service with a group of tenants. Twelve multicriteria heuristics are proposed for planning, using both local and global information at the tenant and datacenter-operator levels. The approach is evaluated with simulations over realistic scenarios considering different tenant sizes and heterogeneity levels that model different business models for datacenters. Several metrics are computed and a Pareto analysis is provided. The main results indicate that accurate trade-off values between the problem objectives are obtained, offering different options for decision making. The proposed approach provides a useful and applicable method for demand response planning in modern datacenters.
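The Pareto analysis mentioned above boils down to filtering non-dominated plans. A minimal sketch, assuming two minimized objectives (say, cost and a QoS penalty); the sample points in the test are invented.

```python
def pareto_front(points):
    """Non-dominated (objective1, objective2) pairs, both minimized: a point
    survives unless some other point is <= in both coordinates."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]
```

Each heuristic's outcome becomes one point; the surviving front is exactly the menu of trade-offs the article offers the decision maker.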
21

Ramanathan, Manikandan, and Kumar Narayanan. "Disk storage failure prediction in datacenter using machine learning models." Applied Nanoscience, September 21, 2021. http://dx.doi.org/10.1007/s13204-021-02039-4.

22

Bliedy, Doaa, Sherif Mazen, and Ehab Ezzat. "Datacentre Total Cost of Ownership (TCO) Models: A Survey." SSRN Electronic Journal, 2018. http://dx.doi.org/10.2139/ssrn.3474802.

23

Kumar, Jitendra, and Ashutosh Kumar Singh. "Performance Assessment of Time Series Forecasting Models for Cloud Datacenter Networks’ Workload Prediction." Wireless Personal Communications, September 3, 2020. http://dx.doi.org/10.1007/s11277-020-07773-6.

24

Valarmathi, K., and S. Kanaga Suba Raja. "Resource utilization prediction technique in cloud using knowledge based ensemble random forest with LSTM model." Concurrent Engineering, August 12, 2021, 1063293X2110326. http://dx.doi.org/10.1177/1063293x211032622.

Abstract:
Forecasting future cloud datacenter resource usage is a challenging task due to dynamic and business-critical workloads. Accurate prediction of cloud resource utilization from historical observations facilitates aligning tasks with resources, estimating the capacity of a cloud server, applying intensive auto-scaling and controlling resource usage, whereas imprecise prediction leads to either under- or over-provisioning of cloud resources. This paper focuses on solving this problem in a proactive way. Most existing prediction models are based on a single workload pattern, which is not suitable for handling peculiar workloads. The researchers address this problem with a contemporary model that dynamically analyzes CPU utilization in order to estimate datacenter CPU utilization precisely. The proposed design uses an ensemble of Random Forest and Long Short-Term Memory based deep architectural models for resource estimation, preprocessing and training the data on historical observations. The approach is analyzed using a real cloud dataset. The empirical evaluation shows that the proposed design outperforms previous approaches, with 30%–60% better accuracy in resource utilization prediction.
25

Muraña, Jonathan. "Empirical power consumption characterization and energy aware scheduling in data centers." CLEI Electronic Journal 24, no. 1 (April 14, 2021). http://dx.doi.org/10.19153/cleiej.24.1.3.

Abstract:
Energy-efficient management is key to reducing operational cost and environmental contamination in modern data centers. Energy management and renewable energy utilization are strategies to optimize energy consumption in high-performance computing. In any case, understanding the power consumption behavior of physical servers in a datacenter is fundamental for implementing energy-aware policies effectively. These policies should deal with possible performance degradation of applications to ensure quality of service. This manuscript presents an empirical evaluation of power consumption for scientific computing applications in multicore systems. Three types of applications are studied, in single and combined executions, on Intel and AMD servers, evaluating the overall power consumption of each application. The main results indicate that power consumption behavior depends strongly on the type of application. Additional performance analysis shows that the most energy-efficient server load depends on the type of application, with efficiency decreasing in heavily loaded situations. These results allow formulating models that characterize applications according to power consumption, efficiency, and resource sharing, providing useful information for resource management and scheduling policies. Several scheduling strategies are evaluated using the proposed energy model over realistic scientific computing workloads. Results confirm that strategies that maximize host utilization provide the best energy efficiency.
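The conclusion that utilization-maximizing strategies win can be illustrated with a linear power model and first-fit-decreasing consolidation; the idle and peak wattages below are illustrative stand-ins, not the paper's measured values.

```python
def host_power(util, p_idle=95.0, p_max=220.0):
    """Linear power model per host; idle and peak wattages are illustrative."""
    return p_idle + (p_max - p_idle) * util

def consolidate(task_utils, host_capacity=1.0):
    """First-fit-decreasing packing: largest tasks first, each onto the first
    host with room, opening a new host only when none fits."""
    hosts = []
    for u in sorted(task_utils, reverse=True):
        for i, load in enumerate(hosts):
            if load + u <= host_capacity:
                hosts[i] += u
                break
        else:
            hosts.append(u)
    return hosts

def total_power(host_loads):
    return sum(host_power(u) for u in host_loads)
```

Because the idle term is paid per powered-on host, packing five tasks onto two full hosts draws far less than spreading them across five lightly loaded ones, which is the effect the study measures empirically.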
26

Austin, Lisa M. "Technological tattletales and constitutional black holes: communications intermediaries and constitutional constraints." Theoretical Inquiries in Law 17, no. 2 (January 1, 2016). http://dx.doi.org/10.1515/til-2016-0017.

Abstract:
In this Article I argue that the emerging public/private nexus of surveillance involves the augmentation of state power and calls for new models of constitutional constraint. The key phenomenon is the role played by communications intermediaries in collecting the information that the state subsequently accesses. These intermediaries are not just powerful companies engaged in collecting and analyzing the information of users, and the information they hold is not just business records. The key feature of these companies is that, through their information practices and architecture, they mediate other relationships. I argue that this mediating function, and its underlying technological form, interacts with legal and social norms in ways that can lead to the erosion of constraints on state power. This Article maps two stories of erosion, rooted in two kinds of community displacement. The first involves the displacement of community participation in law enforcement and the emergence of “technological tattletales”, where intermediaries cooperate with the state. Unlike citizen cooperation, this practice augments state power and undermines more traditional informal modes of constraint on state power. The second involves the displacement of national legal and political community. Communications intermediaries are often large multinational companies that operate in multiple jurisdictions and move their data among datacenters across the world even as the individual data subject remains in one geographical location. Laws that treat nonresident aliens differently from residents and citizens can create “constitutional black holes” where the communications data of an individual is not protected by any constitutional constraints.
APA, Harvard, Vancouver, ISO, and other styles
27

Jha, Pranay, Sartaj Singh, and Ashok Sharma. "Data Control in Public Cloud Computing: Issues and Challenges." Recent Patents on Computer Science 12 (June 17, 2019). http://dx.doi.org/10.2174/2213275912666190617164550.

Full text
Abstract:
Background: The advancement of hardware and the progress of IoT-based devices have led to a significant transformation in the digitalization and globalization of business models in the IT world. In fact, cloud computing has attracted many companies to expand their business by obtaining IT infrastructure on a very low budget under a pay-per-use model. The expansion and migration of companies to cloud computing facilities has brought many pros and cons and opened new areas of research. Managing IT infrastructure according to business requirements is a great challenge for IT infrastructure managers, because complex business models must be kept up to date with market trends, which requires a large, modern infrastructure to accelerate business requirements. There are undoubtedly many benefits to moving to the cloud, but several vulnerabilities and potential security threats are a major concern for any business-sensitive data. These security challenges place restrictions on moving on-premises workloads to the cloud. This paper discusses the key differences between cloud models, various existing cloud security architectures, and challenges in cloud computing related to data security at rest and in transit. It also explains the data-control mechanisms the IT industry needs to adopt, along with end-to-end security mechanisms. Objective: The main objective of this paper is to discuss the prevailing data-security issues that discourage industry and organizations from moving their data into the public cloud, and to discuss how to enhance security mechanisms in the cloud during data migration and in multitenant environments. Methods: Based on various reports and analyses, it has been pointed out that data breaches and data security are the most challenging and concerning factors for any customer considering migrating workloads from an on-premises datacenter to cloud computing, and they demand attention in every consideration. All criteria and considerations for securing and protecting the customer’s information and data have been classified and discussed. Data-at-rest and data-in-transit refer to data being stored and data moving from a source to a destination, respectively. Different encryption methods for protecting and securing data-at-rest and data-in-transit have been identified. However, there are still gaps to fill in cloud data control and security, which remains a serious concern and a daily target of attackers. Conclusion: Since cyber-attacks occur very frequently and cause huge investments in re-establishing environments, more control is needed, along with effective use of technology. The security concerns discussed are very reasonable and need to be addressed.
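The data-at-rest protection described in this abstract can be illustrated with a minimal sketch. This is not the paper's own mechanism: it assumes symmetric encryption via the third-party Python `cryptography` package (Fernet), with a locally generated key standing in for what a real deployment would fetch from a managed key service, and with TLS assumed separately for data-in-transit.

```python
# Sketch: encrypting a record before storing it in the cloud (data-at-rest).
# Assumption: key management is simplified; production systems would use a
# KMS/HSM for keys and TLS for the data-in-transit leg.
from cryptography.fernet import Fernet

def encrypt_at_rest(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a record before writing it to cloud storage."""
    return Fernet(key).encrypt(plaintext)

def decrypt_at_rest(token: bytes, key: bytes) -> bytes:
    """Decrypt a record after reading it back from storage."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()          # in practice, obtained from a KMS
    record = b"business-sensitive data"
    token = encrypt_at_rest(record, key)
    assert token != record               # ciphertext differs from plaintext
    assert decrypt_at_rest(token, key) == record
```

The point of the sketch is the division of responsibility the abstract argues for: the customer (not only the provider) controls the key, so the cloud operator stores only ciphertext.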
APA, Harvard, Vancouver, ISO, and other styles
