Scientific literature on the topic "Limit CPU usage"

Create an accurate reference in APA, MLA, Chicago, Harvard, and various other styles

Choose a source:

Consult thematic lists of journal articles, books, dissertations, conference reports, and other academic sources on the topic "Limit CPU usage".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and view its abstract online when this information is included in the metadata.

Journal articles on the topic "Limit CPU usage"

1

Raza, Syed M., Jaeyeop Jeong, Moonseong Kim, Byungseok Kang, and Hyunseung Choo. "Empirical Performance and Energy Consumption Evaluation of Container Solutions on Resource Constrained IoT Gateways." Sensors 21, no. 4 (February 16, 2021): 1378. http://dx.doi.org/10.3390/s21041378.

Full text
Abstract:
Containers virtually package a piece of software and share the host Operating System (OS) upon deployment. This makes them notably lightweight and suitable for dynamic service deployment at the network edge and on Internet of Things (IoT) devices for reduced latency and energy consumption. Data collection, computation, and now intelligence are included in a variety of IoT devices, which have very tight latency and energy consumption conditions. Recent studies satisfy the latency condition through containerized service deployment on IoT devices and gateways, but they fail to account for the limited energy and computing resources of these devices, which limit scalability and concurrent service deployment. This paper aims to establish guidelines and identify critical factors for containerized service deployment on resource-constrained IoT devices. For this purpose, two container orchestration tools (i.e., Docker Swarm and Kubernetes) are tested and compared on a baseline IoT gateway testbed. Experiments use Deep Learning driven data analytics and Intrusion Detection System services, and evaluate the time it takes to prepare and deploy a container (creation time), Central Processing Unit (CPU) utilization for concurrent container deployment, memory usage under different traffic loads, and energy consumption. The results indicate that container creation time and memory usage are decisive factors for a containerized microservice architecture.
APA, Harvard, Vancouver, ISO, and other styles
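The study above measures CPU utilization for concurrent container deployments on constrained gateways; in practice, per-container CPU caps are what keep several services inside a gateway's budget. As an illustrative sketch (not taken from the paper), the following converts a fractional CPU limit, the semantics of Docker's `--cpus` flag, into the quota/period pair written to a cgroup v2 `cpu.max` file:

```python
def cpu_max(cpus: float, period_us: int = 100_000) -> str:
    """Render a fractional CPU limit as a cgroup v2 `cpu.max` value.

    A limit of 0.5 CPUs means the process group may run for at most
    50,000 us of CPU time in every 100,000 us scheduling period.
    """
    if cpus <= 0:
        raise ValueError("CPU limit must be positive")
    quota_us = int(cpus * period_us)
    return f"{quota_us} {period_us}"

# Cap a container at half a core, e.g. two such services per single-core gateway:
print(cpu_max(0.5))  # 50000 100000
print(cpu_max(1.5))  # 150000 100000
```

The period is configurable, but the 100 ms default matches what Docker uses when translating `--cpus` into CFS quota settings.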
2

Găitan, Vasile Gheorghiță, and Ionel Zagan. "An Overview of the nMPRA and nHSE Microarchitectures for Real-Time Applications." Sensors 21, no. 13 (June 30, 2021): 4500. http://dx.doi.org/10.3390/s21134500.

Full text
Abstract:
In the context of real-time control systems, it has become possible to obtain temporal resolutions of microseconds due to the development of embedded systems and the Internet of Things (IoT), the optimization of the use of processor hardware, and the improvement of architectures and real-time operating systems (RTOSs). All of these factors, together with current technological developments, have led to efficient central processing unit (CPU) time usage, guaranteeing both the predictability of thread execution and the satisfaction of the timing constraints required by real-time systems (RTSs). This is mainly due to time sharing in embedded RTSs and the pseudo-parallel execution of tasks in single-processor and multi-processor systems. The non-deterministic behavior triggered by asynchronous external interrupts and events in general is due to the fact that, for most commercial RTOSs, the execution of the same instruction ends in a variable number of cycles, primarily due to hazards. The software implementation of RTOS-specific mechanisms may lead to significant delays that can affect deadline requirements for some RTSs. The main objective of this paper was the design and deployment of innovative solutions to improve the performance of RTOSs by implementing their functions in hardware. The obtained architectures are intended to provide feasible scheduling, even if the total CPU utilization is close to the maximum limit. The contributions made by the authors will be followed by the validation of a high-performing microarchitecture, which is expected to allow a thread context switching time and event response time of only one clock cycle each. The main purpose of the research presented in this paper is to improve these factors of RTSs, as well as the implementation of the hardware structure used for the static and dynamic scheduling of tasks, for RTOS mechanisms specific to resource sharing and intertask communication.
3

Gao, Meng, Bryan A. Franz, Kirk Knobelspiesse, Peng-Wang Zhai, Vanderlei Martins, Sharon Burton, Brian Cairns, et al. "Efficient multi-angle polarimetric inversion of aerosols and ocean color powered by a deep neural network forward model." Atmospheric Measurement Techniques 14, no. 6 (June 4, 2021): 4083–110. http://dx.doi.org/10.5194/amt-14-4083-2021.

Full text
Abstract:
NASA's Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, scheduled for launch in the timeframe of 2023, will carry a hyperspectral scanning radiometer named the Ocean Color Instrument (OCI) and two multi-angle polarimeters (MAPs): the UMBC Hyper-Angular Rainbow Polarimeter (HARP2) and the SRON Spectro-Polarimeter for Planetary EXploration one (SPEXone). The MAP measurements contain rich information on the microphysical properties of aerosols and hydrosols and therefore can be used to retrieve accurate aerosol properties for complex atmosphere and ocean systems. Most polarimetric aerosol retrieval algorithms utilize vector radiative transfer models iteratively in an optimization approach, which leads to high computational costs that limit their usage in the operational processing of large data volumes acquired by the MAP imagers. In this work, we propose a deep neural network (NN) forward model to represent the radiative transfer simulation of coupled atmosphere and ocean systems for applications to the HARP2 instrument and its predecessors. Through the evaluation of synthetic datasets for AirHARP (airborne version of HARP2), the NN model achieves a numerical accuracy smaller than the instrument uncertainties, with a running time of 0.01 s on a single CPU core or 1 ms on a GPU. Using the NN as a forward model, we built an efficient joint aerosol and ocean color retrieval algorithm called FastMAPOL, evolved from the well-validated Multi-Angular Polarimetric Ocean coLor (MAPOL) algorithm. Retrievals of aerosol properties and water-leaving signals were conducted on both the synthetic data and the AirHARP field measurements from the Aerosol Characterization from Polarimeter and Lidar (ACEPOL) campaign in 2017.
From the validation with the synthetic data and the collocated High Spectral Resolution Lidar (HSRL) aerosol products, we demonstrated that the aerosol microphysical properties and water-leaving signals can be retrieved efficiently and within acceptable error. Compared to the retrieval speed using a conventional radiative transfer forward model, the computational acceleration is 10³ times with CPU or 10⁴ times with GPU processors. The FastMAPOL algorithm can be used to operationally process the large volume of polarimetric data acquired by PACE and other future Earth-observing satellite missions with similar capabilities.
4

Rastogi, Saumya, Bimal Charles, and Asirvatham Edwin Sam. "Prevalence and Predictors of Self-Reported Consistent Condom Usage among Male Clients of Female Sex Workers in Tamil Nadu, India." Journal of Sexually Transmitted Diseases 2014 (June 1, 2014): 1–7. http://dx.doi.org/10.1155/2014/952035.

Full text
Abstract:
Clients of female sex workers (FSWs) have a high potential of transmitting HIV and other sexually transmitted infections from high-risk FSWs to the general population. Promotion of safer sex practices among clients is essential to limit the spread of the HIV/AIDS epidemic. The aim of this study is to estimate the prevalence of consistent condom use (CCU) among clients of FSWs and to assess the factors associated with CCU in Tamil Nadu. 146 male respondents were recruited from hotspots who reportedly had sex with FSWs in exchange for cash at least once in the past month. Data were analyzed using bivariate and multivariate methods. Overall, 48.6 and 0.8 percent of clients consistently used condoms in the past 12 months with FSWs and regular partners, respectively. Logistic regression showed that factors such as education, peers' use of condoms, and alcohol consumption significantly influenced clients' CCU with FSWs. Strategies for safe sex behaviour are needed among clients of FSWs in order to limit the spread of the HIV/AIDS epidemic in the general population. The role of peer educators in experience sharing and awareness generation must also be emphasized.
5

Wu, Wenjing, and David Cameron. "Backfilling the Grid with Containerized BOINC in the ATLAS computing." EPJ Web of Conferences 214 (2019): 07015. http://dx.doi.org/10.1051/epjconf/201921407015.

Full text
Abstract:
Virtualization is a commonly used solution for utilizing opportunistic computing resources in the HEP field, as it provides the unified software and OS layer that HEP computing tasks require on top of heterogeneous opportunistic computing resources. However, virtualization always carries a performance penalty; for short jobs, which are typical of volunteer computing tasks, its overhead reduces the CPU efficiency of the jobs. With the wide usage of containers in HEP computing, we explore the possibility of adopting container technology in the ATLAS BOINC project. We implemented a Native version in BOINC which uses the Singularity container, or the host machine's operating system directly, in place of VirtualBox. In this paper, we discuss 1) the implementation and workflow of the Native version in ATLAS BOINC; 2) the performance of the Native version compared to the previous virtualization version; 3) the limits and shortcomings of the Native version; and 4) the practice and outcome of the Native version, which includes using it to backfill ATLAS Grid Tier 2 sites and other clusters, and to utilize idle computers from the CERN computing centre.
6

Jacquet, Éric. "Construction de la limite interne et « bon usage du double interdit du toucher » dans des groupes thérapeutiques de jeunes enfants." Cahiers de psychologie clinique 38, no. 1 (2012): 179. http://dx.doi.org/10.3917/cpc.038.0179.

Full text
7

Marta, Deni, M. Angga Eka Putra, and Guntoro Barovih. "Analisis Perbandingan Performa Virtualisasi Server Sebagai Basis Layanan Infrastructure As A Service Pada Jaringan Cloud." MATRIK: Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer 19, no. 1 (September 15, 2019): 1–8. http://dx.doi.org/10.30812/matrik.v19i1.433.

Full text
Abstract:
Cloud computing provides convenience and comfort in every service. Infrastructure as a Service is one of the cloud computing services chosen by many users, so it is very important to know the performance of each existing platform in order to get the maximum result for one's needs. In this study, three cloud computing platforms (VMware ESXi, XenServer, and Proxmox) were tested using action research methods. The performance measurements were then analyzed and compared against minimum and maximum limits. The tested indicators are response time, throughput, and resource utilization, as a comparison of server virtualization performance. In the resource utilization test while installing an operating system, CPU usage was lowest on the Proxmox platform at 10.72%, and RAM usage was also lowest on Proxmox at 53.32%. In the resource utilization test at idle, CPU usage was lowest on Proxmox at 5.78%, while RAM usage was lowest on VMware ESXi at 57.25%. On average, the resource utilization tests indicate that the Proxmox platform is better. In the throughput tests, XenServer was better on upload at 1.37 MB/s, while VMware ESXi was better on download at 1.39 MB/s. Response time testing showed VMware ESXi to be the fastest, at 0.180 s.
8

V, Dr Kiran, Akshay Narayan Pai, and Gautham S. "Performance Analysis of Virtual Machine in Cloud Architecture." Journal of University of Shanghai for Science and Technology 23, no. 07 (July 19, 2021): 924–29. http://dx.doi.org/10.51201/jusst/21/07210.

Full text
Abstract:
Cloud computing is a technique for storing and processing data that makes use of a network of remote servers. It is gaining popularity due to its vast storage capacity, ease of access, and diverse variety of services. Virtualization entered the scene as cloud computing advanced and technologies such as virtual machines appeared. However, when customers' computing demands for storage and servers increased, virtual machines were unable to meet those expectations due to scalability and resource allocation limits. As a consequence, containerization became a reality. Containerization is the process of packaging software code along with all of its essential components (frameworks, libraries, and other dependencies) so that it is isolated in its own container. A program running in containers may execute reliably in any environment or infrastructure. Containers provide OS-level virtualization, which reduces the computational load on the host machine and enables programs to run much faster and more reliably. Performance analysis is very important in comparing the throughput of VM-based and container-based designs. To analyze this, the same web application is run in both designs, and CPU usage and RAM usage in the two designs are compared. The results obtained are tabulated and a conclusion is drawn.
9

Negru, Adrian Eduard, Latchezar Betev, Mihai Carabaș, Costin Grigoraș, Nicolae Țăpuş, and Sergiu Weisz. "Analysis of data integrity and storage quality of a distributed storage system." EPJ Web of Conferences 251 (2021): 02035. http://dx.doi.org/10.1051/epjconf/202125102035.

Full text
Abstract:
CERN uses the world’s largest scientific computing grid, WLCG, for distributed data storage and processing. Monitoring of the CPU and storage resources is an important and essential element to detect operational issues in its systems, for example in the storage elements, and to ensure their proper and efficient function. The processing of experiment data depends strongly on the data access quality, as well as its integrity and both of these key parameters must be assured for the data lifetime. Given the substantial amount of data, O(200 PB), already collected by ALICE and kept at various storage elements around the globe, scanning every single data chunk would be a very expensive process, both in terms of computing resources usage and in terms of execution time. In this paper, we describe a distributed file crawler that addresses these natural limits by periodically extracting and analyzing statistically significant samples of files from storage elements, evaluates the results and is integrated with the existing monitoring solution, MonALISA.
10

Saylan, Erdem, Cihangir, and Denizli. "Detecting Fingerprints of Waterborne Bacteria on a Sensor." Chemosensors 7, no. 3 (July 25, 2019): 33. http://dx.doi.org/10.3390/chemosensors7030033.

Full text
Abstract:
Human fecal contamination is a crucial threat that results in difficulties in access to clean water. Enterococcus faecalis is a bacterium used as an indicator of polluted water. Nevertheless, existing strategies face several challenges, including low affinity and the need for labelling, which limit their use in large-scale applications. Herein, a label-free fingerprint of the surface proteins of waterborne bacteria on a sensor was demonstrated for real-time bacteria detection from aqueous and water samples. The kinetic performance of the sensor was evaluated and shown to have a range of detection spanning five orders of magnitude, with a low detection limit (3.4 × 10⁴ cfu/mL) and a high correlation coefficient (R² = 0.9957). The sensor also showed high selectivity when competitor bacteria were employed. The capability for multiple usage and a long shelf-life are superior to other modalities. This is an impressive surface modification method that uses the target itself as a recognition element, ensuring a broad range of variability to replicate others with different structure, size, and physical and chemical properties.
More sources

Dissertations on the topic "Limit CPU usage"

1

Rego, Paulo Antonio Leal. "FairCPU: Uma Arquitetura para Provisionamento de Máquinas Virtuais Utilizando Características de Processamento." Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=7653.

Full text
Abstract:
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico
Resource scheduling is a key process for a cloud computing platform, which generally uses virtual machines (VMs) as scheduling units. The use of virtualization techniques provides great flexibility, with the ability to instantiate multiple VMs on one physical machine (PM), migrate them between PMs, and dynamically scale a VM's resources. Techniques for consolidation and dynamic allocation of VMs have treated the impact of their use as independent of location. It is generally accepted that the performance of a VM will be the same regardless of which PM it is allocated to. This assumption is reasonable for a homogeneous environment, where the PMs are identical and the VMs are running the same operating system and applications. Nevertheless, in a cloud computing environment, a set of heterogeneous resources is expected to be shared, where PMs vary both in their resource capacities and in their data affinities. The main objective of this work is to propose an architecture that standardizes the representation of processing power in terms of processing units (PUs). In addition, the limitation of CPU usage is used to provide performance isolation and keep a VM's processing power at the same level regardless of the underlying PM. The proposed solution considers the heterogeneity of the PMs present in the cloud infrastructure and provides scheduling policies based on PUs. The proposed architecture is called FairCPU and was implemented to work with the KVM and Xen hypervisors. As a case study, it was incorporated into a private cloud, built with the OpenNebula middleware, where several experiments were conducted to evaluate the proposed solution. The results prove the efficiency of the FairCPU architecture in using PUs to reduce variability in VM performance, as well as in providing a new way to represent and manage the processing power of the infrastructure's physical and virtual machines.
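The thesis above caps CPU usage so that a VM's effective capacity, expressed in processing units (PUs), stays the same on any physical machine. FairCPU's actual policies are not reproduced here; a minimal sketch of the underlying arithmetic, with all names hypothetical, could look like:

```python
def cpu_cap(vm_pus: float, host_total_pus: float) -> float:
    """Fraction of a host's CPU a VM may use so that its effective
    capacity in processing units is the same on any host.

    A more powerful host (more total PUs) gets a proportionally
    smaller cap for the same VM allocation.
    """
    if not 0 < vm_pus <= host_total_pus:
        raise ValueError("VM allocation must be positive and fit on the host")
    return vm_pus / host_total_pus

# The same 2-PU VM is capped differently on heterogeneous hosts:
print(cpu_cap(2, 4))  # 0.5  -> 50% of a slower machine
print(cpu_cap(2, 8))  # 0.25 -> 25% of a machine rated twice as powerful
```

The returned fraction is what a hypervisor-level limiter (CPU quotas in KVM or Xen, in the thesis's setting) would enforce for performance isolation.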

Book chapters on the topic "Limit CPU usage"

1

Różycki, Rafał, Tomasz Lemański, and Joanna Józefowska. "Scheduling UAV's on Autonomous Charging Station." In Modern Technologies Enabling Safe and Secure UAV Operation in Urban Airspace. IOS Press, 2021. http://dx.doi.org/10.3233/nicsp210010.

Full text
Abstract:
The paper considers the concept of a charging station for an Unmanned Aerial Vehicles (UAV, drone) fleet. The special feature of the station is its autonomy understood as independence from a constant energy source and an external module for managing its operation. It is assumed that the station gives the possibility to charge batteries of many drones simultaneously. However, the maximum number of simultaneously charged drones is limited by a temporary total charging current (i.e. there is a power limit). The paper proposes a mathematical model of charging a single drone battery. The problem of finding a schedule of charging tasks is formulated, in which the minimum time of the charging process for all drones is assumed as the optimization criterion. Searching for a solution to this problem is performed by an autonomous charging station with an appropriate computing module equipped with a Variable Speed Processor (VSP). To that end an appropriate algorithm is activated (i.e. a computational job), the execution of which consumes a certain amount of limited energy available to the charging station. In the paper we consider energy-aware execution of an implementation of an evolutionary algorithm (EA) as a computational job. The possibility of saving energy by controlling the CPU frequency of a VSP is analyzed. A characteristic feature of the processor is the non-linear relationship between the processing rate and electric power usage. According to this relationship, it turns out that slower execution of the computational job saves electrical energy consumed by the processor.
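The chapter above exploits a non-linear relationship between a Variable Speed Processor's processing rate and its electric power draw: running the scheduling computation more slowly consumes less total energy. Under a common cube-law assumption for dynamic power (an assumption of this sketch, not a figure from the chapter), the effect is easy to quantify:

```python
def energy(work: float, speed: float, alpha: float = 3.0) -> float:
    """Energy to finish `work` cycles at processing rate `speed`,
    assuming dynamic power grows as speed**alpha with alpha > 1.

    time = work / speed
    energy = power * time = speed**alpha * (work / speed)
           = work * speed**(alpha - 1)
    """
    if work <= 0 or speed <= 0:
        raise ValueError("work and speed must be positive")
    return work * speed ** (alpha - 1)

# Halving the CPU frequency quarters the energy for the same job (alpha = 3):
print(energy(1e6, 1.0))  # 1000000.0
print(energy(1e6, 0.5))  # 250000.0
```

The trade-off, of course, is that the slower run finishes later, which is exactly the tension the chapter's energy-aware scheduling of the evolutionary algorithm addresses.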
2

Boldrin, Fabio, Chiara Taddia, and Gianluca Mazzini. "Web Distributed Computing Systems." In Technological Innovations in Adaptive and Dependable Systems, 181–97. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0255-7.ch011.

Full text
Abstract:
This article proposes a new approach to distributed computing. The main novelty is the use of Web browsers as clients, made possible by the availability of JavaScript, AJAX, and Flex. The described solution has two main advantages: it is client-free, so no additional programs have to be installed to perform the computation, and it requires low CPU usage, so client-side computation is not invasive for users. The solution is developed using both AJAX and Adobe® Flex® technologies, embedding a pseudo-client into a Web page that hosts the computation. While users browse the hosting Web page, computation takes place, resolving single sub-problems and sending the solutions to the server-side part of the system. Our client-free solution is an example of a highly resilient and self-administered system that is able to organize process scheduling and error management in an autonomic manner. A mathematical model has been developed for this solution. The main goals of the model are to describe and classify different categories of problems on the basis of feasibility, and to find the limits in the dimensioning of the scheduling system within which this approach remains advantageous. The new architecture has been tested against different performance metrics by implementing two examples of distributed computing: cracking an RSA cryptosystem through factorization of the public key, and computing the correlation index between samples in genetic data sets. Results have shown good feasibility of this approach both in a closed environment and in an Internet environment, in a typical real situation.
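The chapter above distributes work by handing each browser pseudo-client a self-contained sub-problem, for example one slice of the search space when factoring an RSA public key. A minimal, hypothetical sketch of such a decomposition (the chapter's actual scheduler is not reproduced here):

```python
def split_range(start: int, stop: int, n_chunks: int) -> list[tuple[int, int]]:
    """Split [start, stop) into n_chunks contiguous sub-ranges, each a
    self-contained sub-problem for one browser pseudo-client."""
    if n_chunks <= 0 or stop <= start:
        raise ValueError("need a positive chunk count and a non-empty range")
    size, extra = divmod(stop - start, n_chunks)
    chunks, lo = [], start
    for i in range(n_chunks):
        # The first `extra` chunks absorb the remainder, one unit each.
        hi = lo + size + (1 if i < extra else 0)
        chunks.append((lo, hi))
        lo = hi
    return chunks

# Divide a trial-division search for a factor among 4 clients:
print(split_range(3, 1003, 4))  # [(3, 253), (253, 503), (503, 753), (753, 1003)]
```

Each tuple can be serialized into the page served to a client; results (and failed or abandoned chunks) flow back to the server side for reassignment, which is the error-management behaviour the abstract describes.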

Conference papers on the topic "Limit CPU usage"

1

Singer, Joe, Thomas Roth, Chenli Wang, Cuong Nguyen, and Hohyun Lee. "EnergyPlus Integration Into Co-Simulation Environment to Improve Home Energy Saving Through Cyber-Physical Systems Development." In ASME 2018 12th International Conference on Energy Sustainability collocated with the ASME 2018 Power Conference and the ASME 2018 Nuclear Forum. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/es2018-7295.

Full text
Abstract:
This paper presents a co-simulation platform which combines a building simulation tool with a Cyber-Physical Systems (CPS) approach. Residential buildings have great potential for energy reduction by controlling home equipment based on usage information. A CPS can eliminate unnecessary energy usage on a small, local scale by autonomously optimizing equipment activity based on sensor measurements from the home. It can also allow peak shaving from the grid if a collection of homes is connected. However, a lack of verification tools limits effective development of CPS products. The present work integrates EnergyPlus, a widely adopted building simulation tool, into an open-source development environment for CPS released by the National Institute of Standards and Technology (NIST). The NIST environment utilizes the IEEE High Level Architecture (HLA) standard for data exchange and logical timing control to integrate a suite of simulators into a common platform. A simple CPS model, which controls the local HVAC temperature set-point based on environmental conditions, was tested with the developed co-simulation platform. The proposed platform can be expanded to integrate various simulation tools and various home simulations, thereby allowing for co-simulation of more intricate building energy systems.
