
Journal articles on the topic 'HPC workload manager'

Consult the top 50 journal articles for your research on the topic 'HPC workload manager.'


1

Sochat, Vanessa, Aldo Culquicondor, Antonio Ojea, and Daniel Milroy. "The Flux Operator." F1000Research 13 (March 21, 2024): 203. http://dx.doi.org/10.12688/f1000research.147989.1.

Abstract:
Converged computing is an emerging area of computing that brings together the best of both worlds for high performance computing (HPC) and cloud-native communities. The economic influence of cloud computing and the need for workflow portability, flexibility, and manageability are driving this emergence. Navigating the uncharted territory and building an effective space for both HPC and cloud require collaborative technological development and research. In this work, we focus on developing components for the converged workload manager, the central component of batch workflows running in any environment. From the cloud we base our work on Kubernetes, the de facto standard batch workload orchestrator. From HPC the orchestrator counterpart is Flux Framework, a fully hierarchical resource management and graph-based scheduler with a modular architecture that supports sophisticated scheduling and job management. Bringing these managers together consists of implementing Flux inside of Kubernetes, enabling hierarchical resource management and scheduling that scales without burdening the Kubernetes scheduler. This paper introduces the Flux Operator – an on-demand HPC workload manager deployed in Kubernetes. Our work describes design decisions, mapping components between environments, and experimental features. We perform experiments that compare application performance when deployed by the Flux Operator and the MPI Operator and present the results. Finally, we review remaining challenges and describe our vision of the future for improved technological innovation and collaboration through converged computing.
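
As an illustration of the pattern this abstract describes (an HPC workload manager deployed on demand inside Kubernetes), the following Python sketch creates a MiniCluster custom resource with the official Kubernetes client. The CRD group, version, plural, and spec field names are assumptions for illustration and should be checked against the Flux Operator documentation; the image and command are placeholders, not code from the paper.

```python
# Sketch: submitting an on-demand Flux "MiniCluster" to Kubernetes with the
# official Python client. The CRD group/version and spec fields below are
# illustrative assumptions, not taken from the paper.
from kubernetes import client, config

def submit_minicluster(namespace="flux-operator", size=4):
    config.load_kube_config()  # or load_incluster_config() inside a pod
    minicluster = {
        "apiVersion": "flux-framework.org/v1alpha2",  # assumed group/version
        "kind": "MiniCluster",
        "metadata": {"name": "lammps-demo", "namespace": namespace},
        "spec": {
            "size": size,  # number of Flux broker pods in the hierarchy
            "containers": [
                {"image": "ghcr.io/example/lammps:latest",  # hypothetical image
                 "command": "lmp -in in.reaxc.hns"}
            ],
        },
    }
    api = client.CustomObjectsApi()
    return api.create_namespaced_custom_object(
        group="flux-framework.org", version="v1alpha2",
        namespace=namespace, plural="miniclusters", body=minicluster)

if __name__ == "__main__":
    print(submit_minicluster())
```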
2

Du, R., J. Shi, J. Zou, X. Jiang, Z. Sun, and G. Chen. "A Feasibility Study on workload integration between HT-Condor and Slurm Clusters." EPJ Web of Conferences 214 (2019): 08004. http://dx.doi.org/10.1051/epjconf/201921408004.

Abstract:
There are two production clusters co-existing at the Institute of High Energy Physics (IHEP). One is a High Throughput Computing (HTC) cluster with HTCondor as the workload manager; the other is a High Performance Computing (HPC) cluster with Slurm as the workload manager. The resources of the HTCondor cluster are funded by multiple experiments, and resource utilization reached more than 90% by adopting a dynamic resource share mechanism. Nevertheless, there is a bottleneck if more resources are requested by multiple experiments at the same moment. On the other hand, parallel jobs running on the Slurm cluster exhibit some specific attributes, such as a high degree of parallelism, low quantity and long wall time. Such attributes make it easy to generate free resource slots which are suitable for jobs from the HTCondor cluster. As a result, a mechanism to schedule jobs from the HTCondor cluster to the Slurm cluster transparently would improve the resource utilization of the Slurm cluster and reduce job queue time for the HTCondor cluster. In this proceeding, we present three methods to migrate HTCondor jobs to the Slurm cluster and conclude that HTCondor-C is the preferred one. Furthermore, because the design philosophy and application scenarios differ between HTCondor and Slurm, some issues and possible solutions related to job scheduling are presented.
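
The HTCondor-C route the authors prefer amounts to submitting a grid-universe job whose grid_resource points at a remote schedd that fronts the Slurm side. Below is a minimal sketch with the HTCondor Python bindings, assuming placeholder host names and a simplified submit description rather than the actual IHEP configuration.

```python
# Sketch: forwarding a job from an HTCondor pool toward a remote schedd with
# HTCondor-C (grid universe). Host names and attributes are assumptions made
# for illustration only.
import htcondor

submit = htcondor.Submit({
    "universe": "grid",
    # "condor <remote schedd> <remote collector>" selects the HTCondor-C route;
    # here the remote side would be a schedd bridging into the Slurm cluster.
    "grid_resource": "condor slurm-gateway.example.org cm.example.org",
    "executable": "run_analysis.sh",
    "arguments": "--events 10000",
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
})

schedd = htcondor.Schedd()      # local schedd of the HTC cluster
result = schedd.submit(submit)  # queues one job by default
print("submitted cluster id:", result.cluster())
```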
3

Dakić, Vedran, Mario Kovač, and Jurica Slovinac. "Evolving High-Performance Computing Data Centers with Kubernetes, Performance Analysis, and Dynamic Workload Placement Based on Machine Learning Scheduling." Electronics 13, no. 13 (2024): 2651. http://dx.doi.org/10.3390/electronics13132651.

Abstract:
In the past twenty years, the IT industry has moved away from using physical servers for workload management to workloads consolidated via virtualization and, in the next iteration, further consolidated into containers. Later, container workloads based on Docker and Podman were orchestrated via Kubernetes or OpenShift. On the other hand, high-performance computing (HPC) environments have been lagging in this process, as much work is still needed to figure out how to apply containerization platforms for HPC. Containers have many advantages, as they tend to have less overhead while providing flexibility, modularity, and maintenance benefits. This makes them well-suited for tasks requiring a lot of computing power that are latency- or bandwidth-sensitive. But they are complex to manage, and many daily operations are based on command-line procedures that take years to master. This paper proposes a different architecture based on seamless hardware integration and a user-friendly UI (User Interface). It also offers dynamic workload placement based on real-time performance analysis and prediction and Machine Learning-based scheduling. This solves a prevalent issue in Kubernetes: the suboptimal placement of workloads without needing individual workload schedulers, as they are challenging to write and require much time to debug and test properly. It also enables us to focus on one of the key HPC issues—energy efficiency. Furthermore, the application we developed that implements this architecture helps with the Kubernetes installation process, which is fully automated, no matter which hardware platform we use—x86, ARM, and soon, RISC-V. The results we achieved using this architecture and application are very promising in two areas—the speed of workload scheduling and workload placement on a correct node. This also enables us to focus on one of the key HPC issues—energy efficiency.
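
The core idea, scoring candidate nodes with a predictive model and placing the workload on the best-scoring one, can be sketched independently of the paper's implementation. The feature set and the toy cost model below are assumptions, not the authors' trained scheduler.

```python
# Sketch of ML-guided placement: score each candidate node with a predicted
# cost model and pick the best one. The features and the toy predictor are
# illustrative stand-ins for a trained model.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float       # fraction of CPU currently free
    mem_free_gb: float
    watts_idle: float      # idle power draw of the node

def predict_cost(node: Node, job_cpu: float, job_mem_gb: float) -> float:
    """Toy stand-in for a trained regressor: lower is better."""
    if node.cpu_free < job_cpu or node.mem_free_gb < job_mem_gb:
        return float("inf")                      # infeasible placement
    runtime_penalty = job_cpu / max(node.cpu_free, 1e-6)
    energy_penalty = 0.01 * node.watts_idle      # prefer efficient nodes
    return runtime_penalty + energy_penalty

def place(job_cpu: float, job_mem_gb: float, nodes: list[Node]) -> str:
    scored = [(predict_cost(n, job_cpu, job_mem_gb), n.name) for n in nodes]
    cost, best = min(scored)
    if cost == float("inf"):
        raise RuntimeError("no node can host this workload")
    return best

nodes = [Node("worker-a", 0.8, 64, 180), Node("worker-b", 0.3, 16, 120)]
print(place(job_cpu=0.5, job_mem_gb=8, nodes=nodes))  # -> worker-a
```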
4

Spiga, Daniele, Stefano Dal Pra, Davide Salomoni, Andrea Ceccanti, and Roberto Alfieri. "Dynamic integration of distributed, Cloud-based HPC and HTC resources using JSON Web Tokens and the INDIGO IAM Service." EPJ Web of Conferences 245 (2020): 07020. http://dx.doi.org/10.1051/epjconf/202024507020.

Abstract:
In the past couple of years, we have been actively developing the Dynamic On-Demand Analysis Service (DODAS) as an enabling technology to deploy container-based clusters over hybrid, private or public, Cloud infrastructures with almost zero effort. DODAS is particularly suitable for harvesting opportunistic computing resources; this is why several scientific communities already integrated their computing use cases into DODAS-instantiated clusters, automating the instantiation, management and federation of HTCondor batch systems. The increasing demand, availability and utilization of HPC resources by and for multidisciplinary user communities, often mandates the possibility to transparently integrate, manage and mix HTC and HPC resources. In this paper, we discuss our experience extending and using DODAS to connect HPC and HTC resources in the context of a distributed Italian regional infrastructure involving multiple sites and communities. In this use case, DODAS automatically generates HTCondor batch system on-demand. Moreover it dynamically and transparently federates sites that may also include HPC resources managed by SLURM; DODAS allows user workloads to make opportunistic and automated use of both HPC and HTC resources, thus effectively maximizing and optimizing resource utilization. We also report on our experience of using and federating HTCondor batch systems exploiting the JSON Web Token capabilities introduced in recent HTCondor versions, replacing the traditional X509 certificates in the whole chain of workload authorization. In this respect we also report on how we integrated HTCondor using OAuth with the INDIGO IAM service.
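
Token-based workload authorization of the kind described here boils down to validating a JWT issued by the IAM service and checking its claims before accepting a job. Below is a minimal sketch with PyJWT, assuming an illustrative scope name and issuer URL; a real service must verify the token signature against the issuer's published keys rather than skip verification.

```python
# Sketch: token-based workload authorization. The scope string and issuer URL
# are illustrative assumptions; production code must verify the signature
# against the issuer's published JWKS instead of skipping verification.
import time
import jwt  # PyJWT

REQUIRED_SCOPE = "compute.create"               # assumed scope name
TRUSTED_ISSUER = "https://iam.example.org/"     # placeholder IAM instance

def authorize(token: str) -> bool:
    claims = jwt.decode(token, options={"verify_signature": False})  # demo only
    if claims.get("iss") != TRUSTED_ISSUER:
        return False
    if claims.get("exp", 0) < time.time():
        return False
    scopes = claims.get("scope", "").split()
    return REQUIRED_SCOPE in scopes
```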
5

Hildreth, Michael, Kenyi Paolo Hurtado Anampa, Cody Kankel, et al. "Large-scale HPC deployment of Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN)." EPJ Web of Conferences 245 (2020): 09011. http://dx.doi.org/10.1051/epjconf/202024509011.

Abstract:
The NSF-funded Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) built on top of existing CI elements. Specifically, the project has extended the CERN-based REANA framework, a cloud-based data analysis platform deployed on top of Kubernetes clusters that was originally designed to enable analysis reusability and reproducibility. REANA is capable of orchestrating extremely complicated multi-step workflows, and uses Kubernetes clusters both for scheduling and distributing container-based workloads across a cluster of available machines, as well as instantiating and monitoring the concrete workloads themselves. This work describes the challenges and development efforts involved in extending REANA and the components that were developed in order to enable large scale deployment on High Performance Computing (HPC) resources. Using the Virtual Clusters for Community Computation (VC3) infrastructure as a starting point, we implemented REANA to work with a number of differing workload managers, including both high performance and high throughput, while simultaneously removing REANA’s dependence on Kubernetes support at the workers level.
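
Supporting several workload managers behind one interface is essentially a dispatch problem. Here is a minimal sketch of that pattern, with simplified sbatch and condor_submit calls that are assumptions for illustration rather than REANA's actual backend code.

```python
# Sketch: one interface, several workload-manager backends. The commands are
# simplified; real backends also track job state and handle failures.
import subprocess
from abc import ABC, abstractmethod

class Backend(ABC):
    @abstractmethod
    def submit(self, script_path: str) -> str: ...

class SlurmBackend(Backend):
    def submit(self, script_path: str) -> str:
        out = subprocess.run(["sbatch", script_path],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()   # e.g. "Submitted batch job 12345"

class HTCondorBackend(Backend):
    def submit(self, script_path: str) -> str:
        out = subprocess.run(["condor_submit", script_path],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

def get_backend(name: str) -> Backend:
    return {"slurm": SlurmBackend(), "htcondor": HTCondorBackend()}[name]

# Usage: get_backend("slurm").submit("step_01.sbatch")
```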
6

Hearn, Cherie, Adam Govier, and Adam Ivan Semciw. "Clinical care ratios: quantifying clinical versus non-clinical care for allied health professionals." Australian Health Review 41, no. 3 (2017): 321. http://dx.doi.org/10.1071/ah16017.

Abstract:
Objective Clinical care ratios (CCRs) are a useful tool that can be used to quantify and benchmark the clinical and non-clinical workloads of allied health professionals. The purpose of this study was to determine if CCRs are influenced by level of seniority, type of role or profession. This will provide meaningful information for allied health service managers to better manage service demand and capacity. Method Data was collected from 2036 allied health professionals from five professions across 11 Australian tertiary hospitals. Mean (95% confidence intervals) CCRs were calculated according to profession, seniority and role type. A two-way ANOVA was performed to assess the association of CCRs (dependent variable) with seniority level and profession (independent variables). Post-hoc pairwise comparisons identified where significant main or interaction effects occurred (α = 0.05). Results Significant main effects for seniority level and profession were identified (P < 0.05), but there was no interaction effect. Post-hoc comparisons revealed significant differences between all tier combinations (P < 0.05) with more senior staff having the lowest CCRs. Conclusion The direct and non-direct clinical components of the allied health professional’s workload can be quantified and benchmarked with like roles and according to seniority. The benchmarked CCRs for predominantly clinical roles will enable managers to compare and evaluate like roles and modify non-direct clinical components according to seniority and discipline. What is known about the topic? CCRs are a useful tool to quantify, monitor and compare workloads of allied health professionals. They are thought to change with increased seniority of roles. The CCRs for different allied health professional roles has yet to be defined in the literature. What does this paper add? CCRs decrease as level of seniority increases, indicating higher seniority increases non-clinical time. CCRs differ across professions, suggesting that benchmarking with CCRs must be profession specific. What are the implications for practitioners? The direct and non-direct clinical components of a workload can be quantified, defined and benchmarked with like roles to ensure cost-effective and optimal service delivery and patient outcomes.
7

Alwi, Suhaimi, and Rachmat Mulyono. "Pengaruh Stres Kerja, Beban Kerja, dan Spiritualitas terhadap Kinerja Penyelenggara Pemilu (Badan Ad-Hoc) di Komisi Pemilihan Umum (KPU) DKI Jakarta." AKADEMIK: Jurnal Mahasiswa Humanis 5, no. 2 (2025): 621–29. https://doi.org/10.37481/jmh.v5i2.1360.

Abstract:
This study examines the impact of work stress, workload, and spirituality on the performance of election officers in the Ad-Hoc bodies of the General Election Commission (KPU) in DKI Jakarta. The research explores how work stress and excessive workloads negatively affect performance and well-being, while workplace spirituality plays a significant role in improving performance. A quantitative approach was used, with a survey conducted among the District Election Committee (PPK) and the Voting Committee (PPS). The findings reveal that high work stress and heavy workloads lead to reduced productivity, increased errors, and mental fatigue. In contrast, a strong sense of spirituality enhances ethical awareness, resilience, and intrinsic motivation, thereby improving overall performance. This study recommends policies to manage work stress, better distribute workloads, and integrate spirituality into the workplace to support election officers' well-being and effectiveness.
8

Petrosyan, Davit, and Hrachya Astsatryan. "Serverless High-Performance Computing over Cloud." Cybernetics and Information Technologies 22, no. 3 (2022): 82–92. http://dx.doi.org/10.2478/cait-2022-0029.

Abstract:
HPC clouds may provide fast access to fully configurable and dynamically scalable virtualized HPC clusters to address the complex and challenging computation and storage-intensive requirements. The complex environmental, software, and hardware requirements and dependencies on such systems make it challenging to carry out large-scale simulations, prediction systems, and other data and compute-intensive workloads over the cloud. The article aims to present an architecture that enables HPC workloads to be serverless over the cloud (Shoc), one of the most critical cloud capabilities for HPC workloads. On the one hand, Shoc utilizes the abstraction power of container technologies like Singularity and Docker, combined with the scheduling and resource management capabilities of Kubernetes. On the other hand, Shoc allows running any CPU-intensive and data-intensive workloads in the cloud without needing to manage HPC infrastructure, complex software, and hardware environment deployments.
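
The building block of such a serverless layer is turning a single containerized compute step into a Kubernetes Job. Below is a minimal sketch with the official Kubernetes Python client, using placeholder image, namespace, and resource values; Shoc's own API is not reproduced here.

```python
# Sketch: run a containerized compute step as a Kubernetes Job. Image name,
# namespace, and resource requests are placeholders.
from kubernetes import client, config

def run_function(name: str, image: str, command: list[str]) -> None:
    config.load_kube_config()
    container = client.V1Container(
        name=name, image=image, command=command,
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "8Gi"}))
    pod_spec = client.V1PodSpec(containers=[container], restart_policy="Never")
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(spec=pod_spec),
            backoff_limit=0))
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

run_function("wrf-step", "registry.example.org/wrf:latest", ["wrf.exe"])
```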
9

Merzky, Andre, Pavlo Svirin, and Matteo Turilli. "PanDA and RADICAL-Pilot Integration: Enabling the Pilot Paradigm on HPC Resources." EPJ Web of Conferences 214 (2019): 03057. http://dx.doi.org/10.1051/epjconf/201921403057.

Abstract:
PanDA executes millions of ATLAS jobs a month on Grid systems with more than 300,000 cores. Currently, PanDA is compatible with only a few high-performance computing (HPC) resources due to different edge services and operational policies; does not implement the pilot paradigm on HPC; and does not dynamically optimize resource allocation among queues. We integrated the PanDA Harvester service and the RADICAL-Pilot (RP) system to overcome these limitations and enable the execution of ATLAS, Molecular Dynamics and other workloads on HPC resources. This paper offers two main contributions: (1) introducing PanDA Harvester and RADICAL-Pilot, two systems independently developed to support high-throughput computing (HTC) on high-performance computing (HPC) infrastructures; (2) describing the integration between these two systems to produce a middleware component with unique functionalities, including the concurrent execution of heterogeneous workloads on the Titan OLCF machine. We integrated Harvester and RP by prototyping a Next Generation Executor (NGE) to expose RP capabilities and manage the execution of PanDA workloads. In this way, we minimized the reengineering of the two systems, allowing their integration while being in production.
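
The pilot paradigm itself is simple to sketch: a pilot lands on an allocation and repeatedly pulls work from a central queue until none remains. In the sketch below an in-memory queue stands in for the PanDA/Harvester server side, which is an assumption made purely to keep the example self-contained.

```python
# Sketch of the pilot paradigm: a placeholder "pilot" process that runs inside
# an HPC allocation, pulls tasks from a queue until none remain, and executes
# them locally. The in-memory queue stands in for the real task server.
import queue
import subprocess

task_queue: "queue.Queue[list[str]]" = queue.Queue()
for seed in range(4):                                   # toy payloads
    task_queue.put(["python", "-c", f"print('event generation, seed={seed}')"])

def pilot(worker_id: int) -> None:
    while True:
        try:
            cmd = task_queue.get_nowait()               # late binding of work
        except queue.Empty:
            return                                      # allocation can end
        print(f"[pilot {worker_id}] running: {' '.join(cmd)}")
        subprocess.run(cmd, check=True)
        task_queue.task_done()

pilot(worker_id=0)
```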
10

Cocaña-Fernández, Alberto, Emilio San José Guiote, Luciano Sánchez, and José Ranilla. "Eco-Efficient Resource Management in HPC Clusters through Computer Intelligence Techniques." Energies 12, no. 11 (2019): 2129. http://dx.doi.org/10.3390/en12112129.

Abstract:
High Performance Computing Clusters (HPCCs) are common platforms for solving both up-to-date challenges and high-dimensional problems faced by IT service providers. Nonetheless, the use of HPCCs carries a substantial and growing economic and environmental impact, owing to the large amount of energy they need to operate. In this paper, a two-stage holistic optimisation mechanism is proposed to manage HPCCs in an eco-efficient manner. The first stage logically optimises the resources of the HPCC through reactive and proactive strategies, while the second stage optimises hardware allocation by leveraging a genetic fuzzy system tailored to the underlying equipment. The model finds optimal trade-offs among quality of service, direct/indirect operating costs, and environmental impact, through multiobjective evolutionary algorithms meeting the preferences of the administrator. Experimentation was done using both actual workloads from the Scientific Modelling Cluster of the University of Oviedo and synthetically-generated workloads, showing statistical evidence supporting the adoption of the new mechanism.
11

Pérez-Calero Yzquierdo, A., M. Mascheroni, M. Acosta Flechas, et al. "Reaching new peaks for the future of the CMS HTCondor Global Pool." EPJ Web of Conferences 251 (2021): 02055. http://dx.doi.org/10.1051/epjconf/202125102055.

Abstract:
The CMS experiment at CERN employs a distributed computing infrastructure to satisfy its data processing and simulation needs. The CMS Submission Infrastructure team manages a dynamic HTCondor pool, aggregating mainly Grid clusters worldwide, but also HPC, Cloud and opportunistic resources. This CMS Global Pool, which currently involves over 70 computing sites worldwide and peaks at 350k CPU cores, is employed to successfully manage the simultaneous execution of up to 150k tasks. While the present infrastructure is sufficient to harness the current computing power scales, the latest CMS estimates predict a noticeable expansion in the amount of CPU that will be required in order to cope with the massive data increase of the High-Luminosity LHC (HL-LHC) era, planned to start in 2027. This contribution presents the latest results of the CMS Submission Infrastructure team in exploring and expanding the scalability reach of our Global Pool, in order to preventively detect and overcome any barriers in relation to the HL-LHC goals, while maintaining high efficiency in our workload scheduling and resource utilization.
12

Llopis, Pablo, Carolina Lindqvist, Nils Høimyr, Dan van der Ster, and Philippe Ganz. "Integrating HPC into an agile and cloud-focused environment at CERN." EPJ Web of Conferences 214 (2019): 07025. http://dx.doi.org/10.1051/epjconf/201921407025.

Abstract:
CERN's batch and grid services are mainly focused on High Throughput Computing (HTC) for processing data from the Large Hadron Collider (LHC) and other experiments. However, part of the user community requires High Performance Computing (HPC) for massively parallel applications across many cores on MPI-enabled infrastructure. This contribution addresses the implementation of HPC infrastructure at CERN for Lattice QCD application development, as well as for different types of simulations for the accelerator and technology sector at CERN. Our approach has been to integrate the HPC facilities as far as possible with the HTC services in our data centre, and to take advantage of an agile infrastructure for updates, configuration and deployment. The HPC cluster has been orchestrated with the OpenStack Ironic component, and is hence managed with the same tools as the CERN internal OpenStack cloud. Experience and benchmarks of MPI applications across InfiniBand with shared storage on CephFS are discussed, as well as the setup of the Slurm scheduler for HPC jobs with a provision for backfill of HTC workloads.
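
Backfilling HTC work into idle HPC capacity can be approximated from the outside by watching for idle nodes and submitting short, low-priority jobs. In the sketch below the partition and QOS names are hypothetical; the setup described in the paper is configured within Slurm itself rather than by an external script.

```python
# Sketch: opportunistically backfill idle Slurm nodes with short HTC jobs.
# Partition and QOS names ("htc", "backfill") are hypothetical placeholders.
import subprocess

def idle_node_count(partition: str = "batch") -> int:
    out = subprocess.run(
        ["sinfo", "-p", partition, "-t", "idle", "-h", "-o", "%D"],
        capture_output=True, text=True, check=True)
    return sum(int(x) for x in out.stdout.split())

def submit_backfill_jobs(n_jobs: int) -> None:
    for _ in range(n_jobs):
        subprocess.run(
            ["sbatch", "--partition=htc", "--qos=backfill",
             "--time=00:30:00", "--wrap", "./htc_payload.sh"],
            check=True)

idle = idle_node_count()
if idle > 0:
    submit_backfill_jobs(min(idle, 10))   # cap the burst size
```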
13

Taifi, Moussa, Abdallah Khreishah, and Justin Y. Shi. "Building a Private HPC Cloud for Compute and Data-Intensive Applications." International Journal on Cloud Computing: Services and Architecture (IJCCSA) 3, April (2018): 01–20. https://doi.org/10.5281/zenodo.1434573.

Abstract:
Traditional HPC (High Performance Computing) clusters are best suited for well-formed calculations. The orderly batch-oriented HPC cluster offers maximal potential for performance per application, but limits resource efficiency and user flexibility. An HPC cloud can host multiple virtual HPC clusters, giving the scientists unprecedented flexibility for research and development. With the proper incentive model, resource efficiency will be automatically maximized. In this context, there are three new challenges. The first is the virtualization overheads. The second is the administrative complexity for scientists to manage the virtual clusters. The third is the programming model. The existing HPC programming models were designed for dedicated homogeneous parallel processors. The HPC cloud is typically heterogeneous and shared. This paper reports on the practice and experiences in building a private HPC cloud using a subset of a traditional HPC cluster. We report our evaluation criteria using Open Source software, and performance studies for compute-intensive and data-intensive applications. We also report the design and implementation of a Puppet-based virtual cluster administration tool called HPCFY. In addition, we show that even if the overhead of virtualization is present, efficient scalability for virtual clusters can be achieved by understanding the effects of virtualization overheads on various types of HPC and Big Data workloads. We aim at providing a detailed experience report to the HPC community, to ease the process of building a private HPC cloud using Open Source software.
14

Lai, Jianqi, Hang Yu, Zhengyu Tian, and Hua Li. "Hybrid MPI and CUDA Parallelization for CFD Applications on Multi-GPU HPC Clusters." Scientific Programming 2020 (September 25, 2020): 1–15. http://dx.doi.org/10.1155/2020/8862123.

Abstract:
Graphics processing units (GPUs) have a strong floating-point capability and a high memory bandwidth in data parallelism and have been widely used in high-performance computing (HPC). Compute unified device architecture (CUDA) is used as a parallel computing platform and programming model for the GPU to reduce the complexity of programming. The programmable GPUs are becoming popular in computational fluid dynamics (CFD) applications. In this work, we propose a hybrid parallel algorithm of the message passing interface and CUDA for CFD applications on multi-GPU HPC clusters. The AUSM+UP upwind scheme and the three-step Runge-Kutta method are used for spatial discretization and time discretization, respectively. The turbulent solution is solved by the k-ω SST two-equation model. The CPU only manages the execution of the GPU and communication, and the GPU is responsible for data processing. Parallel execution and memory access optimizations are used to optimize the GPU-based CFD codes. We propose a nonblocking communication method to fully overlap GPU computing, CPU-CPU communication, and CPU-GPU data transfer by creating two CUDA streams. Furthermore, the one-dimensional domain decomposition method is used to balance the workload among GPUs. Finally, we evaluate the hybrid parallel algorithm with the compressible turbulent flow over a flat plate. The performance of a single GPU implementation and the scalability of multi-GPU clusters are discussed. Performance measurements show that multi-GPU parallelization can achieve a speedup of more than 36 times with respect to CPU-based parallel computing, and the parallel algorithm has good scalability.
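
The overlap strategy rests on non-blocking halo exchange over a 1D domain decomposition. The paper's implementation uses MPI+CUDA streams in C/CUDA; the following mpi4py sketch shows only the decomposition and the overlap pattern, with toy array sizes.

```python
# Sketch (mpi4py): 1D domain decomposition with non-blocking halo exchange,
# the pattern that lets interior computation overlap with communication.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                                   # interior cells per rank (toy)
u = np.random.rand(n_local + 2)                  # +2 ghost cells

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

recv_l, recv_r = np.empty(1), np.empty(1)
reqs = [
    comm.Isend(u[1:2], dest=left),   comm.Irecv(recv_l, source=left),
    comm.Isend(u[-2:-1], dest=right), comm.Irecv(recv_r, source=right),
]

# Overlap: update interior cells that do not depend on ghost values...
u[2:-2] = 0.5 * (u[1:-3] + u[3:-1])

MPI.Request.Waitall(reqs)                        # ...then finish the borders
if left != MPI.PROC_NULL:
    u[0] = recv_l[0]
    u[1] = 0.5 * (u[0] + u[2])
if right != MPI.PROC_NULL:
    u[-1] = recv_r[0]
    u[-2] = 0.5 * (u[-3] + u[-1])
```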
15

Wang, Jiali, Cheng Wang, Vishwas Rao, Andrew Orr, Eugene Yan, and Rao Kotamarthi. "A parallel workflow implementation for PEST version 13.6 in high-performance computing for WRF-Hydro version 5.0: a case study over the midwestern United States." Geoscientific Model Development 12, no. 8 (2019): 3523–39. http://dx.doi.org/10.5194/gmd-12-3523-2019.

Abstract:
The Weather Research and Forecasting Hydrological (WRF-Hydro) system is a state-of-the-art numerical model that models the entire hydrological cycle based on physical principles. As with other hydrological models, WRF-Hydro parameterizes many physical processes. Hence, WRF-Hydro needs to be calibrated to optimize its output with respect to observations for the application region. When applied to a relatively large domain, both WRF-Hydro simulations and calibrations require intensive computing resources and are best performed on multinode, multicore high-performance computing (HPC) systems. Typically, each physics-based model requires a calibration process that works specifically with that model and is not transferable to a different process or model. The parameter estimation tool (PEST) is a flexible and generic calibration tool that can be used in principle to calibrate any of these models. In its existing configuration, however, PEST is not designed to work on the current generation of massively parallel HPC clusters. To address this issue, we ported the parallel PEST to HPCs and adapted it to work with WRF-Hydro. The porting involved writing scripts to modify the workflow for different workload managers and job schedulers, as well as to connect the parallel PEST to WRF-Hydro. To test the operational feasibility and the computational benefits of this first-of-its-kind HPC-enabled parallel PEST, we developed a case study using a flood in the midwestern United States in 2013. Results on a problem involving the calibration of 22 parameters show that on the same computing resources used for parallel WRF-Hydro, the HPC-enabled parallel PEST can speed up the calibration process by a factor of up to 15 compared with the commonly used PEST in sequential mode. The speedup factor is expected to be greater with a larger calibration problem (e.g., more parameters to be calibrated or a larger size of study area).
16

Villebonnet, Violaine, Georges Da Costa, Laurent Lefevre, Jean-Marc Pierson, and Patricia Stolf. "“Big, Medium, Little”: Reaching Energy Proportionality with Heterogeneous Computing Scheduler." Parallel Processing Letters 25, no. 03 (2015): 1541006. http://dx.doi.org/10.1142/s0129626415410066.

Abstract:
Energy savings are among the most important topics concerning Cloud and HPC infrastructures nowadays. Servers consume a large amount of energy, even when their computing power is not fully utilized. These static costs represent quite a concern, mostly because many datacenter managers are over-provisioning their infrastructures compared to the actual needs. This results in a large share of wasted power consumption. In this paper, we propose the BML ("Big, Medium, Little") infrastructure, composed of heterogeneous architectures, and a scheduling framework dealing with energy proportionality. We introduce processors with heterogeneous power profiles inside datacenters as a way to reduce energy consumption when processing variable workloads. Our framework brings an intelligent utilization of the infrastructure by dynamically executing applications on the architecture that suits their needs, while minimizing energy consumption. In this paper we focus on a distributed stateless web server scenario and analyze the energy savings achieved through energy proportionality.
17

Dakić, Vedran, Jasmin Redžepagić, Matej Bašić, and Luka Žgrablić. "Performance and Latency Efficiency Evaluation of Kubernetes Container Network Interfaces for Built-In and Custom Tuned Profiles." Electronics 13, no. 19 (2024): 3972. http://dx.doi.org/10.3390/electronics13193972.

Abstract:
In the era of DevOps, developing new toolsets and frameworks that leverage DevOps principles is crucial. This paper demonstrates how Ansible’s powerful automation capabilities can be harnessed to manage the complexity of Kubernetes environments. This paper evaluates efficiency across various CNI (Container Network Interface) plugins by orchestrating performance analysis tools across multiple power profiles. Our performance evaluations across network interfaces with different theoretical bandwidths gave us a comprehensive understanding of CNI performance and overall efficiency, with performance efficiency coming well below expectations. Our research confirms that certain CNIs are better suited for specific use cases, mainly when tuning our environment for smaller or larger network packets and workload types, but also that there are configuration changes we can make to mitigate that. This paper also provides research into how to use performance tuning to optimize the performance and efficiency of our CNI infrastructure, with practical implications for improving the performance of Kubernetes environments in real-world scenarios, particularly in more demanding scenarios such as High-Performance Computing (HPC) and Artificial Intelligence (AI).
18

Marella, Venkat. "Enhancing Cloud Infrastructure Resilience through Kubernetes and Open Shift Cluster Management." International Journal of Computer Science and Mobile Computing 13, no. 12 (2024): 9–22. https://doi.org/10.47760/ijcsmc.2024.v13i12.002.

Abstract:
Over the last two decades, the IT sector has shifted from managing workloads on physical servers to virtualizing them and then further consolidating them into containers in the subsequent iteration. Later, Kubernetes or Open Shift were used to coordinate container workloads built on Docker and Podman. The use of containerization platforms for high-performance computing (HPC) settings, however, has been trailing behind since there is still much to learn. Containers provide flexibility, adaptability, and maintenance advantages, and they often have lower overhead. Additionally, by monitoring their performance and constantly regulating their behaviour, container orchestration platforms—which are being embraced by companies more and more—are crucial for efficiently managing the life-cycle of multi-container informatics systems. Specifically, the Kubernetes platform, a new open standard for cloud services, is helping to achieve service agnostic deployment by helping to isolate individual cloud providers via its Cloud Controller Manager mechanism. Finding the best container strategy for typical use scenarios and evaluating each solution's advantages and disadvantages were the goals. Basic performance measurements, such as average service recovery time, average transfer rate, and total number of unsuccessful calls, were gathered via a series of organized tests. Docker, Kubernetes, and Proxmox were among the container systems that were examined. Based on a thorough analysis, it can be said that the most efficient high-availability solution for widely used Linux containers is often Docker with Docker Swarm. However, there are other situations where Proxmox excels, such as when load balancing is not a crucial need or where quick data transmission is a top concern.
19

Duffield, Christine, Di Twigg, Michael Roche, Anne Williams, and Sarah Wise. "Uncovering the Disconnect Between Nursing Workforce Policy Intentions, Implementation, and Outcomes: Lessons Learned From the Addition of a Nursing Assistant Role." Policy, Politics, & Nursing Practice 20, no. 4 (2019): 228–38. http://dx.doi.org/10.1177/1527154419877571.

Abstract:
The use of nursing assistants has increased across health systems in the past 20 years, to alleviate licensed nurses' workload and to meet rising health care demands at lower costs. Evidence suggests that, when used as a substitute for licensed nurses, assistants are associated with poorer patient and nurse outcomes. Our multimethods study evaluated the impact of a policy to add nursing assistants to existing nurse staffing in Western Australia's public hospitals, on a range of outcomes. In this article, we draw the metainferences from previously published quantitative data and unpublished qualitative interview data. A longitudinal analysis of patient records found significantly higher rates of adverse patient outcomes on wards that introduced nursing assistants compared with wards that did not. These findings are explained with ward-level data that show nursing assistants were added to wards with preexisting workload and staffing problems and that those problems persisted despite the additional resources. There were also problems integrating assistants into the nursing team, due to ad hoc role assignments and variability in assistants' knowledge and skills. The disconnect between policy intention and outcomes reflects a top-down approach to role implementation where assistants were presented as a solution to nurses' workload problems, without an understanding of the causes of those problems. We conclude that policy makers and managers must better understand individual care environments to ensure any new roles are properly tailored to patient and staff needs. Further, standardized training and accreditation for nursing assistant roles would reduce the supervisory burden on licensed nurses.
20

Vijay Kartik, S., Michael Sprung, Fabian Westermeier, and Anton Barty. "MENTO: Automated near real-time data analysis at PETRA III." Journal of Physics: Conference Series 2380, no. 1 (2022): 012104. http://dx.doi.org/10.1088/1742-6596/2380/1/012104.

Abstract:
With the advent of next-generation X-ray detectors with large sensor areas and high sampling rates, photon science experiments are now having to deal with a data explosion. We present a scalable, near real-time data processing toolkit developed to address this challenge at PETRA III, which is currently in use at the coherence applications beamline P10 and at the macromolecular crystallography beamline P11. This toolkit runs automated analysis pipelines on high-volume data concurrently with data acquisition, thus providing quick feedback during experiments at the beamlines. Named mento (Maxwell-Enhanced Near real-Time Online analysis), the toolkit leverages the computing resources of the in-house HPC cluster 'Maxwell' to enhance analysis performance, and additionally takes advantage of the available distributed data storage to provide analysis results directly back to the experimenter with minimal delay. mento works seamlessly with the experiment control mechanisms used at P10, P11, and other PETRA III beamlines, and ensures that analysis on the HPC cluster is triggered automatically during data acquisition at the beamline, and that results are available for visualization at the beamline control hutch even though the analysis itself is performed remotely. mento thus helps the human-in-the-loop concentrate on novel science aspects of the experiment, without needing to manage the computational workload in a high data-rate regime. mento has been used with both in-house and commercial analysis software to achieve speedups of 50x-100x relative to the existing analysis pipelines, thus proving its utility for different kinds of experiments at PETRA III. The source code for mento is available at https://gitlab.desy.de/fs-sc/mento.
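
The trigger pattern, watching the acquisition area and launching an analysis job on the HPC cluster for each new file, can be sketched as follows. The paths, file pattern, polling interval, and sbatch wrapper script are placeholders, not mento's actual integration with the beamline control system.

```python
# Sketch: trigger remote analysis as data files appear. Paths and the sbatch
# wrapper script are hypothetical placeholders.
import subprocess
import time
from pathlib import Path

WATCH_DIR = Path("/gpfs/current/raw")          # hypothetical acquisition dir
SEEN: set[Path] = set()

def submit_analysis(data_file: Path) -> None:
    subprocess.run(
        ["sbatch", "analysis_job.sh", str(data_file)],  # runs on the cluster
        check=True)

while True:
    for f in sorted(WATCH_DIR.glob("*.h5")):
        if f not in SEEN:
            SEEN.add(f)
            submit_analysis(f)
    time.sleep(5)                               # simple polling loop
```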
21

Nickel, Philip J. "The Prospect of Artificial Intelligence‐Supported Ethics Review." Ethics & Human Research 46, no. 6 (2024): 25–28. http://dx.doi.org/10.1002/eahr.500230.

Abstract:
The burden of research ethics review falls not just on researchers, but also on those who serve on research ethics committees (RECs). With the advent of automated text analysis and generative artificial intelligence (AI), it has recently become possible to teach AI models to support human judgment, for example, by highlighting relevant parts of a text and suggesting actionable precedents and explanations. It is time to consider how such tools might be used to support ethics review and oversight. This essay argues that with a suitable strategy of engagement, AI can be used in a variety of ways that genuinely support RECs to manage their workload and improve the quality of review. It would be wiser to take an active role in the development of AI tools for ethics review, rather than to adopt ad hoc tools after the fact.
22

Vandebon, Jessica, Jose G. F. Coutinho, and Wayne Luk. "Scheduling Hardware-Accelerated Cloud Functions." Journal of Signal Processing Systems 93, no. 12 (2021): 1419–31. http://dx.doi.org/10.1007/s11265-021-01695-7.

Abstract:
This paper presents a Function-as-a-Service (FaaS) approach for deploying managed cloud functions onto heterogeneous cloud infrastructures. Current FaaS systems, such as AWS Lambda, allow domain-specific functionality, such as AI, HPC and image processing, to be deployed in the cloud while abstracting users from infrastructure and platform concerns. Existing approaches, however, use a single type of resource configuration to execute all function requests. In this paper, we present a novel FaaS approach that allows cloud functions to be effectively executed across heterogeneous compute resources, including hardware accelerators such as GPUs and FPGAs. We implement heterogeneous scheduling to tailor resource selection to each request, taking into account performance and cost concerns. In this way, our approach makes use of different processor types and quantities (e.g. 2 CPU cores), uniquely suited to handle different types of workload, potentially providing improved performance at a reduced cost. We validate our approach in three application domains: machine learning, bio-informatics, and physics, and target a hardware platform with a combined computational capacity of 24 FPGAs and 12 CPU cores. Compared to traditional FaaS, our approach achieves a cost improvement for non-uniform traffic of up to 8.9 times, while maintaining performance objectives.
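
Per-request resource selection reduces to choosing, among feasible hardware profiles, the one that meets the latency objective at the lowest cost. The profiles and numbers below are invented for illustration and are not measurements from the paper.

```python
# Sketch: pick a resource configuration per request by combining a latency
# objective with cost. Profiles and numbers are toy assumptions.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    est_latency_s: float      # predicted runtime for this function type
    cost_per_s: float         # normalized price of the resource

PROFILES = [
    Profile("cpu-2core", est_latency_s=4.0, cost_per_s=1.0),
    Profile("gpu",       est_latency_s=0.6, cost_per_s=6.0),
    Profile("fpga",      est_latency_s=0.9, cost_per_s=3.0),
]

def choose(deadline_s: float) -> Profile:
    feasible = [p for p in PROFILES if p.est_latency_s <= deadline_s]
    if not feasible:                       # nothing meets the deadline:
        return min(PROFILES, key=lambda p: p.est_latency_s)  # fastest option
    # Among feasible options, minimize total cost = latency * price.
    return min(feasible, key=lambda p: p.est_latency_s * p.cost_per_s)

print(choose(deadline_s=1.0).name)   # -> fpga under these toy numbers
```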
23

Alekseev, Aleksandr, Simone Campana, Xavier Espinal, et al. "On the road to a scientific data lake for the High Luminosity LHC era." International Journal of Modern Physics A 35, no. 33 (2020): 2030022. http://dx.doi.org/10.1142/s0217751x20300227.

Abstract:
The experiments at CERN’s Large Hadron Collider use the Worldwide LHC Computing Grid, the WLCG, for its distributed computing infrastructure. Through the distributed workload and data management systems, they provide seamless access to hundreds of grid, HPC and cloud based computing and storage resources that are distributed worldwide to thousands of physicists. LHC experiments annually process more than an exabyte of data using an average of 500,000 distributed CPU cores, to enable hundreds of new scientific results from the collider. However, the resources available to the experiments have been insufficient to meet data processing, simulation and analysis needs over the past five years as the volume of data from the LHC has grown. The problem will be even more severe for the next LHC phases. High Luminosity LHC will be a multiexabyte challenge where the envisaged Storage and Compute needs are a factor 10 to 100 above the expected technology evolution. The particle physics community needs to evolve current computing and data organization models in order to introduce changes in the way it uses and manages the infrastructure, focused on optimizations to bring performance and efficiency not forgetting simplification of operations. In this paper we highlight a recent R&D project related to scientific data lake and federated data storage.
24

Souza, Renan, Vitor Silva, Alexandre A. B. Lima, Daniel de Oliveira, Patrick Valduriez, and Marta Mattoso. "Distributed in-memory data management for workflow executions." PeerJ Computer Science 7 (May 7, 2021): e527. http://dx.doi.org/10.7717/peerj-cs.527.

Abstract:
Complex scientific experiments from various domains are typically modeled as workflows and executed on large-scale machines using a Parallel Workflow Management System (WMS). Since such executions usually last for hours or days, some WMSs provide user steering support, i.e., they allow users to run data analyses and, depending on the results, adapt the workflows at runtime. A challenge in the parallel execution control design is to manage workflow data for efficient executions while enabling user steering support. Data access for high scalability is typically transaction-oriented, while for data analysis, it is online analytical-oriented so that managing such hybrid workloads makes the challenge even harder. In this work, we present SchalaDB, an architecture with a set of design principles and techniques based on distributed in-memory data management for efficient workflow execution control and user steering. We propose a distributed data design for scalable workflow task scheduling and high availability driven by a parallel and distributed in-memory DBMS. To evaluate our proposal, we develop d-Chiron, a WMS designed according to SchalaDB’s principles. We carry out an extensive experimental evaluation on an HPC cluster with up to 960 computing cores. Among other analyses, we show that even when running data analyses for user steering, SchalaDB’s overhead is negligible for workloads composed of hundreds of concurrent tasks on shared data. Our results encourage workflow engine developers to follow a parallel and distributed data-oriented approach not only for scheduling and monitoring but also for user steering.
25

Du, Ran, Jingyan Shi, Xiaowei Jiang, and Jiaheng Zou. "Cosmos: A Unified Accounting System both for the HTCondor and Slurm Clusters at IHEP." EPJ Web of Conferences 245 (2020): 07060. http://dx.doi.org/10.1051/epjconf/202024507060.

Abstract:
HTCondor was adopted to manage the High Throughput Computing (HTC) cluster at IHEP in 2016. In 2017 a Slurm cluster was set up to run High Performance Computing (HPC) jobs. To provide accounting services for these two clusters, we implemented a unified accounting system named Cosmos. Multiple workloads bring different accounting requirements. Briefly speaking, there are four types of jobs to account for. First of all, 30 million single-core jobs run in the HTCondor cluster every year. Secondly, Virtual Machine (VM) jobs run in the legacy HTCondor VM cluster. Thirdly, parallel jobs run in the Slurm cluster, and some of these jobs are run on the GPU worker nodes to accelerate computing. Lastly, some selected HTC jobs are migrated from the HTCondor cluster to the Slurm cluster for research purposes. To satisfy all the mentioned requirements, Cosmos is implemented with four layers: acquisition, integration, statistics and presentation. Details about the issues and solutions of each layer will be presented in the paper. Cosmos has run in production for two years, and the status shows that it is a well-functioning system that also meets the requirements of the HTCondor and Slurm clusters.
26

Mohd Noor, Noorfaizalfarid, Nadhirah Mohd Napi, and Izzati Farzana Ibni Amin. "The Development of Autonomous Examination Paper Application: A Case Study in UiTM Cawangan Perlis." Journal of Computing Research and Innovation 4, no. 2 (2019): 21–30. http://dx.doi.org/10.24191/jcrinn.v4i2.105.

Abstract:
Examinations play a vital role in measuring the capabilities of students in their learning. Hence, generating question papers in an effective way is a demanding job for educators in educational institutions. Using traditional methods is monotonous and time-consuming. Today, Autonomous Examination Paper (AEP) systems are used to produce exam papers. Many researchers have proposed effective AEPs to be used by educators. This paper aims to investigate AEP development and to construct an AEP in UiTM Cawangan Perlis. As a result, the Ad-Hoc Question Paper Application (AQPA) has been developed using the Fisher-Yates algorithm to generate questions for exam papers in the university. Evaluation based on Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) reveals that lecturers in the university are able to interact with AQPA and willing to use it as a tool to minimize their workload. However, more improvement must be done on AQPA for it to be an effective AEP. To conclude, AEPs bring significance to educators and can be improved with the latest technology.
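
The Fisher-Yates algorithm named in the abstract can be sketched in a few lines; the toy question pool below is illustrative, and AQPA's own data model is not shown.

```python
# Sketch: Fisher-Yates shuffle used to draw exam questions without repetition.
import random

def fisher_yates_shuffle(items: list) -> list:
    items = list(items)                       # work on a copy
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)              # pick from the unshuffled prefix
        items[i], items[j] = items[j], items[i]
    return items

def draw_questions(pool: list[str], n: int) -> list[str]:
    if n > len(pool):
        raise ValueError("not enough questions in the pool")
    return fisher_yates_shuffle(pool)[:n]

pool = [f"Q{k}" for k in range(1, 21)]
print(draw_questions(pool, 5))
```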
27

McBride, Liza-Jane, Cate Fitzgerald, Laura Morrison, and Julie Hulcombe. "Pre-entry student clinical placement demand: can it be met?" Australian Health Review 39, no. 5 (2015): 577. http://dx.doi.org/10.1071/ah14156.

Abstract:
Objectives The Clinical Education Workload Management Initiative (the Initiative) is a unique, multiprofessional, jurisdiction-wide approach and reform process enshrined within an industrial agreement. The Initiative enabled significant investment in allied health clinical education across Queensland public health services to address the workload associated with providing pre-entry clinical placements. This paper describes the outcomes of a quality review activity to measure the impact of the Initiative on placement capacity and workload management for five allied health professions. Data related to several key factors impacting on placement supply and demand in addition to qualitative perspectives from workforce surveys are reported. Methods Data from a range of quality review actions including collated placement activity data, and workforce and student cohort statistics were appraised. Stakeholder perspectives reported in surveys were analysed for emerging themes. Results Placement offers showed an upward trend in the context of increased university program and student numbers and in contrast with a downward trend in full-time equivalent (FTE) staff numbers. Initiative-funded positions were identified as a major factor in individual practitioners taking more students, and staff and managers valued the Initiative-funded positions’ support before and during placements, in the coordination of placements, and in building partnerships with universities. Conclusions The Initiative enabled a co-ordinated response to meeting placement demand and enhanced collaborations between the health and education sectors. Sustaining pre-entry student placement provision remains a challenge for the future. What is known about the topic? The literature clearly identifies factors impacting on increasing demand for clinical placements and a range of strategies to increase clinical placement capacity. However, reported initiatives have mostly been ad hoc or reactive responses, often isolated within services or professions. What does this paper add? This paper describes implementation of a clinical placement capacity building initiative within public sector health services developed from a unique opportunity to provide funding through an industrial agreement. The Initiative aimed to address the workload associated with clinical education of pre-entry students and new graduates. What are the implications for practitioners? This paper demonstrates that systematic commitment to, and funding of, clinical education across a jurisdiction’s public health services is able to increase placement capacity, even when staffing numbers are in decline.
28

Sharma, Anupama, and Ram Bharose. "Suggestions to Reduce Risk Factors among the Health Workers in Agra City of Uttar Pradesh, India." Pollution Research 42, no. 04 (2023): 524–26. http://dx.doi.org/10.53550/pr.2023.v42i04.020.

Abstract:
This study was carried out among the HCWs (both males and females) of 30 hospitals of Agra city, Uttar Pradesh, India. There were a variety of HCWs in the study group, including senior residents, junior residents, interns, undergraduate medical students, staff and student nurses, and staff and student laboratory technicians. The study was carried out with participation from 600 HCWs. In Agra, Uttar Pradesh, India, a city tertiary hospital conducted a hospital-based cross-sectional study. Data were collected using a predesigned schedule from 600 respondents (Doctors, nurses, laboratory technicians and staff). Therefore, this study was undertaken to study the awareness of standard occupational safety measures such as universal precautions and compliance in daily practice among paramedical workers. Healthcare professionals (HCP) face risks while providing preventive, curative, and rehabilitation services. Medical science advancements have increased safety to some extent, but contemporary technology has also made healthcare very complex and dangerous. This study was therefore conducted to find out how well-informed paramedical workers were about common workplace safety precautions like general safety measures and how often they followed them. There are numerous risks associated with medical care, including radiation exposure, violence, psychiatric disorders, patient stalking, and suicide. Due to patient handling and the rising number of obese patients, HCP are at a high risk for musculoskeletal disorders. Human immunodeficiency virus-related workload growth has resulted in greater difficulties. Many HCP are unaware of prevention strategies despite the possibility of exposure to hazards. Additionally, the system is not supportive, the prevention policies are unclear or difficult to access, or there is a problem with attitudes. So, HCP still experience problems, especially in developing nations. Health managers must make sure that healthcare is focused on assessing the risks faced by HCP, their causes, and doing everything they can to prevent them.
29

Hamilton, Ellen, Lydia Shone, Catherine Reynolds, Jianhua Wu, Ramesh Nadarajah, and Chris Gale. "Perceptions of healthcare professionals on the use of a risk prediction model to inform atrial fibrillation screening: qualitative interview study in English primary care." BMJ Open 15, no. 2 (2025): e091675. https://doi.org/10.1136/bmjopen-2024-091675.

Abstract:
Objectives: There is increasing interest in guiding atrial fibrillation (AF) screening by risk rather than age. The perceptions of healthcare professionals (HCPs) towards the implementation of risk prediction models to target AF screening are unknown. We aimed to explore HCP perceptions about using risk prediction models for this purpose, and how models could be implemented. Design: Semistructured interviews with HCPs engaged in the Future Innovations in Novel Detection of AF (FIND-AF) study. Data were thematically analysed and synthesised to understand barriers and facilitators to AF screening and guiding screening using risk assessment. Setting: Five primary care practices in England taking part in the FIND-AF study. Participants: 15 HCPs (doctors, nurses/nurse practitioners, healthcare assistants, receptionists and practice managers). Results: Participants knew the health implications of AF and were supportive of the risk prediction models for AF screening. Four main themes developed: (1) health implications of AF, (2) positives and negatives of risk prediction in AF screening, (3) strategies to implement a risk prediction model and (4) barriers and facilitators to risk-guided AF screening. HCPs thought risk-guided AF screening would improve patient outcomes by reducing AF-related stroke, and this outweighed concerns over health anxiety and the impact on workload. Pop-up notifications and practice worklists were the main suggestions for risk-guided screening implementation and for this to be predominantly run by administrative staff. Many recommended the need for educating staff on AF and the prediction models to help aid the implementation of a clear protocol for longitudinal follow-up of high-risk patients and communication of risk. Conclusions: Overall, HCPs participating in the FIND-AF study were supportive of using risk prediction to guide AF screening and willing to take on extra workload to facilitate risk-guided AF screening. The best pathway design and the method of how risk is communicated to patients require further consideration. Trial registration number: NCT05898165.
30

Cortis, Natasha, and Abigail Powell. "Playing catch up? An exploration of supplementary work at home among Australian public servants." Journal of Industrial Relations 60, no. 4 (2018): 538–59. http://dx.doi.org/10.1177/0022185618769340.

Abstract:
Working at home has conventionally been understood as a formal, employer-sanctioned flexibility or ‘telework’ arrangement adopted primarily to promote work–life balance. However, work at home is now most commonly performed outside of normal working hours on an informal, ad hoc basis, to prepare for or catch up on tasks workers usually perform in the workplace. Scholarly assessment of this type of work has been sparse. To fill this gap, we undertook secondary analysis of a large data set, the Australian Public Service Employee Census, to explore the personal and organisational factors associated with middle-level managers regularly taking work home to perform outside of and in addition to their usual working hours. We conceptualise this as ‘supplementary work’. The analysis shows how supplementary work is a flexibility practice associated with high workloads and poor organisational supports for work–life balance, distinguishing it from other forms of home-based work. Whereas previous studies have not found gendered effects, we found women with caring responsibilities had higher odds of performing supplementary work. These findings expand understandings of contemporary flexibility practices and the factors that affect them, and underline the need for more nuanced theories of working at home.
31

Lestari, Indah, Fahrizal Fahrizal, Sumarjo Sumarjo, Lisa Rahmi, and Amirzan Amirzan. "Managerial Strategies for Curriculum Development in Sports, Nursing, and History Education Programs Based on Outcome-Based Education (OBE) and Research Findings." EduLine: Journal of Education and Learning Innovation 5, no. 2 (2025): 295–302. https://doi.org/10.35877/454ri.eduline3968.

Abstract:
This study investigates managerial strategies for curriculum development in sports science, nursing, and history education programs based on OBE and research findings. The research examines how program managers implement these strategies, the extent of research integration into curricula, and obstacles that affect curriculum effectiveness in producing professional graduates. This study employs qualitative data collected through in-depth interviews, participatory observation, and document analysis from three universities in Aceh, Indonesia. The participants included faculty deans, program heads, curriculum development team leaders, permanent faculty, and final-year students selected through purposive sampling. Data collection focused on managerial practices, stakeholder involvement, theory-practice integration, and research incorporation into curriculum development processes. The collected data was analyzed using interactive data analysis techniques involving data reduction, presentation, and conclusion drawing. The results show that three main managerial strategies are currently implemented: stakeholder-inclusive curriculum development involving both internal and external stakeholders, competency-based curriculum design structured around measurable learning outcomes, and theory-practice integration mechanisms through mandatory collaboration between theoretical and practical course instructors. However, three primary challenges emerged: research-curriculum misalignment reported by 78% of faculty, faculty workload imbalance identified by 85% of respondents, and industry-academia gaps noted by 67% of program administrators. Research integration showed varying implementation levels, with systematic approaches demonstrating higher effectiveness than ad-hoc methods. The findings suggest that effective OBE-based curriculum management requires systematic research-curriculum alignment mechanisms, flexible faculty role definitions supporting research-teaching integration, and ongoing industry partnerships informing both research directions and curricular content. Success depends on institutional commitment to developing coherent systems rather than implementing isolated practices
APA, Harvard, Vancouver, ISO, and other styles
32

Downie, Samantha, Graeme Nicol, Peter Young, et al. "COST-BENEFIT ANALYSIS FOR DEVELOPING A COST-EFFECTIVE TERTIARY METASTATIC BONE DISEASE PATHWAY FOR COMPLEX PERIACETABULAR METASTASES." Orthopaedic Proceedings 107-B, SUPP_2 (2025): 51. https://doi.org/10.1302/1358-992x.2025.2.051.

Full text
Abstract:
Patients with peri-acetabular bone metastases are currently treated by trauma services ad hoc, or by tertiary referral services such as sarcoma teams. These patients are high risk, with a significant complication rate, and benefit from a tertiary MDT approach. We aimed to assess the cost benefit of instituting a tertiary MBD service in a medium-sized DGH. This was a prospective modelling study estimating the cost of managing the MBD workload at a medium-sized orthopaedic centre (population 423,000) over a ten-year period (2026–2035). The cost-analysis projected cost differences in the following areas: implant cost, outpatient/inpatient care cost, intraoperative fracture rate and metalwork failure. In the study Health board (2026–2035), cancer prevalence will increase from 4528 to 8004 and the number of complex hip metastases will rise from 19 to 34/year. Instituting a tertiary MBD service will see implant costs rise from a mean of £94,536/year (baseline model) to £153,839 (total £593,026 over ten years). With more patients managed electively, patient care cost will fall from £2 million (baseline model) to £1.7 million in the MBD service (average saving of £29,748/year). Three metalwork failure scenarios were modelled versus the baseline projected 5% failure rate (13 with metalwork failure over ten years). A failure rate of 3% in the MBD service (8 patients) would save £222,634, 2% (5 patients) would save £333,951 and 1% (3 patients) would save £445,268. Overall, failure rates of 2% and 1% would favour an MBD service over the status quo (less expensive by £215/year at a 2% failure rate and £10,917/year at a 1% failure rate). This is the first cost-analysis to be performed on setting up a tertiary referral MBD service in a UK orthopaedic trauma unit. With a metalwork failure rate of 1% at five years, a tertiary MBD service would save £109,168 over ten years.
APA, Harvard, Vancouver, ISO, and other styles
33

Palacholla, Ramya sita, Ali Soltani, Josep A. Pous, Monika Dutkiewicz-piasecka, Debra Carter, and Elisabeth Piault-Louis. "Abstract 3663: A digital health solution for monitoring skin toxicities in cancer treatment." Cancer Research 85, no. 8_Supplement_1 (2025): 3663. https://doi.org/10.1158/1538-7445.am2025-3663.

Full text
Abstract:
Introduction: Skin toxicities such as maculo-papular rash, xerosis, or pruritus can hinder the optimal dosing of immunotherapy, chemotherapy, and targeted therapies1-4. Their management is primarily palliative and prompted by patients’ report of skin symptoms or impact on their activities of daily living (ADL). A digital product for the collection, transfer and visualization of data relevant to skin toxicities could contribute to early identification and palliative management and ultimately decreased morbidity for patients between clinic visits. Methods: A review of the published literature and clinical expert interviews (n=8) informed the content of a digital product. Behavioral psychology, measurement and data science drove the design of the interfaces and functionalities with a focus on user-engagement and on supporting diversity. Patients (n=20) receiving anticancer therapies and clinicians (oncologists and nurses, n=8) participated in qualitative user testing of the instructions, question flow, response options, educational material and disclaimers to document comprehension and usage. Results: Key determinants in identifying and reporting skin symptoms include the actual hue of the skin and the body location and surface affected by the skin manifestation(s). The app supports selection of the closest match to the user's skin tone; the patients then access tone-matched pictures from an AI-generated library of skin lesions as relevant references to report and rate symptom severity. A 3D interactive body avatar allows participants to zoom, record and annotate the symptom(s) location and extent. Patient-captured pictures of the skin lesions are automatically anonymized by AI photo processing to ensure patient privacy. The HCP portal provides an easy visualization of patient-reported symptoms and pictures within the context of ADL impact and the patient’s skin history. The digital solution includes a symptom-worsening alert to nudge a discussion between the patient and clinician. In addition, patients receive non-medical tips to self-manage non-severe symptoms to prevent higher morbidity. Most patients appreciated personalizing the app based on their skin tone. Content was easy to understand and was adapted based on rewording suggestions. Tracking skin symptoms was seen as an enabler of more precise discussions with clinicians and the main intrinsic usage driver. Clinicians were positive about the potential for useful insights into patients' skin status. HCPs reflected on a positive tradeoff between lower risk of significant morbidity and additional workload. Conclusion: The digital solution features several innovative components for reliable and sustained data capture of the onset and evolution of skin symptoms, providing relevant insight for the remote monitoring of patients at risk of skin toxicities. Further research will inform its clinical validity and utility. Citation Format: Ramya sita Palacholla, Ali Soltani, Josep A. Pous, Monika Dutkiewicz-piasecka, Debra Carter, Elisabeth Piault-Louis. A digital health solution for monitoring skin toxicities in cancer treatment [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular Abstracts); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1):Abstract nr 3663.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhou, Naweiluo, Yiannis Georgiou, Marcin Pospieszny, et al. "Container orchestration on HPC systems through Kubernetes." Journal of Cloud Computing 10, no. 1 (2021). http://dx.doi.org/10.1186/s13677-021-00231-z.

Full text
Abstract:
Containerisation demonstrates its efficiency in application deployment in Cloud Computing. Containers can encapsulate complex programs with their dependencies in isolated environments making applications more portable, hence are being adopted in High Performance Computing (HPC) clusters. Singularity, initially designed for HPC systems, has become their de facto standard container runtime. Nevertheless, conventional HPC workload managers lack micro-service support and deeply-integrated container management, as opposed to container orchestrators. We introduce a Torque-Operator which serves as a bridge between HPC workload manager (TORQUE) and container orchestrator (Kubernetes). We propose a hybrid architecture that integrates HPC and Cloud clusters seamlessly with little interference to HPC systems where container orchestration is performed on two levels.
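The bridging idea can be pictured with a short sketch. The snippet below is an illustrative assumption, not the Torque-Operator itself (which is implemented as a Kubernetes operator acting on custom resources): a hypothetical container-job description is rendered as a TORQUE batch script that runs the container through Singularity and is submitted with qsub. All field names, file names, and resource values are invented for the example.

```python
import subprocess
import tempfile

def torque_script_from_container_job(job: dict) -> str:
    """Render a hypothetical container-job description as a TORQUE/PBS batch script.

    The job dict and its keys (name, image, command, nodes, ppn, walltime) are
    illustrative assumptions, not the Torque-Operator's real resource schema.
    """
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -N {job['name']}",
        f"#PBS -l nodes={job['nodes']}:ppn={job['ppn']}",
        f"#PBS -l walltime={job['walltime']}",
        # Run the containerised application with Singularity, the de facto
        # HPC container runtime mentioned in the abstract.
        f"singularity exec {job['image']} {job['command']}",
    ])

def submit_to_torque(job: dict) -> str:
    """Write the generated script to disk and hand it to qsub; returns the job id."""
    script = torque_script_from_container_job(job)
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()  # TORQUE prints the job id on stdout

if __name__ == "__main__":
    job_spec = {  # hypothetical description, e.g. extracted from a Kubernetes custom resource
        "name": "lammps-demo",
        "image": "lammps.sif",
        "command": "lmp -in in.lj",
        "nodes": 2,
        "ppn": 16,
        "walltime": "01:00:00",
    }
    print(submit_to_torque(job_spec))
```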
APA, Harvard, Vancouver, ISO, and other styles
35

Zhou, Naweiluo, Yiannis Georgiou, Marcin Pospieszny, et al. "Container orchestration on HPC systems through Kubernetes." February 22, 2021. https://doi.org/10.1186/s13677-021-00231-z.

Full text
Abstract:
Containerisation demonstrates its efficiency in application deployment in Cloud Computing. Containers can encapsulate complex programs with their dependencies in isolated environments making applications more portable, hence are being adopted in High Performance Computing (HPC) clusters. Singularity, initially designed for HPC systems, has become their de facto standard container runtime. Nevertheless, conventional HPC workload managers lack micro-service support and deeply-integrated container management, as opposed to container orchestrators. We introduce a Torque-Operator which serves as a bridge between HPC workload manager (TORQUE) and container orchestrator (Kubernetes). We propose a hybrid architecture that integrates HPC and Cloud clusters seamlessly with little interference to HPC systems where container orchestration is performed on two levels.
APA, Harvard, Vancouver, ISO, and other styles
36

López, Marta, Esteban Stafford, and Jose Luis Bosque. "Intelligent energy pairing scheduler (InEPS) for heterogeneous HPC clusters." Journal of Supercomputing 81, no. 2 (2025). https://doi.org/10.1007/s11227-024-06907-y.

Full text
Abstract:
In recent years, energy consumption has become a limiting factor in the evolution of high-performance computing (HPC) clusters in terms of environmental concern and maintenance cost. The computing power of these clusters is increasing, together with the demands of the workloads they execute. A key component in HPC systems is the workload manager, whose operation has a substantial impact on the performance and energy consumption of the clusters. Recent research has employed machine learning techniques to optimise the operation of this component. However, these attempts have focused on homogeneous clusters where all the cores are pooled together and considered equal, disregarding the fact that they are contained in nodes and that they can have different performances. This work presents an intelligent job scheduler based on deep reinforcement learning that focuses on reducing energy consumption of heterogeneous HPC clusters. To this aim it leverages information provided by the users as well as the power consumption specifications of the compute resources of the cluster. The scheduler is evaluated against a set of heuristic algorithms showing that it has potential to give similar results, even in the face of the extra complexity of the heterogeneous cluster.
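To make the flavour of such decisions concrete, the sketch below is a simple greedy heuristic written for illustration only; it is not the InEPS reinforcement-learning policy. It estimates the energy of running a job on each heterogeneous node from the node's per-core power specification and a speed-scaled runtime estimate supplied by the user, then picks the cheapest feasible node.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cores: int
    watts_per_core: float   # from the node's power specification
    relative_speed: float   # 1.0 = reference node; higher is faster

@dataclass
class Job:
    name: str
    cores: int
    ref_runtime_s: float    # user-provided estimate on the reference node

def estimated_energy_joules(job: Job, node: Node) -> float:
    """Energy ~ power drawn by the requested cores x runtime scaled by node speed."""
    runtime = job.ref_runtime_s / node.relative_speed
    power = job.cores * node.watts_per_core
    return power * runtime

def pick_node(job: Job, nodes: list[Node]) -> Node | None:
    """Greedy energy-aware placement: the cheapest feasible node wins."""
    feasible = [n for n in nodes if n.free_cores >= job.cores]
    if not feasible:
        return None
    return min(feasible, key=lambda n: estimated_energy_joules(job, n))

if __name__ == "__main__":
    cluster = [
        Node("fast-node", free_cores=32, watts_per_core=6.0, relative_speed=1.6),
        Node("slow-node", free_cores=64, watts_per_core=3.5, relative_speed=1.0),
    ]
    job = Job("cfd-case", cores=16, ref_runtime_s=3600)
    best = pick_node(job, cluster)
    print(best.name, estimated_energy_joules(job, best))
```

With these made-up numbers the slower node wins on energy despite the longer runtime, which is exactly the kind of trade-off an energy-aware scheduler must weigh against queueing and deadline pressure.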
APA, Harvard, Vancouver, ISO, and other styles
37

Thomas, Rhalena A., Michael R. Fiorini, Saeid Amiri, Edward A. Fon, and Sali M. K. Farhan. "ScRNAbox: empowering single-cell RNA sequencing on high performance computing systems." BMC Bioinformatics 25, no. 1 (2024). http://dx.doi.org/10.1186/s12859-024-05935-y.

Full text
Abstract:
Background Single-cell RNA sequencing (scRNAseq) offers powerful insights, but the surge in sample sizes demands more computational power than local workstations can provide. Consequently, high-performance computing (HPC) systems have become imperative. Existing web apps designed to analyze scRNAseq data lack scalability and integration capabilities, while analysis packages demand coding expertise, hindering accessibility. Results In response, we introduce scRNAbox, an innovative scRNAseq analysis pipeline meticulously crafted for HPC systems. This end-to-end solution, executed via the SLURM workload manager, efficiently processes raw data from standard and Hashtag samples. It incorporates quality control filtering, sample integration, clustering, cluster annotation tools, and facilitates cell type-specific differential gene expression analysis between two groups. We demonstrate the application of scRNAbox by analyzing two publicly available datasets. Conclusion ScRNAbox is a comprehensive end-to-end pipeline designed to streamline the processing and analysis of scRNAseq data. By responding to the pressing demand for a user-friendly, HPC solution, scRNAbox bridges the gap between the growing computational demands of scRNAseq analysis and the coding expertise required to meet them.
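As a concrete picture of the SLURM execution model such pipelines rely on, the sketch below generates and submits a job-array script that fans one processing step out over many samples. The file names, resource values, and qc_step.R entry point are assumptions for illustration, not scRNAbox's actual interface.

```python
import subprocess
from pathlib import Path

def write_array_script(sample_list: Path, script_path: Path) -> None:
    """Generate an sbatch script that runs one QC task per sample via a job array.

    Resource values and the qc_step.R entry point are assumptions for
    illustration, not scRNAbox's real configuration.
    """
    n_samples = len(sample_list.read_text().splitlines())
    script_path.write_text("\n".join([
        "#!/bin/bash",
        "#SBATCH --job-name=scrna-qc",
        f"#SBATCH --array=1-{n_samples}",
        "#SBATCH --cpus-per-task=8",
        "#SBATCH --mem=32G",
        "#SBATCH --time=02:00:00",
        # Pick the sample for this array task from the sample list (one per line).
        f'SAMPLE=$(sed -n "${{SLURM_ARRAY_TASK_ID}}p" {sample_list})',
        'Rscript qc_step.R --sample "$SAMPLE"',
    ]))

if __name__ == "__main__":
    samples = Path("samples.txt")       # one sample identifier per line
    script = Path("qc_array.sbatch")
    write_array_script(samples, script)
    subprocess.run(["sbatch", str(script)], check=True)
```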
APA, Harvard, Vancouver, ISO, and other styles
38

Linnert, Barry, Cesar Augusto F. De Rose, and Hans‐Ulrich Heiss. "Toward a Dynamic Allocation Strategy for Deadline‐Oriented Resource and Job Management in HPC Systems." Concurrency and Computation: Practice and Experience, October 26, 2024. http://dx.doi.org/10.1002/cpe.8310.

Full text
Abstract:
As high‐performance computing (HPC) becomes a tool used in many different workflows, quality of service (QoS) becomes increasingly important. In many cases, this includes the reliable execution of an HPC job and the generation of the results by a certain deadline. The resource and job management system (RJMS) or simply RMS is responsible for receiving the job requests and executing the jobs with a deadline‐oriented policy to support the workflows. In this article, we evaluate how well static resource management policies cope with deadline‐constrained HPC jobs and explore two variations of a dynamic policy in this context. As the Hilbert curve‐based approach used by the SLURM workload manager represents the state‐of‐the‐art in production environments, it was selected as one of the static allocation strategies. The Manhattan median approach, a second allocation strategy, was introduced in research work that aims to minimize the communication overhead of parallel programs by providing more compact partitions than the Hilbert curve approach. In contrast to the static partitions provided by the Hilbert curve approach and the Manhattan median approach, the leak approach focuses on supporting dynamic runtime behavior of the jobs and assigning nodes of the HPC system on demand at runtime. Whereas the contiguous leak version also relies on a compact set of nodes, the noncontiguous leak can provide additional nodes at a greater distance from the nodes already used by the job. Our preliminary results clearly show that a dynamic policy is needed to meet the requirements of a modern deadline‐oriented RMS scenario.
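The Hilbert-curve idea is easy to illustrate on a toy two-dimensional node mesh: nodes are ordered along a space-filling curve, and a job receives a contiguous slice of that order, which tends to be physically compact. The sketch below is a simplified illustration (real systems use higher-dimensional topologies, and this is not SLURM's plugin code); the coordinate-to-index conversion is the standard iterative Hilbert mapping.

```python
def hilbert_index(n: int, x: int, y: int) -> int:
    """Position of mesh coordinate (x, y) along the Hilbert curve of an
    n x n grid (n must be a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def allocate(free_nodes: set[tuple[int, int]], size: int, n: int = 8):
    """Static, Hilbert-ordered allocation: walk the free nodes in curve order
    and take the first `size`, which tends to yield a physically compact set."""
    ordered = sorted(free_nodes, key=lambda xy: hilbert_index(n, *xy))
    if len(ordered) < size:
        return None                      # job must wait for resources
    return ordered[:size]

if __name__ == "__main__":
    mesh = {(x, y) for x in range(8) for y in range(8)}   # 8x8 mesh, all nodes
    busy = {(x, y) for x in range(2) for y in range(2)}   # a running 2x2 job
    print(allocate(mesh - busy, size=6))
```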
APA, Harvard, Vancouver, ISO, and other styles
39

Elshambakey, Mohammed, Aya I. Maiyza, Mona S. Kashkoush, Ghada M. Fathy, and Hanan A. Hassan. "The Egyptian national HPC grid (EN-HPCG): open-source Slurm implementation from cluster to grid approach." Journal of Supercomputing, April 17, 2024. http://dx.doi.org/10.1007/s11227-024-06041-9.

Full text
Abstract:
Recently, Egypt has recognized the pivotal role of High Performance Computing in advancing science and innovation. Additionally, Egypt realizes the importance of collaboration between different institutions and universities to consolidate their own computational and data resources into a unified platform to serve different disciplines (e.g., scientific, industrial, governmental). Otherwise, additional resources would need to be purchased with the associated cost, effort, and time difficulties (e.g., setup, administration, maintenance, etc.). Thus, this paper delves into the architecture and capabilities of the EN-HPCG grid using two different workload management systems: (i) Slurm (Open-Source) and (ii) PBS Pro (Licensed). This paper compares the performance of the grid between Slurm and PBS Pro in specific high-throughput computing (HTC) applications using the NAS Grid parallel benchmark (NGB) to determine which workload manager is more suitable for EN-HPCG. The evaluation includes grid-level performance metrics such as throughput and the number of tasks completed as a function of time. Also, the presented methodology aims to assist potential partners in their decision-making process to join the EN-HPCG grid, with a focus on the site speed-up metric. Our results showed that, unless an open-source solution without cost and license problems is an obligation (in which case, Slurm is the viable solution), it is not advisable to integrate a cluster with high-speed hardware with a cluster possessing outdated hardware when using the Slurm scheduler. In contrast, the PBS Pro scheduler takes into account online decision-making in a dynamic environment using a unified grid.
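The grid-level metrics mentioned above are straightforward to derive from job completion logs. The sketch below assumes a minimal log format (task completion times in seconds and per-site benchmark runtimes) and computes tasks completed as a function of time plus a site speed-up defined as reference runtime divided by the candidate site's runtime; it is an illustration, not the paper's evaluation code.

```python
from bisect import bisect_right

def throughput_curve(finish_times_s: list[float], horizon_s: float, step_s: float = 60.0):
    """Number of tasks completed as a function of time (cumulative count per step)."""
    done = sorted(finish_times_s)
    t, curve = 0.0, []
    while t <= horizon_s:
        curve.append((t, bisect_right(done, t)))  # tasks finished by time t
        t += step_s
    return curve

def site_speedup(runtime_reference_s: float, runtime_site_s: float) -> float:
    """Speed-up of a candidate site relative to a reference cluster for the same
    benchmark task (values below 1 mean the site would slow the grid down)."""
    return runtime_reference_s / runtime_site_s

if __name__ == "__main__":
    # Hypothetical NGB task completion times (seconds from workload start).
    finishes = [120, 150, 180, 400, 410, 430, 900]
    for t, n in throughput_curve(finishes, horizon_s=900, step_s=300):
        print(f"t={t:>5.0f}s  completed={n}")
    print("site speed-up:", site_speedup(runtime_reference_s=340, runtime_site_s=425))
```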
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Lingfei, Maria A. Rodriguez, and Nir Lipovetzky. "Optimizing HPC scheduling: a hierarchical reinforcement learning approach for intelligent job selection and allocation." Journal of Supercomputing 81, no. 8 (2025). https://doi.org/10.1007/s11227-025-07396-3.

Full text
Abstract:
High-performance computing (HPC) systems face increasing challenges in job scheduling due to the evolving complexity of computational tasks and the growing diversity and heterogeneity of resources. Job allocation is a critical aspect in contemporary HPC systems, due to compute nodes possessing an increased capacity in terms of physical resources and having the capability to execute multiple jobs simultaneously. However, job allocation is often overlooked in existing reinforcement learning (RL)-based schedulers that mainly focus on selecting suitable jobs from the job queue and leave allocation to overly simplistic policies, such as first-available allocation. The bin-packing nature at the node level of modern HPC necessitates more refined and intelligent allocation strategies. This paper introduces HeraSched, a novel hierarchical reinforcement learning (HRL)-based scheduler, adept at intelligent job selection without separate backfilling and heterogeneity-aware allocation, tailored for modern HPC environments. It efficiently manages diverse workloads across CPU and GPU cluster partitions. We evaluate HeraSched using real-world workloads, demonstrating significant improvements in reducing job waiting times and preventing job starvation compared to 27 scheduling combinations. In validation, the best maximum waiting time among compared methods is 78% higher than HeraSched’s result in overloaded CPU partitions. This performance demonstrates HeraSched’s ability to manage intensely stressed workloads and adapt to previously unseen, high-demand scenarios, thereby establishing a new standard in HPC job scheduling.
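The two levels of decision described above, which job to pick and where to place it, can be mimicked with hand-written rules for illustration. The sketch below is an assumption for exposition, not HeraSched's learned policy: it scores waiting jobs to trade queue age against size, then places the chosen job with a best-fit rule, a bin-packing-style alternative to first-available allocation.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cpus: int
    gpus: int
    wait_s: float          # time already spent in the queue

@dataclass
class Node:
    name: str
    free_cpus: int
    free_gpus: int

def select_job(queue: list[Job]) -> Job:
    """Level 1: favour long-waiting jobs, with a mild penalty on size to keep
    small jobs flowing (the weights are arbitrary illustration values)."""
    return max(queue, key=lambda j: j.wait_s - 10.0 * (j.cpus + 4 * j.gpus))

def best_fit_node(job: Job, nodes: list[Node]) -> Node | None:
    """Level 2: bin-packing-style placement, choosing the feasible node that
    leaves the least slack instead of the first available one."""
    feasible = [n for n in nodes
                if n.free_cpus >= job.cpus and n.free_gpus >= job.gpus]
    if not feasible:
        return None
    return min(feasible,
               key=lambda n: (n.free_cpus - job.cpus) + 4 * (n.free_gpus - job.gpus))

if __name__ == "__main__":
    queue = [Job("sim-a", cpus=64, gpus=0, wait_s=1800),
             Job("train-b", cpus=8, gpus=2, wait_s=600)]
    nodes = [Node("cpu-1", free_cpus=96, free_gpus=0),
             Node("gpu-1", free_cpus=32, free_gpus=4)]
    job = select_job(queue)
    node = best_fit_node(job, nodes)
    print(job.name, "->", node.name if node else "wait")
```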
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Jiali, Cheng Wang, Andrew Orr, and Rao Kotamarthi. "A parallel workflow implementation for PEST version 13.6 in high-performance computing for WRF-Hydro version 5.0: a case study over the Midwestern United States." March 9, 2019. https://doi.org/10.5281/zenodo.3247116.

Full text
Abstract:
This study ported the parallel PEST to High Performance Computing (HPC) clusters and adapted it to work with the Weather Research and Forecasting Hydrological (WRF-Hydro) modeling system. The porting involved writing scripts to modify the workflow for different workload managers and job schedulers, as well as developing code to connect parallel-PEST to WRF-Hydro. We developed a case study using a flood in the midwestern United States in 2013 to test the operational feasibility and computational benefits of the HPC-enabled parallel PEST linked to WRF-Hydro. The files include (1) PESTCode.zip, the original code of PEST with a slight modification, see ReadMe file; (2) ScriptsForEdison.zip, the scripts customized for PEST to run on Edison, one of the supercomputers operated by the U.S. Department of Energy; (3) PESTFiles.zip, the four types of files that are required by parallel PEST and custom-designed for WRF-Hydro.
APA, Harvard, Vancouver, ISO, and other styles
42

Allen, Tyler, Bennett Cooper, and Rong Ge. "Fine-Grain Quantitative Analysis of Demand Paging in Unified Virtual Memory." ACM Transactions on Architecture and Code Optimization, November 14, 2023. http://dx.doi.org/10.1145/3632953.

Full text
Abstract:
The abstraction of a shared memory space over separate CPU and GPU memory domains has eased the burden of portability for many HPC codebases. However, users pay for ease of use provided by system-managed memory with a moderate-to-high performance overhead. NVIDIA Unified Virtual Memory (UVM) is currently the primary real-world implementation of such abstraction and offers a functionally equivalent testbed for in-depth performance study for both UVM and future Linux Heterogeneous Memory Management (HMM) compatible systems. The continued advocacy for UVM and HMM motivates improvement of the underlying system. We focus on UVM-based systems and investigate root causes of UVM overhead, a non-trivial task due to complex interactions of multiple hardware and software constituents and the desired cost granularity. In our prior work, we delved deeply into UVM system architecture and showed internal behaviors of page fault servicing in batches. We provided quantitative evaluation of batch handling for various applications under different scenarios, including prefetching and oversubscription. We revealed the driver workload depends on the interactions among application access patterns, GPU hardware constraints, and host OS components. Host OS components have significant overhead present across implementations, warranting close attention. This extension furthers our prior study in three aspects: fine-grain cost analysis and breakdown, extension to multiple GPUs, and investigation of platforms with different GPU-GPU interconnects. We take a top-down approach to quantitative batch analysis and uncover how constituent component costs accumulate and overlap governed by synchronous and asynchronous operations. Our multi-GPU analysis shows reduced cost of GPU-GPU batch workloads compared to CPU-GPU workloads. We further demonstrate that while specialized interconnects, NVLink, can improve batch cost, their benefits are limited by host OS software overhead and GPU oversubscription. This study serves as a proxy for future shared memory systems, such as those that interface with HMM, and the development of interconnects.
APA, Harvard, Vancouver, ISO, and other styles
43

Shanks, Emelie. "Temporary Agency Workers in the Personal Social Services—Doing Core Tasks in the Periphery." British Journal of Social Work, November 14, 2023. http://dx.doi.org/10.1093/bjsw/bcad244.

Full text
Abstract:
Despite concerns about negative consequences for clients and permanent staff, temporary agency workers (TAWs) are frequently employed to manage staff shortages in personal social services (PSS) in Sweden and elsewhere. Drawing on qualitative interviews with thirty-four TAWs, managers and permanent social workers, this article aims to enhance our understanding of how TAWs are utilised in the PSS and the impact this has on (i) the preconditions for TAWs and (ii) the work environment for permanent employees. The findings suggest that TAWs are mainly contracted for core tasks, and often for heavy-duty work. In order to meet demands for expedient case administration, the supportive aspects of social work are sometimes deprioritised. Permanent staff report that positive effects of the use of TAWs include relief of workload and an influx of new knowledge, whereas negative effects include stagnated work development, deteriorating group dynamics and additional work. Moreover, it is shown that TAWs often reside in the periphery of the organisation and that they typically are contracted on an ad-hoc basis and during times of crisis. It is suggested that the organisational conditions that TAWs are contracted to help remedy paradoxically are unlikely to create the best preconditions for a successful use.
APA, Harvard, Vancouver, ISO, and other styles
44

Haroon, Faisal. "Designing a Hybrid Cloud-Grid Architecture for High-Performance Computing: A Unified Model for Resource Sharing, Task Distribution, and Scalability in Heterogeneous Environments." Global Research Journal of Natural Science and Technology, April 28, 2025. https://doi.org/10.53762/grjnst.03.02.09.

Full text
Abstract:
The modern and rapidly growing number of computational problems in scientific and industrial applications requires not only powerful systems but also adaptive, scalable and fault-tolerant high-performance computing architectures. This paper presents a new hybrid cloud-grid (HCG) architecture that integrates the merits of conventional grid computing and cloud services to overcome critical issues including resource provisioning, scheduling and allocation, fault tolerance, and cost minimization in distributed environments. It features an intelligent middle tier to manage resources, cloud resource shedding mechanisms for scaling, and AI-based scheduling for workload distribution. Simulation-based performance analysis highlights the proposed hybrid model’s effectiveness in cutting overall task-solving time by up to 35.7%, improving average resource utilization by up to 81.3%, and bringing down recovery time and operational costs by approximately 37.7% compared with existing standalone systems. Moreover, it provides better energy efficiency and resource reliability for a given workload level. These results indicate that the hybrid cloud-grid framework is a viable model for future HPC solutions that are scalable, reliable, and economically feasible for tackling the requirements posed by data-intensive or near-real-time applications. Future work involves deploying the model in a real environment, incorporating it in multi-cloud and edge settings, and adding more sophisticated automated management techniques to augment its performance and flexibility.
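The resource-shedding idea can be pictured with a simple threshold rule; the sketch below is an illustrative assumption, not the paper's AI-based scheduler. Work stays on the grid while the estimated queue delay is acceptable, and overflow jobs are shed to on-demand cloud capacity when it is not.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    core_hours: float
    deadline_s: float            # time by which results are needed

def estimated_grid_wait_s(queued_core_hours: float, grid_cores: int) -> float:
    """Crude queue-delay estimate: the backlog drained at full grid capacity."""
    return queued_core_hours * 3600.0 / grid_cores

def place(job: Job, queued_core_hours: float, grid_cores: int,
          max_acceptable_wait_s: float = 1800.0) -> str:
    """Shed to cloud when the grid backlog threatens the deadline or exceeds
    the acceptable wait; otherwise keep the job on the (cheaper) grid."""
    wait = estimated_grid_wait_s(queued_core_hours, grid_cores)
    if wait > max_acceptable_wait_s or wait > job.deadline_s:
        return "cloud"
    return "grid"

if __name__ == "__main__":
    backlog = 5000.0                      # core-hours already queued on the grid
    print(place(Job("urgent-render", 200, deadline_s=3600), backlog, grid_cores=4096))
    print(place(Job("overnight-batch", 800, deadline_s=86400), backlog, grid_cores=16384))
```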
APA, Harvard, Vancouver, ISO, and other styles
45

Griffith, Lauren Miller, Cameron S. Griffith, and J. Hunter Peden. "Tribes or Nomads: A Comparative Study of Collaborative Learning Frameworks." Journal 5, no. 1 (2015). http://dx.doi.org/10.22582/ta.v5i1.324.

Full text
Abstract:
This paper explores the relative value of "permanent" working groups versus "ad hoc" groups in large introductory level anthropology courses. The aim is to manage tutor workload while simultaneously enhancing students’ attainment of the learning objectives. In addition, a main learning objective was for students to practice critical thinking and develop an understanding of cultural relativism. We argue that one effective experiential approach to teaching such concepts is collaborative learning with others in diverse learning groups. We explore the factors enhancing such learning experiences. Based on our survey research we conclude that ad hoc groups are better for exposing students to diverse perspectives and permanent working groups are better for fostering an intimate learning experience within a large class. Although our original goals for using groups were mainly pragmatic, our research on teaching methods shows that it exposes students to diverse perspectives. We find this particularly appropriate for courses in anthropology aiming to teach the meaning of diversity and other related concepts. Therefore, we recommend that tutors/instructors choose their collaborative learning strategy based both on their intended learning outcomes and their learning environments.
APA, Harvard, Vancouver, ISO, and other styles
46

Gorbenko, Ksenia, Eliezer Mendelev, Marla Dubinsky, and Laurie Keefer. "Establishing a medical home for patients with inflammatory bowel diseases: a qualitative study." Qualitative Research in Medicine and Healthcare 4, no. 2 (2020). http://dx.doi.org/10.4081/qrmh.2020.8801.

Full text
Abstract:
The Patient-Centered Medical Home model has gained popularity in primary care to provide early effective care to patients with chronic conditions. Prior research on specialty medical homes has been cross-sectional and focused on patient outcomes. The objective of this longitudinal qualitative study was to identify best practices in establishing a specialty medical home in Inflammatory Bowel Diseases (IBD Home). The multimethod study included direct observations of multidisciplinary team meetings (30 hours over one year) and in-depth interviews with individual team members (N=11) and referring physicians (N=6) around their participation in the IBD home. All interviews were professionally transcribed verbatim. Two researchers coded transcripts for themes using NVivo software. Weekly team meetings (N=9±3) included behavioral health providers, nurse practitioners, nurses, dietitians, a clinical pharmacist, and clinical coordinators. Physicians referred patients with psychosocial comorbidities to the IBD home. Initially the team enrolled all referred patients. Later, they developed exclusion criteria and a patient complexity score to manage the volume. Some providers reported increase in their workload (social work, nutrition) while others’ workload was unaffected (gastroenterology, nursing). No physicians attended team meetings regularly. Regular in-person meetings helped to strengthen the team. Involving physicians as consultants on an ad hoc basis without regular meeting attendance empowered other team members to take ownership of the IBD Home.
APA, Harvard, Vancouver, ISO, and other styles
47

Oh, Deok-Jae, Yaebin Moon, Do Kyu Ham, et al. "MaPHeA: A Framework for Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation." ACM Transactions on Embedded Computing Systems, March 31, 2022. http://dx.doi.org/10.1145/3527853.

Full text
Abstract:
Hardware performance monitoring units (PMUs) are a standard feature in modern microprocessors, providing a rich set of microarchitectural event samplers. Recently, numerous profile-guided optimization (PGO) frameworks have exploited them to feature much lower profiling overhead compared to conventional instrumentation-based frameworks. However, existing PGO frameworks mainly focus on optimizing the layout of binaries; they overlook rich information provided by the PMU about data access behaviors over the memory hierarchy. Thus, we propose MaPHeA, a lightweight Memory hierarchy-aware Profile-guided Heap Allocation framework applicable to both HPC and embedded systems. MaPHeA guides and applies the optimized allocation of dynamically allocated heap objects with very low profiling overhead and without additional user intervention to improve application performance. To demonstrate the effectiveness of MaPHeA, we apply it to optimizing heap object allocation in an emerging DRAM-NVM heterogeneous memory system (HMS), selective huge-page utilization, and controlling the cacheability of the objects with the low temporal locality. In an HMS, by identifying and placing frequently accessed heap objects to the fast DRAM region, MaPHeA improves the performance of memory-intensive graph-processing and Redis workloads by 56.0% on average over the default configuration that uses DRAM as a hardware-managed cache of slow NVM. By identifying large heap objects that cause frequent TLB misses and allocating them to huge pages, MaPHeA increases the performance of the read and update operations of Redis by 10.6% over the transparent huge-page implementation of Linux. Also, by distinguishing the objects that cause cache pollution due to their low temporal locality and applying write-combining to them, MaPHeA improves the performance of STREAM and RADIX workloads by 20.0% on average over the system without cacheability control.
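The placement decision such a framework automates can be sketched abstractly. The toy model below is an assumption for illustration, not MaPHeA's PMU-driven mechanism: given per-allocation-site access counts from a profiling run and a DRAM capacity budget, the hottest objects per byte are assigned to fast DRAM and the remainder fall back to slow NVM.

```python
from dataclasses import dataclass

@dataclass
class AllocationSite:
    site: str            # e.g. a source file and line taken from the profile
    size_bytes: int
    accesses: int        # sampled memory accesses attributed to this site

def plan_placement(sites: list[AllocationSite], dram_budget_bytes: int) -> dict[str, str]:
    """Greedy knapsack-style placement: rank sites by accesses per byte and fill
    the DRAM budget with the hottest ones; everything else goes to NVM."""
    plan, used = {}, 0
    for s in sorted(sites, key=lambda s: s.accesses / s.size_bytes, reverse=True):
        if used + s.size_bytes <= dram_budget_bytes:
            plan[s.site] = "DRAM"
            used += s.size_bytes
        else:
            plan[s.site] = "NVM"
    return plan

if __name__ == "__main__":
    # Hypothetical profile: two hot graph buffers and one large, rarely touched buffer.
    profile = [
        AllocationSite("graph.c:42",  size_bytes=8 << 30,  accesses=9_000_000),
        AllocationSite("graph.c:77",  size_bytes=2 << 30,  accesses=7_000_000),
        AllocationSite("buffer.c:15", size_bytes=16 << 30, accesses=50_000),
    ]
    print(plan_placement(profile, dram_budget_bytes=10 << 30))
```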
APA, Harvard, Vancouver, ISO, and other styles
48

Constance, Matshidiso Lelaka. "Lessons learnt and challenges from implementing Safer Conception Services (SCS) in high HIV burden and resource-limited settings: Narratives from Healthcare Providers (HCP)." Peace and Conflict 30, no. 2 (2024). https://doi.org/10.5281/zenodo.13120122.

Full text
Abstract:
Introduction: Couples infected and affected have fertility desires and intentions to become pregnant. Safer Conception Services (SCS) have been promoted as an HIV combination prevention intervention to reduce HIV exposure to uninfected partners and prevent mother-to-child transmission at the very early stages of gestation. To successfully introduce, integrate, promote, and optimize the service delivery of safer conception services in resource-limited primary healthcare settings, it is crucial to understand the lessons learnt and challenges faced by healthcare providers. Purpose: The purpose of the study was to explore the lessons learnt and challenges from the implementation of SCS in high HIV burden and resource-limited settings. Material and Methods: This was a qualitative study and part of an implementation study; data were collected from six facility healthcare managers who were purposively selected. Face-to-face in-depth interviews were conducted using a semi-structured guide between November 2017 and September 2018. Data were analysed using thematic analysis. Results: Four themes emerged: thoughts on the introduction of SCS/perceptions regarding the introduction of SCS, challenges faced by HCPs during the implementation of SCS, recommendations by healthcare providers, and monitoring and evaluation systems. Conclusions: The results revealed high acceptability and support of SCS reported by HCPs and patients. To enhance and strengthen the SCS, there is a need for continuous and ongoing support and training on diverse topics in SCS for all staff members regardless of their levels. To reduce burnout, lengthy services and workload, additional staff members should be employed. Furthermore, services should be streamlined, administration of the relevant documents should be strengthened, and available guidelines should be translated into training programmes and more proactive policies to support scale-up of this essential service.
APA, Harvard, Vancouver, ISO, and other styles
49

Biundo, Eliana, Alan Burke, Sarah C. Rosemas, David Lanctin, and Emmanuelle Nicolle. "Abstract 286: Clinic Time Required To Manage Cardiac Implantable Electronic Device Patients: A Time And Motion Workflow Evaluation." Circulation: Cardiovascular Quality and Outcomes 13, Suppl_1 (2020). http://dx.doi.org/10.1161/hcq.13.suppl_1.286.

Full text
Abstract:
Background: The number of patients with cardiac implantable electronic devices (CIEDs) is growing, creating workload for device clinics to manage this population. Remote monitoring of CIED patients is a guidelines-recommended method for optimizing treatment in CIED patients in combination with in-person follow-up. However, the specific steps involved in CIED management, as well as the HCP time required for these activities, are not well understood. The aim of this study was to quantify the clinic staff time requirements associated with the remote and in-person management of CIED patients. Methods: A time and motion workflow evaluation was performed in 6 U.S. CIED clinics. Participating clinics manage an average of 4,217 (range: 870-10,336) CIED patients. The duration of each task involved in CIED management was repeatedly timed, for all device models/manufacturers, during one business week (5 days) of observation at each clinic. Mean time for review of a remote transmission and for an in-person clinic visit were calculated, including all clinical and administrative (e.g., scheduling, documentation) activities related to the encounter. Annual staff time (inclusive of all clinical and administrative staff) for follow-up of 1 CIED patient was modeled using device transmission data for the 6 clinics, clinical guidelines for CIED follow-up, and published literature (Table 1). Results: During 6 total weeks of data collection, 124 in-person clinic visits and 1,374 remote transmission review activities were observed and measured. On average, the total staff time required per remote transmission ranged from 11.9-13.5 minutes (depending on the CIED type), and time per in-person visit ranged from 43.4-51.0 minutes. Including all remote and in-person follow-ups, the estimated total staff time per year to manage one Pacemaker, ICD, CRT, and ICM patient was 2.3, 2.4, 2.4, and 9.3 hours, respectively. Conclusion: CIED patient management workflow is complex and requires significant staff time in cardiac device clinics. Remote monitoring is an efficient complement for in-office visits, allowing for continuous follow-up of patients with reduced staff time required per device check. Future research should examine heterogeneity in patient management processes to identify the most efficient workflow.
APA, Harvard, Vancouver, ISO, and other styles
50

Horwood, Jeremy, Emer Brangan, Petra Manley, et al. "Management of chlamydia and gonorrhoea infections diagnosed in primary care using a centralised nurse-led telephone-based service: mixed methods evaluation." BMC Family Practice 21, no. 1 (2020). http://dx.doi.org/10.1186/s12875-020-01329-0.

Full text
Abstract:
Background Up to 18% of genital Chlamydia infections and 9% of Gonorrhoea infections in England are diagnosed in Primary Care. Evidence suggests that a substantial proportion of these cases are not managed appropriately in line with national guidelines. With the increase in sexually transmitted infections and the emergence of antimicrobial resistance, their timely and appropriate treatment is a priority. We investigated feasibility and acceptability of extending the National Chlamydia Screening Programme’s centralised, nurse-led, telephone management (NLTM) as an option for management of all cases of chlamydia and gonorrhoea diagnosed in Primary Care. Methods Randomised feasibility trial in 11 practices in Bristol with nested qualitative study. In intervention practices patients and health care providers (HCPs) had the option of choosing NLTM or usual care for all patients tested for Chlamydia and Gonorrhoea. In control practices patients received usual care. Results One thousand one hundred fifty-four Chlamydia/gonorrhoea tests took place during the 6-month study, with a chlamydia positivity rate of 2.6% and gonorrhoea positivity rate of 0.8%. The NLTM managed 335 patients. Interviews were conducted with sixteen HCPs (11 GPs, 5 nurses) and 12 patients (8 female). HCPs were positive about the NLTM and welcomed the partner notification service, though they requested more timely feedback on the management of their patients. Explaining the NLTM to patients didn’t negatively impact on consultations. Patients found the NLTM acceptable and more convenient, and felt it provided greater anonymity than usual care. Patients appreciated getting a text message regarding a negative result and valued talking to a sexual health specialist about positive results. Conclusion Extension of this established NLTM intervention to a greater proportion of patients was both feasible and acceptable to patients and HCPs, and could provide a better service for patients whilst decreasing primary care workload. The study provides evidence to support the wider implementation of this NLTM approach to managing chlamydia and gonorrhoea diagnosed in primary care.
APA, Harvard, Vancouver, ISO, and other styles
