Academic literature on the topic 'Workload modeling and performance evaluation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Workload modeling and performance evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Workload modeling and performance evaluation":

1

Khan, Subayal, Jukka Saastamoinen, Jyrki Huusko, Juha-Pekka Soininen, and Jari Nurmi. "Application Workload Modelling via Run-Time Performance Statistics." International Journal of Embedded and Real-Time Communication Systems 4, no. 2 (April 2013): 1–35. http://dx.doi.org/10.4018/jertcs.2013040101.

Abstract:
Modern nomadic mobile devices, for example internet tablets and high-end mobile phones, support diverse distributed and stand-alone applications that were supported by separate devices a decade ago. Furthermore, the complex heterogeneous platforms supporting these applications contain multi-core processors, hardware accelerators, and IP cores, all of which can be integrated into a single integrated circuit (chip). The high complexity of both the platform and the applications makes the design space very large due to the availability of several alternatives. Therefore the system designer must be able to quickly evaluate the performance of different application architectures and implementations on potential platforms. The most popular technique employed nowadays is system-level performance evaluation, which uses abstract workload and platform capacity models. Because the platform capacity models and application workload models reside at a higher abstraction level, they can be instantiated with reduced modeling effort and simulated at a higher speed. This article presents a novel run-time-statistics-based application workload model extraction and platform configuration technique, called platform COnfiguration and woRkload generatIoN via code instrumeNtation and performAnce counters (CORINNA), which offers several advantages over the compiler-based technique ABSINTH and also provides automatic configuration of the platform processor models, for example with the cache hits and misses obtained during application execution.
2

Teo, Grace, Gerald Matthews, Lauren Reinerman-Jones, and Daniel Barber. "Adaptive aiding with an individualized workload model based on psychophysiological measures." Human-Intelligent Systems Integration 2, no. 1-4 (November 28, 2019): 1–15. http://dx.doi.org/10.1007/s42454-019-00005-8.

Abstract:
Potential benefits of technology such as automation are oftentimes negated by improper use and application. Adaptive systems provide a means to calibrate the use of technological aids to the operator’s state, such as workload state, which can change throughout the course of a task. Such systems require a workload model which detects workload and specifies the level at which aid should be rendered. Workload models that use psychophysiological measures have the advantage of detecting workload continuously and relatively unobtrusively, although the inter-individual variability in psychophysiological responses to workload is a major challenge for many models. This study describes an approach to workload modeling with multiple psychophysiological measures that was generalizable across individuals, and yet accommodated inter-individual variability. Under this approach, several novel algorithms were formulated. Each of these underwent a process of evaluation which included comparisons of the algorithm’s performance to an at-chance level, and assessment of algorithm robustness. Further evaluations involved the sensitivity of the shortlisted algorithms at various threshold values for triggering an adaptive aid.
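The evaluation process this abstract describes, comparing each candidate algorithm's detection performance to an at-chance level and sweeping the trigger threshold, can be illustrated with a small sketch. Everything below is hypothetical: the fused psychophysiological index, the synthetic data, and the threshold values are stand-ins, not the paper's algorithms or measures.

```python
import random

def threshold_accuracy(index, labels, threshold):
    """Classify an epoch as high workload when the fused index exceeds
    `threshold`; return the fraction of epochs classified correctly."""
    hits = sum((x > threshold) == y for x, y in zip(index, labels))
    return hits / len(labels)

# Synthetic illustration: high-workload epochs shift a (hypothetical)
# fused psychophysiological index upward by one unit on average.
rng = random.Random(0)
labels = [rng.random() < 0.5 for _ in range(1000)]   # True = high workload
index = [rng.gauss(1.0 if y else 0.0, 0.5) for y in labels]

chance = 0.5  # at-chance baseline for balanced binary classes
sweep = {th: threshold_accuracy(index, labels, th) for th in (0.0, 0.5, 1.0)}
```

Sweeping the threshold mirrors the sensitivity analysis in the abstract: each value trades misses against false alarms, and only thresholds whose accuracy clears the at-chance baseline would be shortlisted.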
3

Jeong, Heejin, and Yili Liu. "Development and Evaluation of a Computational Human Performance Model of In-vehicle Manual and Speech Interactions." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 1642. http://dx.doi.org/10.1177/1541931218621372.

Abstract:
Usability evaluation traditionally relies on costly and time-consuming human-subject experiments, which typically involve developing physical prototypes, designing usability experiments, and recruiting human subjects. To minimize the limitations of human-subject experiments, computational human performance models can be used as an alternative. Human performance models generate digital simulations of human performance and examine the underlying psychological and physiological mechanisms to help understand and predict human performance. A variety of in-vehicle information systems (IVISs) using advanced automotive technologies have been developed to improve driver interactions with the in-vehicle systems. Numerous studies have used human subjects to evaluate in-vehicle human-system interactions; however, there are few modeling studies that estimate and simulate human performance, especially for in-vehicle manual and speech interactions. This paper presents a computational human performance modeling study for a usability test of IVISs using manual and speech interactions. Specifically, the model was designed to generate digital simulations of human performance for a driver seat adjustment task to decrease the comfort level of a part of the driver seat (i.e., the lower lumbar), using three different IVIS controls: direct-manual, indirect-manual, and voice controls. The direct-manual control is an input method to press buttons on the touchscreen display located on the center stack in the vehicle. The indirect-manual control is to press physical buttons mounted on the steering wheel to control a small display in the dashboard cluster, which requires confirming visual feedback on the cluster display located on the dashboard. The voice control is to say a voice command, “deflate lower lumbar,” through an in-vehicle speaker.
The model was developed to estimate task completion time and workload for the driver seat adjustment task, using the Queueing Network cognitive architecture (Liu, Feyen, & Tsimhoni, 2006). Processing times in the model were recorded every 50 msec and used as the estimates of task completion time. The estimated workload was measured by the percentage utilization of servers used in the architecture. After the model was developed, it was evaluated using an empirical data set of thirty-five human subjects from Chen, Tonshal, Rankin, & Feng (2016), in which the task completion times for the driver seat adjustment task using commercial in-vehicle systems (i.e., SYNC with MyFord Touch) were recorded. Driver workload was measured by NASA’s task load index (TLX). The average of the values from the NASA-TLX’s six categories was compared to the model’s estimated workload. The model produced results similar to actual human performance (i.e., task completion time, workload). The real-world engineering example presented in this study contributes to the literature of computational human performance modeling research.
4

Haverkort, Boudewijn R. "Performability Evaluation of Fault-Tolerant Computer Systems Using DYQNTOOL+." International Journal of Reliability, Quality and Safety Engineering 02, no. 04 (December 1995): 383–404. http://dx.doi.org/10.1142/s0218539395000277.

Abstract:
For fault-tolerant computer systems (FTCS) supporting critical applications, it is of key importance to be able to answer the question of whether they indeed fulfill the quality of service requirements of their users. In particular, answers related to the combined performance and dependability of the FTCS are important. To facilitate these so-called performability studies, we present DYQNTOOL+, a performability evaluation tool based on the dynamic queuing network concept that allows for combined modeling of system performance and dependability. Different from other performability evaluation tools, DYQNTOOL+ combines two different modeling paradigms, namely queuing networks and stochastic Petri nets, for the performance and dependability aspects of the system under study, respectively. The mutual relations between these two model parts, such as workload-induced failures and performance decreases due to failures, are explicitly modeled as well. This combination of modeling paradigms allows the modeling to be done in greater detail, thereby often revealing system behavior that cannot be revealed otherwise. We present the dynamic queuing network modeling approach and its implementation in DYQNTOOL+, and illustrate its usage with a number of examples.
5

Xu, Rongbing, and Shi Cao. "Modeling Pilot Flight Performance in a Cognitive Architecture: Model Demonstration." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 1254–58. http://dx.doi.org/10.1177/1071181321651008.

Abstract:
Cognitive architecture models can support the simulation and prediction of human performance in complex human-machine systems. In the current work, we demonstrate a pilot model that can perform and simulate taxiing and takeoff tasks. The model was built in the Queueing Network-Adaptive Control of Thought Rational (QN-ACTR) cognitive architecture and can be connected to flight simulators such as X-Plane to generate various data, including performance, mental workload, and situation awareness. The model results are determined jointly by the declarative knowledge chunks, production rules, and a set of parameters. Currently, the model can generate flight operation behavior similar to that of human pilots. We will collect human pilot data to further examine and validate model assumptions and parameter values. Once validated, such models can support interface evaluation and competency-based pilot training, providing a theory-based predictive approach complementary to human-in-the-loop experiments for aviation research and development.
6

Yadav, Rajeev Ranjan, Gleidson A. S. Campos, Erica Teixeira Gomes Sousa, and Fernando Aires Lins. "A Strategy for Performance Evaluation and Modeling of Cloud Computing Services." Revista de Informática Teórica e Aplicada 26, no. 1 (April 14, 2019): 78. http://dx.doi.org/10.22456/2175-2745.87511.

Abstract:
On-demand services and reduced costs have made cloud computing a popular mechanism for providing scalable resources according to users’ expectations. This paradigm plays an important role in business and academic organizations, supporting applications and services deployed in virtual machines and containers, two different virtualization technologies. Cloud environments can support workloads generated by large numbers of users, who request that the cloud environment execute transactions; its performance should be evaluated and estimated in order to satisfy clients when cloud services are offered. This work proposes a performance evaluation strategy composed of a performance model and a methodology for evaluating the performance of services configured in virtual machines and containers in cloud infrastructures. The performance model for the evaluation of virtual machines and containers in cloud infrastructures is based on stochastic Petri nets. A case study in a real public cloud is presented to illustrate the feasibility of the performance evaluation strategy. The case study experiments were performed with virtual machines and containers supporting workloads related to social network transactions.
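The performance model above is built on stochastic Petri nets. As a rough sketch of the mechanics only (not the paper's model), the net below has one place holding queued transactions, an 'arrive' and a 'serve' transition with exponentially distributed firing delays, and a race policy where the enabled transition with the smallest sampled delay fires; the rates are invented for illustration.

```python
import random

def simulate_spn(horizon, lam=0.8, mu=1.0, seed=0):
    """Simulate a two-transition stochastic Petri net: 'arrive' (rate lam)
    deposits a token in place 'queue'; 'serve' (rate mu) is enabled only
    when 'queue' is marked, and moves a token to 'done'. Firing delays are
    exponential; the transition with the smallest delay fires (race policy).
    Returns the fraction of time the service place was busy."""
    rng = random.Random(seed)
    marking = {"queue": 0, "done": 0}
    t, busy_time = 0.0, 0.0
    while t < horizon:
        delays = {"arrive": rng.expovariate(lam)}
        if marking["queue"] > 0:            # 'serve' enabled only if marked
            delays["serve"] = rng.expovariate(mu)
        fired = min(delays, key=delays.get)
        dt = delays[fired]
        if marking["queue"] > 0:            # accumulate busy time
            busy_time += min(dt, horizon - t)
        t += dt
        if fired == "arrive":
            marking["queue"] += 1
        else:
            marking["queue"] -= 1
            marking["done"] += 1
    return busy_time / horizon
```

With these rates the net behaves like an M/M/1 queue, so the measured utilization should settle near lam/mu = 0.8; real SPN tools add immediate transitions, inhibitor arcs, and reward structures on top of this core loop.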
7

Bracken, Bethany, Noa Palmon, Lee Kellogg, Seth Elkin-Frankston, and Michael Farry. "A Cross-Domain Approach to Designing an Unobtrusive System to Assess Human State and Predict Upcoming Performance Deficits." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (September 2016): 707–11. http://dx.doi.org/10.1177/1541931213601162.

Abstract:
Many work environments are fraught with highly variable demands on cognitive workload, fluctuating from periods of high operational demand, to the point of cognitive overload, to long periods of low workload bordering on boredom. When cognitive workload is not in an optimal range, at either end of the spectrum, it can be detrimental to situational awareness and operational readiness, resulting in impaired cognitive functioning (Yerkes and Dodson, 1908). An unobtrusive system that assesses the state of the human operator (e.g., stress, cognitive workload) and predicts upcoming performance deficits could warn operators when steps should be taken to augment cognitive readiness. This system would also be useful during testing and evaluation (T&E), when new tools and systems are being evaluated for operational use. T&E researchers could accurately evaluate the cognitive and physical demands of these new tools and systems, and the effects they will have on task performance and accuracy. In this paper, we describe an approach to designing such a system that is applicable across environments. First, a suite of sensors is used to perform real-time synchronous data collection in a robust and unobtrusive fashion, and provide a holistic assessment of operators. Second, the best combination of indicators of operator state is extracted, fused, and interpreted. Third, performance deficits are comprehensively predicted, optimizing the likelihood of mission success. Finally, the data are displayed in a way that supports the information requirements of any user. The approach described here is one we have successfully used in several projects, including modeling cognitive workload in the context of high-tempo, physically demanding environments, and modeling individual and team workload, stress, engagement, and performance while working together on a computerized task.
We believe this approach is widely applicable and useful across domains to dramatically improve the mission readiness of human operators, and will improve the design and development of tools available to assist the operator in carrying out mission objectives. A system designed using this approach could enable crew to be aware of impending deficits to aid in augmenting mission performance, and will enable more effective T&E by measuring workload in response to new tools and systems while they are being designed and developed, rather than once they are deployed.
8

Roth, Tamara, Franz-Josef Scharfenberg, Julia Mierdel, and Franz X. Bogner. "Self-evaluative Scientific Modeling in an Outreach Gene Technology Laboratory." Journal of Science Education and Technology 29, no. 6 (August 12, 2020): 725–39. http://dx.doi.org/10.1007/s10956-020-09848-2.

Abstract:
The integration of scientific modeling into science teaching is key to the development of students’ understanding of complex scientific phenomena, such as genetics. With this in mind, we conducted an introductory hands-on module during an outreach gene technology laboratory on the structure of DNA. Our module examined the influence of two model evaluation variants on cognitive achievement: evaluation 1, based on students’ hand-drawn sketches of DNA models and two open questions, and evaluation 2, based on students’ own evaluations of their models in comparison to a commercially available DNA model. We subsequently subdivided our sample (N = 296) into modellers-1 (n = 151) and modellers-2 (n = 145). Analyses of cognitive achievement revealed that modellers-2 achieved higher scores than modellers-1. In both cases, low achievers, in particular, benefitted from participation. Assessment of modellers-2 self-evaluation sheets revealed differences between self-evaluation and independent reassessment, as non-existent model features were tagged as correct whereas existent features were not identified. Correlation analyses between the models’ assessment scores and cognitive achievement revealed small-to-medium correlations. Consequently, our evaluation-2 phase impacted students’ performance in overall and model-related cognitive achievement, attesting to the value of our module as a means to integrate real scientific practices into science teaching. Although it may increase the workload for science teachers, we find that the potential scientific modeling holds as an inquiry-based learning strategy is worth the effort.
9

Mechalikh, Charafeddine, Hajer Taktak, and Faouzi Moussa. "PureEdgeSim: A simulation framework for performance evaluation of cloud, edge and mist computing environments." Computer Science and Information Systems, no. 00 (2020): 42. http://dx.doi.org/10.2298/csis200301042m.

Abstract:
Edge and Mist Computing are two emerging paradigms that aim to reduce latency and the Cloud workload by bringing its applications close to the Internet of Things (IoT) devices. In such complex environments, simulation makes it possible to evaluate the adopted strategies before their deployment on a real distributed system. However, despite the research advancement in this area, simulation tools are lacking, especially in the case of Mist Computing [11], where heterogeneous and constrained devices cooperate and share their resources. Motivated by this, in this paper, we present PureEdgeSim, a simulation toolkit that enables the simulation of Cloud, Edge, and Mist Computing environments and the evaluation of the adopted resources management strategies, in terms of delays, energy consumption, resources utilization, and tasks success rate. To show its capabilities, we introduce a case study, in which we evaluate the different architectures, orchestration algorithms, and the impact of offloading criteria. The simulation results show the effectiveness of PureEdgeSim in modeling such complex and dynamic environments.
10

Srinivas, Sharan, Roland Paul Nazareth, and Md Shoriat Ullah. "Modeling and analysis of business process reengineering strategies for improving emergency department efficiency." SIMULATION 97, no. 1 (October 8, 2020): 3–18. http://dx.doi.org/10.1177/0037549720957722.

Abstract:
Emergency departments (ED) in the USA treat 136.9 million cases annually and account for nearly half of all medical care delivered. Due to high demand and limited resource (such as doctors and beds) availability, the median waiting time for ED patients is around 90 minutes. This research is motivated by a real-life case study of an ED located in central Missouri, USA, which faces the problem of congestion and improper workload distribution (e.g., overburdened ED doctors). The objective of this paper is to minimize patient waiting time and efficiently allocate workload among resources in an economical manner. The systematic framework of Business Process Reengineering (BPR), along with discrete-event simulation modeling approach, is employed to analyze current operations and potential improvement strategies. Alternative scenarios pertaining to process change, workforce planning, and capacity expansion are proposed. Besides process performance measures (waiting time and resource utilization), other criteria, such as responsiveness, cost of adoption, and associated risk, are also considered for evaluating an alternative. The experimental analysis indicates that a change in the triage process (evenly distributing medium-acuity patients among doctors and mid-level providers) is economical, easy to implement, reduces physician workload, and improves average waiting time by 20%, thereby making it attractive for short-term adoption. On the other hand, optimizing the workforce level based on historical demand patterns while adopting a triage process change delivers the best performance (84% reduction in waiting time and balanced resource utilization), and is recommended as a long-term solution.
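A minimal sketch of the discrete-event simulation technique the study applies (not its calibrated ED model): an M/M/c queue in which patients arrive at rate `arrival_rate`, `servers` doctors each treat at rate `service_rate`, and the clock advances from event to event via a priority queue. All parameter values are illustrative assumptions.

```python
import heapq
import random

def simulate_queue(arrival_rate, service_rate, servers, horizon, seed=0):
    """Discrete-event simulation of an M/M/c queue; returns the average
    patient waiting time (from arrival until treatment starts)."""
    rng = random.Random(seed)
    free = servers        # idle doctors
    waiting = []          # arrival times of queued patients (FIFO)
    waits = []            # realized waiting times
    events = [(rng.expovariate(arrival_rate), 0)]  # (time, 0=arrival, 1=departure)
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == 0:                             # patient arrives
            if free > 0:                          # a doctor is idle: treat now
                free -= 1
                waits.append(0.0)
                heapq.heappush(events, (t + rng.expovariate(service_rate), 1))
            else:
                waiting.append(t)                 # otherwise join the queue
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), 0))
        else:                                     # treatment finishes
            if waiting:                           # next patient starts at once
                waits.append(t - waiting.pop(0))
                heapq.heappush(events, (t + rng.expovariate(service_rate), 1))
            else:
                free += 1
    return sum(waits) / len(waits) if waits else 0.0
```

Reengineering scenarios like those in the paper can then be compared by re-running the model with a changed routing rule or staffing level, for example the same arrival stream served by three versus six doctors.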

Dissertations / Theses on the topic "Workload modeling and performance evaluation":

1

Ali-Eldin, Hassan Ahmed. "Workload characterization, controller design and performance evaluation for cloud capacity autoscaling." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-108398.

Abstract:
This thesis studies cloud capacity auto-scaling, or how to provision and release resources to a service running in the cloud based on its actual demand, using an automatic controller. As the performance of server systems depends on the system design, the system implementation, and the workloads the system is subjected to, we focus on these aspects with respect to designing auto-scaling algorithms. Towards this goal, we design and implement two auto-scaling algorithms for cloud infrastructures. The algorithms predict the future load for an application running in the cloud. We discuss the different approaches to designing an auto-scaler combining reactive and proactive control methods, and to handling long-running requests, e.g., tasks running for longer than the actuation interval, in a cloud. We compare the performance of our algorithms with state-of-the-art auto-scalers and evaluate the controllers' performance with a set of workloads. As any controller is designed with an assumption on the operating conditions and system dynamics, the performance of an auto-scaler varies with different workloads.

In order to better understand the workload dynamics and evolution, we analyze a 6-year-long workload trace of the sixth most popular Internet website. In addition, we analyze a workload from one of the largest Video-on-Demand streaming services in Sweden. We discuss the popularity of objects served by the two services, the spikes in the two workloads, and the invariants in the workloads. We also introduce a measure for the disorder in a workload, i.e., the amount of burstiness. The measure is based on Sample Entropy, an empirical statistic used in biomedical signal processing to characterize biomedical signals. The introduced measure can be used to characterize workloads based on their burstiness profiles. We compare our measure with the literature on quantifying burstiness in a server workload, and show the advantages of our measure.

To better understand the tradeoffs between using different auto-scalers with different workloads, we design a framework to compare auto-scalers and give probabilistic guarantees on the performance in worst-case scenarios. Using different evaluation criteria and more than 700 workload traces, we compare six state-of-the-art auto-scalers that we believe represent the development of the field in the past 8 years. Knowing that the auto-scalers' performance depends on the workloads, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components: an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business-level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider improve the QoS provided to the customers.
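The burstiness measure described in this thesis is based on Sample Entropy (SampEn). The thesis's own measure is not reproduced here; the following is a plain, unoptimized SampEn sketch using the conventional defaults m = 2 and r = 0.2 × standard deviation, which are common-practice assumptions rather than the thesis's parameter choices.

```python
import math
import random

def sample_entropy(series, m=2, r=None):
    """SampEn = -log(A/B), where B counts pairs of length-m templates whose
    pointwise distance stays within tolerance r, and A counts such pairs of
    length m+1. Higher values mean a more irregular (burstier) series."""
    n = len(series)
    if r is None:                       # conventional default tolerance
        mean = sum(series) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in series) / n)

    def matches(length):
        templates = [series[i:i + length] for i in range(n - length)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    if a == 0 or b == 0:                # too few matches to estimate
        return float("inf")
    return -math.log(a / b)

# Example: a strictly periodic series vs. a pseudo-random one.
periodic = [float(i % 8) for i in range(200)]
rng = random.Random(1)
irregular = [rng.random() for _ in range(200)]
```

A perfectly periodic load trace scores near zero while an uncorrelated one scores high, which is the property that makes SampEn usable as a burstiness profile for ranking workloads.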
2

Rusnock, Christina. "Simulation-Based Cognitive Workload Modeling and Evaluation of Adaptive Automation Invoking and Revoking Strategies." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5857.

Abstract:
In human-computer systems, such as supervisory control systems, large volumes of incoming and complex information can degrade overall system performance. Strategically integrating automation to offload tasks from the operator has been shown to increase not only human performance but also operator efficiency and safety. However, increased automation allows for increased task complexity, which can lead to high cognitive workload and degradation of situational awareness. Adaptive automation is one potential solution to resolve these issues, while maintaining the benefits of traditional automation. Adaptive automation occurs dynamically, with the quantity of automated tasks changing in real-time to meet performance or workload goals. While numerous studies evaluate the relative performance of manual and adaptive systems, little attention has focused on the implications of selecting particular invoking or revoking strategies for adaptive automation. Thus, evaluations of adaptive systems tend to focus on the relative performance among multiple systems rather than the relative performance within a system. This study takes an intra-system approach specifically evaluating the relationship between cognitive workload and situational awareness that occurs when selecting a particular invoking-revoking strategy for an adaptive system. The case scenario is a human supervisory control situation that involves a system operator who receives and interprets intelligence outputs from multiple unmanned assets, and then identifies and reports potential threats and changes in the environment. In order to investigate this relationship between workload and situational awareness, discrete event simulation (DES) is used. DES is a standard technique in the analysis of systems, and the advantage of using DES to explore this relationship is that it can represent a human-computer system as the state of the system evolves over time. 
Furthermore, and most importantly, a well-designed DES model can represent the human operators, the tasks to be performed, and the cognitive demands placed on the operators. In addition to evaluating the cognitive workload to situational awareness tradeoff, this research demonstrates that DES can quite effectively model and predict human cognitive workload, specifically for system evaluation. This research finds that the predicted workload of the DES models highly correlates with well-established subjective measures and is more predictive of cognitive workload than numerous physiological measures. This research then uses the validated DES models to explore and predict the cognitive workload impacts of adaptive automation through various invoking and revoking strategies. The study provides insights into the workload-situational awareness tradeoffs that occur when selecting particular invoking and revoking strategies. First, in order to establish an appropriate target workload range, it is necessary to account for both performance goals and the portion of the workload-performance curve for the task in question. Second, establishing an invoking threshold may require a tradeoff between workload and situational awareness, which is influenced by the task's location on the workload-situational awareness continuum. Finally, this study finds that revoking strategies differ in their ability to achieve workload and situational awareness goals. For the case scenario examined, revoking strategies based on duration are best suited to improve workload, while revoking strategies based on revoking thresholds are better for maintaining situational awareness.
Ph.D.
Doctorate
Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering
3

Kreku, J. (Jari). "Early-phase performance evaluation of computer systems using workload models and SystemC." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514299902.

Abstract:
Novel methods and tools are needed for the performance evaluation of future embedded systems due to increasing system complexity. Systems accommodate a large number of on-terminal and/or downloadable applications offering users numerous services related to telecommunication, audio and video, digital television, internet, and navigation. More flexibility, scalability, and modularity are expected from execution platforms to support applications. Digital processing architectures will evolve from the current system-on-chips to massively parallel computers consisting of heterogeneous subsystems connected by a network-on-chip. As a consequence, the overall complexity of system evaluation will increase by orders of magnitude. The ABSOLUT performance simulation approach presented in this thesis combats evaluation complexity by abstracting the functionality of the applications with workload models consisting of instruction-like primitives. Workload models can be created from application specifications, measurement results, execution traces, or the source code. The complexity of execution platform models is also reduced, since the data paths of processing elements need not be modelled in detail and data transfers and storage are simulated only from the performance point of view. The modelling approach enables early evaluation, since mature hardware or software is not required for the modelling or simulation of complete systems. ABSOLUT is applied to a number of case studies including mobile phone usage, MP3 playback, MPEG4 encoding and decoding, 3D gaming, virtual network computing, and parallel software-defined radio applications. The platforms used in the studies represent both embedded systems and personal computers, and at the same time both currently existing platforms and future designs. The results obtained from simulations are compared to measurements from real platforms, which reveals an average difference of 12% in the results. This exceeds the accuracy requirements expected from virtual-system-based simulation approaches intended for early evaluation.
Tiivistelmä (Finnish abstract): Performance evaluation of embedded computer systems is becoming increasingly challenging due to the growing complexity of the systems. Systems contain a large number of applications that offer the user services related to, for example, telecommunication, audio and video playback, internet browsing, and navigation. Consequently, ever more flexibility, scalability, and modularity are expected from execution platforms. Execution architectures are evolving from current System-on-Chip (SoC) solutions into Network-on-Chip (NoC) parallel computers consisting of heterogeneous subsystems. New methods and tools are needed for evaluating the performance of the system formed by the applications and the execution platform, so that this complexity can be managed. The ABSOLUT simulation method presented in this dissertation reduces the complexity of performance evaluation by abstracting application functionality with workload models, which consist of load primitives instead of processor instructions. Workload models can be created from application specifications, measurement results, execution traces, or application source code. For execution platforms, the ABSOLUT method uses simple capacity models instead of functional models: processor architectures are modelled at a high level, and data transfer and storage are modelled only from the performance point of view. The method enables early performance evaluation, because models can be created and simulated even before a finished application or execution platform exists. The ABSOLUT method has been used in several different case studies involving, for example, mobile phone usage, audio and video playback and recording, 3D gaming, and digital data transfer. The examples used typical execution platforms from both the personal computer and the embedded system domains; in addition, some of the examples were based on future or hypothetical platforms. Some of the simulations were verified by comparing the simulation results with measurement results obtained from real systems. An average deviation of 12 percent was observed between them, which exceeds the accuracy required of early-phase performance simulation methods.
4

Georgiou, Yiannis. "Contributions for resource and job management in high performance computing." Grenoble, 2010. http://www.theses.fr/2010GRENM079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
(Translated from the French.) The field of High Performance Computing (HPC) evolves closely with the latest technological advances in computing architectures and with ever-growing demands for computing power. This thesis studies a particular type of middleware, the Resource and Job Management System (RJMS), which is responsible for distributing computing power to applications on HPC platforms. The RJMS plays a central role because of its position in the software stack. Recent evolutions in the hardware layers and in applications have considerably increased the level of complexity this type of middleware must face. Issues such as scalability, handling of irregular activity rates, management of hardware topology constraints, energy efficiency, and fault tolerance must be taken into account in order, among other things, to provide better resource exploitation from both the system-wide point of view and that of the users. The first contribution of this thesis is a state of the art of resource and job management, together with a comparative analysis of the main current middleware systems and the associated research issues. An important metric for evaluating the contribution of an RJMS on a platform is the utilization level of the whole system. Activity traces from several platforms show that many of them exhibit a utilization rate significantly below full utilization. This observation is the main motivation for the other contributions of this thesis, which address methods for exploiting these under-utilization periods for the benefit of the overall system management or of the running applications.
More specifically, this thesis first explores means of increasing the rate of useful computation in the context of lightweight grids in the presence of high variability in the availability of computing resources. Second, we studied the case of dynamic jobs and proposed different techniques integrated into the OAR RJMS, and third, we evaluated several resource exploitation modes that take energy consumption into account. Finally, the evaluations in this thesis rely on an experimental approach for which we proposed tools and a methodology that significantly improve the control and reproducibility of the complex experiments specific to this field of study
High Performance Computing is characterized by the latest technological evolutions in computing architectures and by the increasing computing-power needs of applications. A particular middleware, the Resource and Job Management System (RJMS), is responsible for delivering computing power to applications. The RJMS plays an important role in HPC since it occupies a strategic place in the software stack, standing between the hardware and application layers. However, the latest evolutions in those layers have brought new levels of complexity to this middleware. Issues like scalability, management of topological constraints, energy efficiency and fault tolerance have to be particularly considered, among others, in order to provide better system exploitation from both the system and user points of view. This dissertation provides a state of the art on the fundamental concepts and research issues of Resource and Job Management Systems, along with a multi-level comparison (concepts, functionalities, performance) of several such systems used in High Performance Computing. An important metric for evaluating the work of an RJMS on a platform is the observed system utilization. However, studies and logs of production platforms show that HPC systems generally suffer from significant under-utilization. Our study addresses these clusters' under-utilization periods by proposing methods to aggregate otherwise unused resources for the benefit of the system or the application. More particularly, this thesis explores RJMS-level mechanisms: 1) for increasing the valuable computation rate of jobs in the highly volatile environments of a lightweight grid context, 2) for improving system utilization with malleability techniques, and 3) for providing energy-efficient system management through the exploitation of idle computing machines.
Experimentation and evaluation in this type of context involve considerable complexity, due to the interdependency of the many parameters that have to be controlled. In this thesis we have developed a methodology based upon real-scale controlled experimentation with submission of synthetic or real workload traces
5

Middlebrooks, Sam E. "The COMPASS Paradigm For The Systematic Evaluation Of U.S. Army Command And Control Systems Using Neural Network And Discrete Event Computer Simulation." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In today's technology-based society the rapid proliferation of new machines and systems that would have been undreamed of only a few short years ago has become a way of life. Developments and advances, especially in the areas of digital electronics and micro-circuitry, have spawned subsequent technology-based improvements in transportation, communications, entertainment, automation, the armed forces, and many other areas that would not have been possible otherwise. This rapid 'explosion' of new capabilities and ways of performing tasks has been motivated as often as not by the philosophy that if it is possible to make something better or work faster or be more cost effective or operate over greater distances then it must inherently be good for the human operator. Taken further, these improvements typically are envisioned to consequently produce a more efficient operating system where the human operator is an integral component. The formal concept of human-system interface design has only emerged this century as a recognized academic discipline; however, the practice of developing ideas and concepts for systems containing human operators has been in existence since humans started experiencing cognitive thought. An example of a human-system interface technology for communication and dissemination of written information that has evolved over centuries of trial-and-error development is the book. It is no accident that the form and shape of the book of today is as it is. This is because it is a shape and form readily usable by human physiology, whose optimal configuration was determined by centuries of effort and revision. This slow evolution was mirrored by a rate of technical evolution in printing and elsewhere that allowed new advances to be experimented with as part of the overall use requirement and need for the existence of the printed word and some way to contain it.
Today, however, technology is advancing at such a rapid rate that evolutionary use requirements have no chance to develop alongside the fast pace of technical progress. One result of this recognition is the establishment of disciplines like human factors engineering, which have the stated purpose of systematically determining good and bad human-system interface designs. However, other results of this phenomenon are systems that get developed and placed into public use simply because new technology allowed them to be made. This development can proceed without a full appreciation of how the system might be used and, perhaps even more significantly, what impact the use of this new system might have on the operator within it. The U.S. Army has a term for this type of activity: 'stove-piped development'. The implication of this term is that a system gets developed in isolation where the developers are only looking 'up' and not 'around'. They are thus concerned only with how this system may work or be used for its own singular purposes, as opposed to how it might be used in the larger community of existing systems and interfaces or, even more importantly, in the larger community of other new systems in concurrent development. Some of the impacts for the Army from this mode of system development are communication systems that work exactly as designed but are unable to interface with other communications systems in other domains for battlefield-wide communications capabilities. Having communications systems that cannot communicate with each other is a distinct problem in its own right. However, when developments in one industry produce products that humans use or attempt to use with products from totally separate developments or industries, the Army concept of product development resulting from stove-piped design visions can have significant implications for the operation of each system and the human operator attempting to use it.
There are many examples that would illustrate the above concept, however, one that will be explored here is the Army effort to study, understand, and optimize its command and control (C2) operations. This effort is at the heart of a change in the operational paradigm in C2 Tactical Operations Centers (TOCs) that the Army is now undergoing. For the 50 years since World War II the nature, organization, and mode of the operation of command organizations within the Army has remained virtually unchanged. Staffs have been organized on a basic four section structure and TOCs generally only operate in a totally static mode with the amount of time required to move them to keep up with a mobile battlefield going up almost exponentially from lower to higher command levels. However, current initiatives are changing all that and while new vehicles and hardware systems address individual components of the command structures to improve their operations, these initiatives do not necessarily provide the environment in which the human operator component of the overall system can function in a more effective manner. This dissertation examines C2 from a system level viewpoint using a new paradigm for systematically examining the way TOCs operate and then translating those observations into validated computer simulations using a methodological framework. This paradigm is called COmputer Modeling Paradigm And Simulation of Systems (COMPASS). COMPASS provides the ability to model TOC operations in a way that not only includes the individuals, work groups and teams in it, but also all of the other hardware and software systems and subsystems and human-system interfaces that comprise it as well as the facilities and environmental conditions that surround it. 
Most of the current literature and research in this area focuses on the concept of C2 itself and its follow-on activities of command, control, communications (C3); command, control, communications, and computers (C4); and command, control, communications, computers and intelligence (C4I). This focus tends to address the activities involved with the human processes within the overall system, such as individual and team performance and the commander's decision-making process. While the literature acknowledges the existence of the command and control system (C2S), little effort has been expended to quantify and analyze C2Ss from a systemic viewpoint. A C2S is defined as the facilities, equipment, communications, procedures, and personnel necessary to support the commander (i.e., the primary decision maker within the system) in conducting the activities of planning, directing, and controlling the battlefield within the sector of operations applicable to the system. The research in this dissertation is in two phases. The overall project incorporates sequential experimentation procedures that build on successive TOC observation events to generate an evolving data store that supports the two phases of the project. Phase I consists of the observation of heavy maneuver battalion and brigade TOCs during peacetime exercises. The term 'heavy maneuver' is used to connote main battle forces such as armored and mechanized infantry units supported by artillery, air defense, close air, engineer, and other so-called combat support elements. This type of unit comprises the main battle forces on the battlefield. It is used to refer to what is called the conventional force structure.
These observations are conducted using naturalistic observation techniques of the visible functioning of activities within the TOC and are augmented by automatic data collection of such things as analog and digital message traffic, combat reports generated by the computer simulations supporting the wargame exercise, and video and audio recordings where appropriate and available. Visible activities within the TOC include primarily the human operator functions such as message handling activities, decision-making processes and timing, coordination activities, and span of control over the battlefield. They also include environmental conditions, functional status of computer and communications systems, and levels of message traffic flows. These observations are further augmented by observer estimations of such indicators as perceived level of stress, excitement, and level of attention to the mission of the TOC personnel. In other words, every visible and available component of the C2S within the TOC is recorded for analysis. No a priori attempt is made to evaluate the potential significance of each of the activities, as their contribution may be so subtle as to be ascertainable only through statistical analysis. Each of these performance activities becomes an independent variable (IV) within the data that is compared against dependent variables (DV) identified according to the mission functions of the TOC. The DVs for the C2S are performance measures that are critical combat tasks performed by the system. Examples of critical combat tasks are 'attacking to seize an objective', 'seizure of key terrain', and 'river crossings'. A list of expected critical combat tasks has been prepared from the literature and subject matter expert (SME) input.
After the exercise is over, the success of the critical tasks attempted by the C2S during the wargame is established through evaluator assessments, if available, and/or TOC staff self-analysis and reporting as presented during after action reviews. The second part of Phase I applies datamining procedures, including neural networks, in a constrained format to analyze the data. The term constrained means that the identification of the outputs/DVs is known. The process is to identify those IVs that significantly contribute to the constrained DVs. A neural network is then constructed where each IV forms an input node and each DV forms an output node. One layer of hidden nodes is used to complete the network. The number of hidden nodes and layers is determined through iterative analysis of the network. The completed network is then trained to replicate the output conditions through iterative epoch executions. The network is then pruned to remove input nodes that do not contribute significantly to the output condition. Once the neural network is pruned through iterative executions, the resulting branches are used to develop algorithmic descriptors of the system in the form of regression-like expressions. For Phase II these algorithmic expressions are incorporated into the CoHOST discrete event computer simulation model of the C2S. The programming environment is the commercial programming language Micro Saint™ running on a PC microcomputer. An interrogation approach was developed to query these algorithms within the computer simulation to determine if they allow the simulation to reflect the activities observed in the real TOC to within an acceptable degree of accuracy.
The purpose of this dissertation is to introduce the COMPASS concept, a paradigm for developing techniques and procedures to translate as much of the performance of the entire TOC system as possible into an existing computer simulation suitable for analyses of future system configurations. The approach consists of the following steps:
- Naturalistic observation of the real system using ethnographic techniques.
- Data analysis using datamining techniques such as neural networks.
- Development of mathematical models of TOC performance activities.
- Integration of the mathematical models into the CoHOST computer simulation.
- Interrogation of the computer simulation.
- Assessment of the level of accuracy of the computer simulation.
- Validation of the process as a viable system simulation approach.
Ph. D.
6

Peña, Ortiz Raúl. "Accurate workload design for web performance evaluation." Doctoral thesis, Editorial Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/21054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
(Translated from the Spanish.) New web applications and services, increasingly popular in our daily lives, have completely changed the way users interact with the Web. In less than half a decade, the role played by users has evolved from that of mere passive consumers of information to active collaborators in the creation of the dynamic content typical of the current Web, and this trend is expected to grow and consolidate over time. This dynamic user behavior is one of the main keys to defining workloads suitable for accurately estimating the performance of web systems. Nevertheless, the intrinsic difficulty of characterizing user dynamism and applying it in a workload model means that many research works still employ workloads that are not representative of current web navigation. This doctoral thesis focuses on the characterization and reproduction, for performance evaluation studies, of a more realistic type of web workload, capable of imitating the behavior of current Web users. The state of the art in workload modeling and generation for web performance studies has several shortcomings regarding models and software applications that represent the different levels of user dynamism. This fact motivates us to propose a more accurate model and to develop a new workload generator based on it. Both proposals have been validated against a traditional web workload generation approach. To this end, a new experimental environment capable of reproducing both traditional and dynamic web workloads has been developed by integrating the proposed generator with a commonly used benchmark. This doctoral thesis also analyzes and evaluates, for the first time to the best of our knowledge, the impact of using dynamic workloads on the metrics
Peña Ortiz, R. (2013). Accurate workload design for web performance evaluation [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/21054
7

Wojciechowski, Josephine Quinn. "Validation of a Task Network Human Performance Model of Driving." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/31713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Human performance modeling (HPM) is often used to investigate systems during all phases of development. HPM was used to investigate function allocation in crews for future combat vehicles. The tasks required of the operators centered around three primary functions: commanding, gunning, and driving. In initial investigations, the driver appeared to be the crew member with the highest workload. Validation of the driver workload model (DWM) is necessary for confidence in the ability of the model to predict workload. Validation would provide mathematical evidence that the workload of driving is high and that additional tasks impact performance. This study consisted of two experiments, each measuring performance and workload while driving and attending to an auditory secondary task. The first experiment was performed with a human performance model. The second experiment replicated the same conditions in a human-in-the-loop driving simulator. The results of the two experiments were then correlated to determine if the model could predict performance and workload changes. The results of the investigation indicate that an auditory task has some impact on driving. The model is a good predictor of mental workload changes with auditory secondary tasks. However, the impact on performance predicted for secondary auditory tasks was not demonstrated in the simulator study. Frequency of the distraction was more influential on the changes in performance and workload than the demand of the distraction, at least under the conditions tested in this study. While the model's workload numbers correlate with the simulator's, using the model would require a better understanding of what the workload changes would mean in terms of performance measures.
Master of Science
8

Emeras, Joseph. "Workload Traces Analysis and Replay in Large Scale Distributed Systems." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM081/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The author did not provide an abstract in French.
High Performance Computing is preparing for the transition from the petascale to the exascale era. Distributed computing systems are already facing new scalability problems due to the increasing number of computing resources to manage. It is now necessary to study these systems in depth and to comprehend their behaviors, strengths and weaknesses in order to better build the next generation. The complexity of managing users' applications on the resources led to the analysis of the workload the platform has to support, in order to provide users an efficient service. The need for workload comprehension has led to the collection of traces from production systems and to the proposal of a standard workload format. These contributions enabled the study of numerous such traces and led to the construction of several models, based on the statistical analysis of the different workloads in the collection. Until recently, existing workload traces did not enable researchers to study the jobs' consumption of resources over time. This is now changing with the need to characterize jobs' consumption patterns. In the first part of this thesis we propose a study of existing workload traces. We then contribute an observation of cluster workloads that considers the jobs' resource consumption over time, which highlights specific and unexpected patterns in users' resource usage. Finally, we propose an extension of the former standard workload format that makes it possible to add such temporal consumption data without losing the benefit of existing work. Experimental approaches based on workload models have also served the goal of distributed systems evaluation. Existing models describe the average behavior of observed systems. However, although the study of average behaviors is essential for the understanding of distributed systems, the study of critical cases and particular scenarios is also necessary.
Such a study would give a more complete view and understanding of the performance of resource and job management. In the second part of this thesis we propose an experimental method for the performance evaluation of distributed systems based on the replay of extracts from production workload traces. These extracts, replayed in their original context, make it possible to experiment with changes to the system configuration under a live workload and to observe the results of the different configurations. Our technical contribution to this experimental approach is twofold: a first tool constructs the environment in which the experimentation takes place, and a second set of tools automates the experiment setup and replays the trace extract within its original context. Taken together, these contributions enable a better knowledge of HPC platforms. As future work, the approach proposed in this thesis will serve as a basis for further study of larger infrastructures
9

Lundin, Mikael. "Simulating the effects of mental workload on tactical and operational performance in tankcrew." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

A battle tank crew must perform many diverse tasks during a normal mission: crewmembers have to navigate, communicate, control on-board systems, and engage the enemy, to mention a few. As human processing capacity is limited, the crewmembers will find themselves in situations where task requirements, due to the number of tasks and task complexity, exceed their mental capacity. The stress that results from mental overload has documented quantitative and qualitative effects on performance; effects that could lead to mission failure.

This thesis describes a simulation of a tank crew during a mission where mental workload is a key factor in the outcome of mission performance. The thesis work has given rise to a number of results. First, conceptual models of the tank crewmembers have been developed. Mental workload is represented in these models as a behavior moderator, which can be manipulated to demonstrate and predict behavioral effects. Second, cognitive models of the tank crewmembers are implemented as Soar agents, which interact with tanks on a 3D simulated battlefield. The empirical data underlying these models were collected from experiments with tank crews, involving first-hand observations and task analyses. Afterwards, the model's behavior was verified against an a priori established behavioral pattern and successfully face-validated with two subject matter experts.

10

Kwok, Alice S. L. "Modeling, simulation, and performance evaluation of telecommunication networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0009/MQ41729.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Workload modeling and performance evaluation":

1

MacNair, Edward A. Elements of practical performance modeling. Englewood Cliffs, N.J: Prentice-Hall, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dimpsey, Robert Tod. Performance evaluation and modeling techniques for parallel processors. Urbana, Ill: Center for Reliable and High-Performance Computing, Coordinated Science Laboratory, College of Engineering, University of Illinois at Urbana-Champaign, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

International Conference on Modeling Techniques and Tools for Computer Performance Evaluation (4th 1988 Palma de Mallorca, Spain). Modeling techniques and tools for computer performance evaluation. New York: Plenum Press, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cesarini, Francesca, and Silvio Salza, eds. Database Machine Performance: Modeling Methodologies and Evaluation Strategies. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/3-540-17942-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fiondella, Lance, and Antonio Puliafito, eds. Principles of Performance and Reliability Modeling and Evaluation. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30599-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Puigjaner, Ramon, and Dominique Potier, eds. Modeling Techniques and Tools for Computer Performance Evaluation. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-0533-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Turin, William. Digital transmission systems: Performance analysis and modeling. 2nd ed. New York: McGraw-Hill, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yuan, Haiyue, Shujun Li, and Patrice Rusconi. Cognitive Modeling for Automated Human Performance Evaluation at Scale. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45704-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kobayashi, Hisashi. System modeling and analysis: Foundations of system performance evaluation. Upper Saddle River, N.J: Pearson Prentice Hall, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Turin, William. Performance analysis and modeling of digital transmission systems. New York: Kluwer Academic/Plenum Publishers, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Workload modeling and performance evaluation":

1

Feitelson, Dror G. "Workload Modeling for Performance Evaluation." In Performance Evaluation of Complex Systems: Techniques and Tools, 114–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45798-4_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Salza, S., and M. Terranova. "Chapter 4 Database workload modeling." In Database Machine Performance: Modeling Methodologies and Evaluation Strategies, 50–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/3-540-17942-9_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kobsa, Alfred, and Josef Fink. "Performance Evaluation of User Modeling Servers under Real-World Workload Conditions." In User Modeling 2003, 143–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44963-9_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kreku, Jari, Mika Hoppari, Tuomo Kestilä, Yang Qu, Juha-Pekka Soininen, and Kari Tiensyrjä. "Application Workload and SystemC Platform Modeling for Performance Evaluation." In Lecture Notes in Electrical Engineering, 131–47. Dordrecht: Springer Netherlands, 2009. http://dx.doi.org/10.1007/978-1-4020-9714-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Li, Zhen Liu, Anton Riabov, Monty Schulman, Cathy Xia, and Fan Zhang. "A Comprehensive Toolset for Workload Characterization, Performance Modeling, and Online Control." In Computer Performance Evaluation. Modelling Techniques and Tools, 63–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45232-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Braun, M., and G. Kotsis. "Interval based workload characterization for distributed systems." In Computer Performance Evaluation Modelling Techniques and Tools, 181–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0022206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oke, Adeniyi, and Rick Bunt. "Hierarchical Workload Characterization for a Busy Web Server." In Computer Performance Evaluation: Modelling Techniques and Tools, 309–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46029-2_23.

8

Smirni, Evgenia, and Daniel A. Reed. "Workload characterization of input/output intensive parallel applications." In Computer Performance Evaluation Modelling Techniques and Tools, 169–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0022205.

9

Delimitrou, Christina, Sriram Sankar, Badriddine Khessib, Kushagra Vaid, and Christos Kozyrakis. "Time and Cost-Efficient Modeling and Generation of Large-Scale TPCC/TPCE/TPCH Workloads." In Topics in Performance Evaluation, Measurement and Characterization, 146–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32627-1_11.

10

Ruf, Christian, and Peter Stütz. "Model-Driven Payload Sensor Operation Assistance for a Transport Helicopter Crew in Manned–Unmanned Teaming Missions: Assistance Realization, Modelling, and Experimental Evaluation of Mental Workload." In Engineering Psychology and Cognitive Ergonomics: Performance, Emotion and Situation Awareness, 51–63. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58472-0_5.


Conference papers on the topic "Workload modeling and performance evaluation":

1

Barnes, Taylor, Brandon Cook, Jack Deslippe, Douglas Doerfler, Brian Friesen, Yun He, Thorsten Kurth, et al. "Evaluating and Optimizing the NERSC Workload on Knights Landing." In 2016 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS). IEEE, 2016. http://dx.doi.org/10.1109/pmbs.2016.010.

2

Volovich, Konstantin. "Estimation of the Workload of a Hybrid Computing Cluster in Tasks of Modeling in Materials Science." In Mathematical modeling in materials science of electronic component. LLC MAKS Press, 2020. http://dx.doi.org/10.29003/m1511.mmmsec-2020/30-33.

Abstract:
The article presents methods for calculating and evaluating the operational effectiveness of hybrid computing systems. It proposes a method for calculating the workload using peak values of cluster performance, and analyzes the results and quality of cloud-based scientific services for high-performance computing using the roofline model.
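The roofline model mentioned in this abstract bounds attainable performance by the lesser of the compute ceiling and memory bandwidth times arithmetic intensity. A minimal sketch of that formula, with hypothetical node figures (the peak values below are assumptions for illustration, not numbers from the article):

```python
def roofline_attainable_gflops(peak_gflops, peak_bandwidth_gbs, arithmetic_intensity):
    """Attainable performance under the roofline model:
    min(compute ceiling, memory-bandwidth ceiling * operational intensity)."""
    return min(peak_gflops, peak_bandwidth_gbs * arithmetic_intensity)

# Hypothetical cluster node: 1000 GFLOP/s peak compute, 100 GB/s memory bandwidth.
# A kernel at 2 FLOP/byte is memory-bound (capped at 200 GFLOP/s);
# a kernel at 20 FLOP/byte hits the compute ceiling (1000 GFLOP/s).
print(roofline_attainable_gflops(1000, 100, 2))   # 200
print(roofline_attainable_gflops(1000, 100, 20))  # 1000
```

Comparing a workload's measured throughput against this bound is what lets such an evaluation distinguish memory-bound from compute-bound cluster utilization.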
3

Somashekhar, Karoor, and B. Eswara Reddy. "Performance Evaluation of Multi-Tier Application by using the Comprehensive Workload Modelling in the Cloud." In 2021 5th International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2021. http://dx.doi.org/10.1109/iccmc51019.2021.9418275.

4

Porter, Donald E., and Emmett Witchel. "Modeling transactional memory workload performance." In the 15th ACM SIGPLAN symposium. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1693453.1693508.

5

Adams, Evans J. "Workload models for DBMS performance evaluation." In the 1985 ACM thirteenth annual conference. New York, New York, USA: ACM Press, 1985. http://dx.doi.org/10.1145/320599.320676.

6

Van Ertvelde, Luk, and Lieven Eeckhout. "Workload generation for microprocessor performance evaluation." In the third joint WOSP/SIPEW international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2188286.2188313.

7

Leiden, Kenneth, Jill Kamienski, and Parimal Kopardekar. "Human Performance Modeling to Predict Controller Workload." In AIAA 5th ATIO and 16th Lighter-Than-Air Sys Tech. and Balloon Systems Conferences. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2005. http://dx.doi.org/10.2514/6.2005-7378.

8

Lutteroth, Christof, and Gerald Weber. "Modeling a Realistic Workload for Performance Testing." In 2008 12th International IEEE Enterprise Distributed Object Computing Conference (EDOC). IEEE, 2008. http://dx.doi.org/10.1109/edoc.2008.40.

9

Karpenko, Dmytro, Roman Vitenberg, and Alexander L. Read. "ATLAS grid workload on NDGF resources: Analysis, modeling, and workload generation." In 2012 SC - International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2012. http://dx.doi.org/10.1109/sc.2012.21.

10

Hebbar, Ranjan, and Aleksandar Milenković. "An Experimental Evaluation of Workload Driven DVFS." In ICPE '21: ACM/SPEC International Conference on Performance Engineering. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3447545.3451192.


Reports on the topic "Workload modeling and performance evaluation":

1

Nechvatal, James. On the performance evaluation and analytic modeling of shared-memory computers. Gaithersburg, MD: National Bureau of Standards, 1988. http://dx.doi.org/10.6028/nist.ir.88-3857.

2

Middlebrooks, Sam E., Beverly G. Knapp, B. Diane Barnette, Cheryl A. Bird, and Joyce M. Johnson. CoHOST (Computer Modeling of Human Operator System Tasks) Computer Simulation Models to Investigate Human Performance Task and Workload Conditions in a U.S. Army Heavy Maneuver Battalion Tactical Operations Center. Fort Belvoir, VA: Defense Technical Information Center, August 1999. http://dx.doi.org/10.21236/ada368587.

3

Wallace, Sean, Scott Lux, Constandinos Mitsingas, Irene Andsager, and Tapan Patel. Performance testing and modeling of a transpired ventilation preheat solar wall: performance evaluation of facilities at Fort Drum, NY, and Kansas Air National Guard, Topeka, KS. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42000.

Abstract:
This work performed measurement and verification of installed, operational solar wall systems at Fort Drum, NY, and Forbes Field, Air National Guard, Topeka, KS. Actual annual savings were compared with estimated savings generated by a solar wall modeling tool (RETScreen). A comparison with the RETScreen modeling tool shows that the measured actively heated air provided by the solar wall delivers 57% more heat than the RETScreen tool predicted, after accounting for boiler efficiency. The solar wall at Fort Drum yields a net savings of $851/yr, for a simple payback of 146 years and a SIR of 0.16. RETScreen models indicate that the solar wall system at Forbes Field, Kansas Air National Guard, Topeka, KS saves $9,350/yr, for a simple payback of 58.8 years and a SIR of 0.34. Although results showed that, due to low natural gas prices, the Fort Drum system was not economically viable, it was recommended that the system still be used to meet renewable energy and fossil fuel reduction goals. The current system becomes economical (SIR 1.00) at a natural gas rate of $16.00/MMBTU or $1.60/therm.
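The payback figures quoted in this abstract follow the standard simple-payback arithmetic: installed cost divided by annual savings, with no discounting. A minimal sketch, where the installed costs are back-solved from the quoted savings and payback figures (an inference for illustration, not numbers stated in the report):

```python
def simple_payback_years(installed_cost, annual_savings):
    """Simple payback period: installed cost / annual savings (undiscounted)."""
    return installed_cost / annual_savings

# Implied installed costs, back-solved as savings x payback from the abstract.
fort_drum_cost = 851 * 146          # implied ~$124,246
forbes_field_cost = 9350 * 58.8     # implied ~$549,780

print(simple_payback_years(fort_drum_cost, 851))                 # 146.0
print(round(simple_payback_years(forbes_field_cost, 9350), 1))   # 58.8
```

The savings-to-investment ratio (SIR) used in the same abstract is a related metric: lifetime discounted savings over installed cost, with SIR ≥ 1.00 marking the break-even point.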
4

Barnard, J. C., and H. L. Wegley. A preliminary evaluation of the performance of wind tunnel and numerical modeling simulations of the wind flow over a wind farm. Office of Scientific and Technical Information (OSTI), January 1987. http://dx.doi.org/10.2172/6983532.

5

Barclay Jones. Modeling and Thermal Performance Evaluation of Porous Crud Layers in Sub-Cooled Boiling Region of PWRs and Effects of Sub-Cooled Nucleate Boiling on Anomalous Porous Crud Deposition on Fuel Pin Surfaces. Office of Scientific and Technical Information (OSTI), June 2005. http://dx.doi.org/10.2172/841238.

6

Warrick, Arthur, Uri Shani, Dani Or, and Muluneh Yitayew. In situ Evaluation of Unsaturated Hydraulic Properties Using Subsurface Points. United States Department of Agriculture, October 1999. http://dx.doi.org/10.32747/1999.7570566.bard.

Abstract:
The primary information for accurately predicting water and solute movement and their impact on water quality is the characterization of soil hydraulic properties. This project was designed to develop methods for rapid and reliable estimates of unsaturated hydraulic properties of the soil. In particular, in situ methodology is put forth, based on subsurface point sources. Devices were designed to allow introduction of water in subsurface settings at constant negative heads. The ability to operate at a negative head allows a direct method of finding unsaturated soil properties and a mechanism for eliminating extremely rapid preferential flow from the slow matrix flow. The project included field, laboratory and modeling components. By coupling the measurements and the modeling together, a wider range of designs can be examined, while at the same time realistic performance is assured. The developed methodology greatly expands the possibilities for evaluating hydraulic properties in place, especially for measurements in undisturbed soil within plant rooting zones. The objectives of the project were (i) To develop methods for obtaining rapid and reliable estimates of unsaturated hydraulic properties in situ, based on water distribution from subsurface point sources. These can be operated with a constant flow or at a constant head; (ii) To develop methods for distinguishing between matrix and preferential flow using cavities/permeameters under tension; (iii) To evaluate auxiliary measurements such as soil water content or tensions near the operating cavities to improve reliability of results; and (iv) To develop numerical and analytical models for obtaining soil hydraulic properties based on measurements from buried-cavity sources and the auxiliary measurements. The project began in July 1995 and was terminated in November 1998. All of the objectives were pursued. Three new subsurface point sources were designed and tested, and two old types were also used.
Two of the three new designs used a nylon cloth membrane (30 mm) arranged in a cylindrical geometry and operating at a negative water pressure (tension). A separate bladder arrangement allowed inflation under a positive pressure to maintain contact between the membrane and the soil cavity. The third new design used porous stainless steel (0.5 and 5 mm) arranged in six segments, each with its own water inlet, assembled to form a cylindrical supply surface when inflated in a borehole. The "old" types included an "off-the-shelf" porous cup as well as measurements from a subsurface drip emitter in a small subsurface cavity. Reasonable measurements were made with all systems. Sustained use of the cloth membrane devices was difficult because of leaks and plugging problems. All of the devices require careful consideration to assure contact with the soil system. Steady flow was established, which simplified the analysis (except for the drip emitter, which used a transient analysis).
