Journal articles on the topic 'Workload modeling and performance evaluation'




Consult the top 50 journal articles for your research on the topic 'Workload modeling and performance evaluation.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Khan, Subayal, Jukka Saastamoinen, Jyrki Huusko, Juha-Pekka Soininen, and Jari Nurmi. "Application Workload Modelling via Run-Time Performance Statistics." International Journal of Embedded and Real-Time Communication Systems 4, no. 2 (April 2013): 1–35. http://dx.doi.org/10.4018/jertcs.2013040101.

Abstract:
Modern nomadic mobile devices, for example internet tablets and high-end mobile phones, support diverse distributed and stand-alone applications that were supported by separate single-purpose devices a decade ago. Furthermore, the complex heterogeneous platforms supporting these applications contain multi-core processors, hardware accelerators, and IP cores, all of which can be integrated into a single integrated circuit (chip). The high complexity of both the platform and the applications, together with the availability of several design alternatives, makes the design space very large. The system designer must therefore be able to quickly evaluate the performance of different application architectures and implementations on potential platforms. The most popular technique employed today, termed system-level performance evaluation, uses abstract workload and platform capacity models. Because the platform capacity models and application workload models reside at a higher abstraction level, they can be instantiated with reduced modeling effort and simulated at a higher speed. This article presents a novel technique for run-time-statistics-based application workload model extraction and platform configuration, called platform COnfiguration and woRkload generatIoN via code instrumeNtation and performAnce counters (CORINNA). It offers several advantages over the compiler-based technique ABSINTH and also provides automatic configuration of platform processor models, for example with the cache hits and misses observed during application execution.
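
The abstract does not include code, but the core idea of deriving workload models from run-time statistics can be illustrated with a small sketch. The following Python fragment is our own illustration, not the CORINNA tool: a decorator instruments application functions and accumulates per-function invocation counts and CPU time, the kind of raw statistics from which abstract workload primitives could be derived.

```python
import time
from collections import defaultdict
from functools import wraps

# Per-function run-time statistics (a hypothetical stand-in for the counters
# an instrumentation-based tool would gather).
STATS = defaultdict(lambda: {"calls": 0, "cpu_seconds": 0.0})

def instrument(func):
    """Record call counts and CPU time for each decorated function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.process_time()
        try:
            return func(*args, **kwargs)
        finally:
            rec = STATS[func.__name__]
            rec["calls"] += 1
            rec["cpu_seconds"] += time.process_time() - start
    return wrapper

@instrument
def decode_frame(n):
    return sum(i * i for i in range(n))   # stand-in for real application work

for _ in range(100):
    decode_frame(10_000)

# The accumulated statistics form an abstract workload model:
# (function, invocation count, cumulative processing demand).
for name, rec in STATS.items():
    print(f"{name}: {rec['calls']} calls, {rec['cpu_seconds']:.3f} CPU-s")
```
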
2

Teo, Grace, Gerald Matthews, Lauren Reinerman-Jones, and Daniel Barber. "Adaptive aiding with an individualized workload model based on psychophysiological measures." Human-Intelligent Systems Integration 2, no. 1-4 (November 28, 2019): 1–15. http://dx.doi.org/10.1007/s42454-019-00005-8.

Abstract:
Potential benefits of technology such as automation are often negated by improper use and application. Adaptive systems provide a means to calibrate the use of technological aids to the operator's state, such as workload state, which can change throughout the course of a task. Such systems require a workload model that detects workload and specifies the level at which aid should be rendered. Workload models that use psychophysiological measures have the advantage of detecting workload continuously and relatively unobtrusively, although inter-individual variability in psychophysiological responses to workload is a major challenge for many models. This study describes an approach to workload modeling with multiple psychophysiological measures that generalizes across individuals yet accommodates inter-individual variability. Under this approach, several novel algorithms were formulated. Each underwent an evaluation process that included comparison of the algorithm's performance to an at-chance level and assessment of its robustness. Further evaluations examined the sensitivity of the shortlisted algorithms at various threshold values for triggering an adaptive aid.
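
The algorithms themselves are not given in the abstract, but the evaluation logic it describes, comparing accuracy against an at-chance level and sweeping the trigger threshold, is easy to sketch. Below is a hypothetical Python illustration with synthetic workload scores; none of the names or numbers come from the paper.

```python
import random

random.seed(1)

# Hypothetical data: ground-truth overload labels and continuous workload
# scores from a psychophysiological model (higher = more loaded).
labels = [random.random() < 0.5 for _ in range(500)]
scores = [random.gauss(0.65 if y else 0.45, 0.15) for y in labels]

def accuracy_at(threshold):
    """Trigger the adaptive aid whenever the score exceeds the threshold."""
    hits = sum((s > threshold) == y for s, y in zip(scores, labels))
    return hits / len(labels)

# At-chance baseline: always predicting the majority class.
chance = max(sum(labels), len(labels) - sum(labels)) / len(labels)
for t in (0.4, 0.5, 0.6):
    print(f"threshold={t:.1f}  accuracy={accuracy_at(t):.2f}  chance={chance:.2f}")
```
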
3

Jeong, Heejin, and Yili Liu. "Development and Evaluation of a Computational Human Performance Model of In-vehicle Manual and Speech Interactions." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 1642. http://dx.doi.org/10.1177/1541931218621372.

Abstract:
Usability evaluation traditionally relies on costly and time-consuming human-subject experiments, which typically involve developing physical prototypes, designing usability experiments, and recruiting human subjects. To minimize the limitations of human-subject experiments, computational human performance models can be used as an alternative. Human performance models generate digital simulations of human performance and examine the underlying psychological and physiological mechanisms to help understand and predict human performance. A variety of in-vehicle information systems (IVISs) using advanced automotive technologies have been developed to improve driver interactions with in-vehicle systems. Numerous studies have used human subjects to evaluate in-vehicle human-system interactions; however, there are few modeling studies that estimate and simulate human performance, especially for in-vehicle manual and speech interactions. This paper presents a computational human performance modeling study for a usability test of IVISs using manual and speech interactions. Specifically, the model was designed to generate digital simulations of human performance for a driver seat adjustment task (deflating the lower lumbar support) using three different IVIS controls: direct-manual, indirect-manual, and voice controls. The direct-manual control presses buttons on the touchscreen display located on the center stack in the vehicle. The indirect-manual control presses physical buttons mounted on the steering wheel to control a small display in the dashboard cluster, which requires confirming visual feedback on the cluster display. The voice control issues a voice command, “deflate lower lumbar,” through an in-vehicle speaker. The model was developed to estimate task completion time and workload for the driver seat adjustment task, using the Queueing Network cognitive architecture (Liu, Feyen, & Tsimhoni, 2006). Processing times in the model were recorded every 50 msec and used as estimates of task completion time. The estimated workload was measured as the percentage utilization of servers in the architecture. After development, the model was evaluated against an empirical data set of thirty-five human subjects from Chen, Tonshal, Rankin, & Feng (2016), in which task completion times for the driver seat adjustment task using commercial in-vehicle systems (i.e., SYNC with MyFord Touch) were recorded. Driver workload was measured by NASA's task load index (TLX); the average of the six NASA-TLX category values was compared with the model's estimated workload. The model produced results similar to actual human performance (i.e., task completion time and workload). The real-world engineering example presented in this study contributes to the literature of computational human performance modeling research.
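
Two of the quantities in this abstract are simple to compute once the simulation traces exist: workload as the percentage utilization of a server in the architecture, and the empirical comparison value as the mean of the six NASA-TLX subscales. A minimal sketch with hypothetical numbers:

```python
# Server utilization as a workload estimate: fraction of simulated time a
# cognitive server (e.g., the visual-perception server) was busy.
busy_ms = 41_350           # hypothetical cumulative busy time
total_ms = 62_500          # hypothetical task completion time (50 ms ticks)
utilization = 100 * busy_ms / total_ms
print(f"estimated workload: {utilization:.1f}% utilization")

# Empirical comparison value: the mean of NASA-TLX's six subscales.
tlx = {"mental": 55, "physical": 20, "temporal": 45,
       "performance": 40, "effort": 50, "frustration": 30}  # hypothetical ratings
print(f"NASA-TLX composite: {sum(tlx.values()) / len(tlx):.1f}")
```
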
4

Haverkort, Boudewijn R. "Performability Evaluation of Fault-Tolerant Computer Systems Using DYQNTOOL+." International Journal of Reliability, Quality and Safety Engineering 2, no. 4 (December 1995): 383–404. http://dx.doi.org/10.1142/s0218539395000277.

Abstract:
For fault-tolerant computer systems (FTCS) supporting critical applications, it is of key importance to be able to answer the question of whether they indeed fulfill the quality-of-service requirements of their users. In particular, answers related to the combined performance and dependability of the FTCS are important. To facilitate these so-called performability studies, we present DYQNTOOL+, a performability evaluation tool based on the dynamic queuing network concept that allows for combined modeling of system performance and dependability. Unlike other performability evaluation tools, DYQNTOOL+ combines two different modeling paradigms, queuing networks and stochastic Petri nets, for the performance and the dependability aspects of the system under study, respectively. The mutual relations between these two model parts, such as workload-induced failures and performance decreases due to failures, are explicitly modeled as well. This combination of modeling paradigms allows systems to be modeled in greater detail, thereby often revealing behavior that cannot be revealed otherwise. We present the dynamic queuing network modeling approach and its implementation in DYQNTOOL+, and illustrate its usage with a number of examples.
5

Xu, Rongbing, and Shi Cao. "Modeling Pilot Flight Performance in a Cognitive Architecture: Model Demonstration." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 1254–58. http://dx.doi.org/10.1177/1071181321651008.

Abstract:
Cognitive architecture models can support the simulation and prediction of human performance in complex human-machine systems. In the current work, we demonstrate a pilot model that can perform and simulate taxiing and takeoff tasks. The model was built in the Queueing Network-Adaptive Control of Thought Rational (QN-ACTR) cognitive architecture and can be connected to flight simulators such as X-Plane to generate various data, including performance, mental workload, and situation awareness. The model results are determined jointly by declarative knowledge chunks, production rules, and a set of parameters. Currently, the model can generate flight operation behavior similar to that of human pilots. We will collect human pilot data to further examine and validate model assumptions and parameter values. Once validated, such models can support interface evaluation and competency-based pilot training, providing a theory-based predictive approach complementary to human-in-the-loop experiments for aviation research and development.
6

Yadav, Rajeev Ranjan, Gleidson A. S. Campos, Erica Teixeira Gomes Sousa, and Fernando Aires Lins. "A Strategy for Performance Evaluation and Modeling of Cloud Computing Services." Revista de Informática Teórica e Aplicada 26, no. 1 (April 14, 2019): 78. http://dx.doi.org/10.22456/2175-2745.87511.

Abstract:
On-demand services and reduced costs have made cloud computing a popular mechanism for providing scalable resources according to users' expectations. This paradigm plays an important role in business and academic organizations, supporting applications and services deployed on virtual machines and containers, two different virtualization technologies. Cloud environments can support workloads generated by large numbers of users who request the execution of transactions, and their performance should be evaluated and estimated in order to achieve client satisfaction when cloud services are offered. This work proposes a performance evaluation strategy composed of a performance model and a methodology for evaluating the performance of services configured in virtual machines and containers in cloud infrastructures. The performance model is based on stochastic Petri nets. A case study in a real public cloud illustrates the feasibility of the strategy. The case study experiments were performed with virtual machines and containers supporting workloads related to social network transactions.
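
The abstract does not define the Petri net itself, so as a rough illustration of the modeling style only, the sketch below simulates a minimal stochastic Petri net with two timed transitions (transaction arrival and service) using Gillespie-style exponential races; the structure and rates are hypothetical, not the paper's model.

```python
import random

random.seed(7)

# Minimal stochastic Petri net: place "queue" holds pending transactions;
# timed transitions "arrive" (rate lam) and "serve" (rate mu, enabled only
# while the place is marked). Rates are hypothetical.
lam, mu = 8.0, 10.0
t, horizon = 0.0, 10_000.0
tokens = 0          # marking of the "queue" place
area = 0.0          # time-integral of the marking, for the time average

while t < horizon:
    rates = {"arrive": lam}
    if tokens > 0:
        rates["serve"] = mu
    dt = random.expovariate(sum(rates.values()))   # time until next firing
    area += tokens * dt
    t += dt
    fire = random.choices(list(rates), weights=list(rates.values()))[0]
    tokens += 1 if fire == "arrive" else -1

rho = lam / mu
print(f"mean tokens ≈ {area / t:.2f}  (M/M/1 theory: {rho / (1 - rho):.2f})")
```
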
7

Bracken, Bethany, Noa Palmon, Lee Kellogg, Seth Elkin-Frankston, and Michael Farry. "A Cross-Domain Approach to Designing an Unobtrusive System to Assess Human State and Predict Upcoming Performance Deficits." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (September 2016): 707–11. http://dx.doi.org/10.1177/1541931213601162.

Abstract:
Many work environments are fraught with highly variable demands on cognitive workload, fluctuating between periods of high operational demand, to the point of cognitive overload, and long periods of low workload bordering on boredom. When cognitive workload is not in an optimal range at either end of the spectrum, it can be detrimental to situational awareness and operational readiness, resulting in impaired cognitive functioning (Yerkes and Dodson, 1908). An unobtrusive system that assesses the state of the human operator (e.g., stress, cognitive workload) and predicts upcoming performance deficits could warn operators when steps should be taken to augment cognitive readiness. Such a system would also be useful during testing and evaluation (T&E) when new tools and systems are being evaluated for operational use: T&E researchers could accurately evaluate the cognitive and physical demands of these new tools and systems and the effects they will have on task performance and accuracy. In this paper, we describe an approach to designing such a system that is applicable across environments. First, a suite of sensors performs real-time synchronous data collection in a robust and unobtrusive fashion and provides a holistic assessment of operators. Second, the best combination of indicators of operator state is extracted, fused, and interpreted. Third, performance deficits are comprehensively predicted, optimizing the likelihood of mission success. Finally, the data are displayed in a way that supports the information requirements of any user. We have successfully used this approach in several projects, including modeling cognitive workload in high-tempo, physically demanding environments, and modeling individual and team workload, stress, engagement, and performance during collaborative computerized tasks. We believe this approach is widely applicable across domains to dramatically improve the mission readiness of human operators, and that it will improve the design and development of tools available to assist the operator in carrying out mission objectives. A system designed using this approach could make crews aware of impending deficits to aid in augmenting mission performance, and could enable more effective T&E by measuring workload in response to new tools and systems while they are being designed and developed, rather than once they are deployed.
8

Roth, Tamara, Franz-Josef Scharfenberg, Julia Mierdel, and Franz X. Bogner. "Self-evaluative Scientific Modeling in an Outreach Gene Technology Laboratory." Journal of Science Education and Technology 29, no. 6 (August 12, 2020): 725–39. http://dx.doi.org/10.1007/s10956-020-09848-2.

Abstract:
The integration of scientific modeling into science teaching is key to the development of students' understanding of complex scientific phenomena, such as genetics. With this in mind, we conducted an introductory hands-on module during an outreach gene technology laboratory on the structure of DNA. Our module examined the influence of two model evaluation variants on cognitive achievement: evaluation 1, based on students' hand-drawn sketches of DNA models and two open questions, and evaluation 2, based on students' own evaluations of their models in comparison to a commercially available DNA model. We subsequently subdivided our sample (N = 296) into modellers-1 (n = 151) and modellers-2 (n = 145). Analyses of cognitive achievement revealed that modellers-2 achieved higher scores than modellers-1; in both cases, low achievers in particular benefitted from participation. Assessment of modellers-2 self-evaluation sheets revealed differences between self-evaluation and independent reassessment, as non-existent model features were tagged as correct whereas existent features were not identified. Correlation analyses between the models' assessment scores and cognitive achievement revealed small-to-medium correlations. Consequently, our evaluation-2 phase improved students' overall and model-related cognitive achievement, attesting to the value of our module as a means to integrate real scientific practices into science teaching. Although it may increase the workload for science teachers, we find that the potential scientific modeling holds as an inquiry-based learning strategy is worth the effort.
9

Mechalikh, Charafeddine, Hajer Taktak, and Faouzi Moussa. "PureEdgeSim: A simulation framework for performance evaluation of cloud, edge and mist computing environments." Computer Science and Information Systems, no. 00 (2020): 42. http://dx.doi.org/10.2298/csis200301042m.

Abstract:
Edge and Mist Computing are two emerging paradigms that aim to reduce latency and the Cloud workload by bringing applications close to Internet of Things (IoT) devices. In such complex environments, simulation makes it possible to evaluate adopted strategies before their deployment on a real distributed system. However, despite research advances in this area, simulation tools are lacking, especially in the case of Mist Computing [11], where heterogeneous and constrained devices cooperate and share their resources. Motivated by this, we present PureEdgeSim, a simulation toolkit that enables the simulation of Cloud, Edge, and Mist Computing environments and the evaluation of resource management strategies in terms of delays, energy consumption, resource utilization, and task success rate. To show its capabilities, we introduce a case study in which we evaluate different architectures, orchestration algorithms, and the impact of offloading criteria. The simulation results show the effectiveness of PureEdgeSim in modeling such complex and dynamic environments.
10

Srinivas, Sharan, Roland Paul Nazareth, and Md Shoriat Ullah. "Modeling and analysis of business process reengineering strategies for improving emergency department efficiency." SIMULATION 97, no. 1 (October 8, 2020): 3–18. http://dx.doi.org/10.1177/0037549720957722.

Abstract:
Emergency departments (ED) in the USA treat 136.9 million cases annually and account for nearly half of all medical care delivered. Due to high demand and the limited availability of resources (such as doctors and beds), the median waiting time for ED patients is around 90 minutes. This research is motivated by a real-life case study of an ED located in central Missouri, USA, which faces the problem of congestion and improper workload distribution (e.g., overburdened ED doctors). The objective of this paper is to minimize patient waiting time and efficiently allocate workload among resources in an economical manner. The systematic framework of Business Process Reengineering (BPR), along with a discrete-event simulation modeling approach, is employed to analyze current operations and potential improvement strategies. Alternative scenarios pertaining to process change, workforce planning, and capacity expansion are proposed. Besides process performance measures (waiting time and resource utilization), other criteria, such as responsiveness, cost of adoption, and associated risk, are also considered in evaluating each alternative. The experimental analysis indicates that a change in the triage process (evenly distributing medium-acuity patients among doctors and mid-level providers) is economical, easy to implement, reduces physician workload, and improves average waiting time by 20%, making it attractive for short-term adoption. On the other hand, optimizing the workforce level based on historical demand patterns while adopting the triage process change delivers the best performance (84% reduction in waiting time and balanced resource utilization) and is recommended as a long-term solution.
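
To give a flavor of the discrete-event approach the study uses (though not its actual ED model), the sketch below estimates the mean waiting time for a single-queue unit before and after adding one provider; all rates are hypothetical.

```python
import heapq
import random

random.seed(42)

def mean_wait(arrival_rate, service_rate, servers, n_patients=50_000):
    """Event-driven M/M/c sketch: returns the average wait (hours)."""
    free_at = [0.0] * servers        # next time each provider becomes free
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_patients):
        t += random.expovariate(arrival_rate)    # next patient arrival
        soonest = heapq.heappop(free_at)         # earliest available provider
        start = max(t, soonest)                  # FCFS start of treatment
        total_wait += start - t
        heapq.heappush(free_at, start + random.expovariate(service_rate))
    return total_wait / n_patients

# Hypothetical rates: 5.4 patients/hour; each provider treats 2/hour.
before = mean_wait(5.4, 2.0, servers=3)
after = mean_wait(5.4, 2.0, servers=4)   # e.g., adding a mid-level provider
print(f"avg wait before: {before * 60:.0f} min, after: {after * 60:.0f} min")
```
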
11

Yang, Li, Junlin Yi, and Hui Peng. "Big-Data Measurement-Model Research about Judges’ Actual Workload in China." Asian Journal of Law and Society 7, no. 3 (October 2020): 541–60. http://dx.doi.org/10.1017/als.2019.31.

Abstract:
As the growing number of cases drains the limited court resources in China, how to scientifically measure the reasonable saturated workload of judges has become an urgent issue. This issue is a prerequisite for other important topics such as the determination of judges' quotas, measurement of the actual workload of a trial team, performance evaluation of judges, and resource allocation within courts. Data-driven measurement of the actual workload of China's judges depends on various factors such as local economic development, public transportation, past case-load, and the staffing of assistant positions. Therefore, traditional approaches that depend on only a single element, such as cause of action, do not work well. We propose a modelling framework based on big-data and machine-learning technology to more accurately measure the actual workload of judges. This framework extracts the core elements of judicial cases, assigns target workloads to the cases based on feedback from judges and analysis of case samples to create a standard training dataset, and trains machine-learning models on the data. A preliminary case-weight calculation model is built using the framework. In addition, the model is continuously evaluated and improved by comparing its output with the actual demand in a court through methods such as sampling, questionnaires, and expert evaluation.
12

Hanguan, Wen, Xu Zhihui, Xue Ke, Jiang Chenyu, and Yang Ming. "Modeling digital main control room operator’s resilience under extreme conditions: An Experiment design scheme." E3S Web of Conferences 245 (2021): 03018. http://dx.doi.org/10.1051/e3sconf/202124503018.

Abstract:
Human reliability is one of the most important factors affecting nuclear power plant (NPP) operation. In advanced digital NPP main control rooms with high levels of automation, systematic operation, which requires sufficient mental workload to address undesired events, has become a critical challenge for operators. The aim of this research is to characterize operator reliability by developing a resilience model. In this work, a seven-stage technical framework is proposed, covering the theoretical analysis, experimental design, and hardware setup needed to establish the model for NPP operators in a downsized main control room cabin. The resilience model, which captures operators' reliability by assessing their performance on basic skill tasks and evaluating their cognitive workload, can be used to assess the training level of newly employed operators as well as human reliability in other critical process industries.
13

Kroß, Johannes, and Helmut Krcmar. "PerTract: Model Extraction and Specification of Big Data Systems for Performance Prediction by the Example of Apache Spark and Hadoop." Big Data and Cognitive Computing 3, no. 3 (August 9, 2019): 47. http://dx.doi.org/10.3390/bdcc3030047.

Abstract:
Evaluating and predicting the performance of big data applications is required to efficiently size capacities and manage operations. Gaining profound insights into the system architecture, component dependencies, resource demands, and configurations is difficult for engineers. To address these challenges, this paper presents an approach to automatically extract and transform system specifications to predict the performance of applications. It consists of three components. First, a system- and tool-agnostic domain-specific language (DSL) allows the modeling of performance-relevant factors of big data applications, computing resources, and data workload. Second, DSL instances are automatically extracted from monitored measurements of Apache Spark and Apache Hadoop (i.e., YARN and HDFS) systems. Third, these instances are transformed into model- and simulation-based performance evaluation tools to allow predictions. By adapting DSL instances, our approach enables engineers to predict the performance of applications for different scenarios such as changing data input and resources. We evaluate our approach by predicting the performance of linear regression and random forest applications from the HiBench benchmark suite. Simulation results of adjusted DSL instances, compared to measurement results, show prediction errors below 15% based upon averages for response times and resource utilization.
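
The paper's DSL is not reproduced in the abstract; the toy Python sketch below only mimics the workflow it describes (a declarative instance holding workload, resources, and extracted measurements, which is then adapted for what-if prediction). All class and field names are our own invention, and the scaling rule is a deliberately naive stand-in for the simulation-based tools the paper targets.

```python
from dataclasses import dataclass

@dataclass
class DataWorkload:
    input_gb: float

@dataclass
class Resources:
    executors: int
    cores_per_executor: int

@dataclass
class SparkApp:
    name: str
    workload: DataWorkload
    resources: Resources
    measured_runtime_s: float   # extracted from monitoring data

def predict_runtime(app: SparkApp, new_workload: DataWorkload,
                    new_resources: Resources) -> float:
    """Naive what-if prediction: runtime scales with data volume and
    inversely with total cores (a stand-in for simulation-based tools)."""
    data_factor = new_workload.input_gb / app.workload.input_gb
    core_factor = ((app.resources.executors * app.resources.cores_per_executor)
                   / (new_resources.executors * new_resources.cores_per_executor))
    return app.measured_runtime_s * data_factor * core_factor

base = SparkApp("linear-regression", DataWorkload(50), Resources(4, 4), 320.0)
# 4x the data on 2x the cores -> predicted 2x the runtime:
print(predict_runtime(base, DataWorkload(200), Resources(8, 4)))   # 640.0
```
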
14

Lima, Cláudio, and Ronaldo Santos Mello. "On proposing and evaluating a NoSQL document database logical approach." International Journal of Web Information Systems 12, no. 4 (November 7, 2016): 398–417. http://dx.doi.org/10.1108/ijwis-04-2016-0018.

Abstract:
Purpose: NoSQL databases do not require a default schema associated with the data. Even so, they are categorized by data models, and a model associated with the data can promote better strategies for persisting and manipulating data in the target database. Based on this motivation, the purpose of this paper is to present an approach for the logical design of NoSQL document databases: a process that converts a conceptual model into efficient logical representations for a NoSQL document database. The authors also evaluate their approach and demonstrate that the generated NoSQL logical structures reduce the amount of data items accessed by queries.
Design/methodology/approach: This paper presents an approach for the logical design of NoSQL document database schemas based on a conceptual schema. The authors generate compact, redundancy-free schemas and define appropriate representations in a NoSQL document logical model. The estimated volume of data and workload information can be considered to generate optimized NoSQL document structures.
Findings: The approach was evaluated through a case study with an experimental evaluation in the e-commerce application domain. The results demonstrate that the authors' workload-based conversion process improves query performance on NoSQL documents by reducing the number of database accesses.
Originality/value: Unlike related work, the reported approach covers all typical conceptual constructs, details a conversion process between conceptual schemas and logical representations for the NoSQL document database category and, additionally, considers the estimated database workload to optimize the logical structure. An experimental evaluation shows that the proposed approach is promising.
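
To make the design choice concrete: for a Customer-Order relationship, a workload-aware conversion decides between embedding and referencing based on how often the entities are accessed together. The sketch below is our own illustration of that trade-off, not the authors' algorithm; all names and numbers are hypothetical.

```python
# (a) Embed orders inside the customer document: one access serves the
# frequent query "customer together with its orders".
customer_embedded = {
    "_id": "c42",
    "name": "Alice",
    "orders": [{"id": "o1", "total": 99.5}, {"id": "o2", "total": 15.0}],
}

# (b) Reference orders from a separate collection: better when orders are
# updated frequently or queried independently of their customer.
customer_ref = {"_id": "c42", "name": "Alice", "order_ids": ["o1", "o2"]}
orders = {"o1": {"total": 99.5, "customer": "c42"},
          "o2": {"total": 15.0, "customer": "c42"}}

# Workload-driven rule of thumb: if the estimated share of queries that read
# a customer together with its orders is high, prefer embedding.
read_together_share = 0.8   # hypothetical workload estimate
design = customer_embedded if read_together_share > 0.5 else customer_ref
print("chosen design:", "embedded" if design is customer_embedded else "referenced")
```
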
15

Daghistani, Anas, Walid G. Aref, Arif Ghafoor, and Ahmed R. Mahmood. "SWARM: Adaptive Load Balancing in Distributed Streaming Systems for Big Spatial Data." ACM Transactions on Spatial Algorithms and Systems 7, no. 3 (June 7, 2021): 1–43. http://dx.doi.org/10.1145/3460013.

Abstract:
The proliferation of GPS-enabled devices has led to the development of numerous location-based services. These services need to process massive amounts of streamed spatial data in real time. The current scale of spatial data cannot be handled using centralized systems, which has led to the development of distributed spatial streaming systems. Existing systems use static spatial partitioning to distribute the workload, whereas real-time streamed spatial data follows non-uniform spatial distributions that continuously change over time. Distributed spatial streaming systems need to react to changes in the distribution of spatial data and queries. This article introduces SWARM, a lightweight adaptivity protocol that continuously monitors the data and query workloads across the distributed processes of a spatial data streaming system and redistributes and rebalances the workloads as soon as performance bottlenecks are detected. SWARM is able to handle multiple query-execution and data-persistence models. A distributed streaming system can directly use SWARM to adaptively rebalance its workload among its machines with minimal changes to the original code of the underlying spatial application. Extensive experimental evaluation using real and synthetic datasets illustrates that, on average, SWARM achieves a 2× improvement in throughput over a static grid partitioning that is determined by observing a limited history of the data and query workloads, and reduces execution latency by 4× on average compared with the static approach.
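
SWARM's protocol is considerably richer than this, but the basic adaptivity loop it describes (monitor per-machine load, detect a bottleneck, migrate the hottest partition) can be sketched in a few lines; the threshold, names, and data below are hypothetical.

```python
THRESHOLD = 0.80   # hypothetical utilization bound marking a bottleneck

def rebalance(machines: dict[str, dict[str, float]]) -> None:
    """machines maps machine -> {spatial partition: load fraction}."""
    load = {m: sum(parts.values()) for m, parts in machines.items()}
    hot = max(load, key=load.get)
    cold = min(load, key=load.get)
    if load[hot] > THRESHOLD and hot != cold:
        # migrate the hottest spatial partition to the least-loaded machine
        part = max(machines[hot], key=machines[hot].get)
        machines[cold][part] = machines[hot].pop(part)

cluster = {"m1": {"grid-7": 0.55, "grid-3": 0.30}, "m2": {"grid-1": 0.20}}
rebalance(cluster)
print(cluster)   # grid-7 moves from m1 to m2
```
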
16

Jordan, C. S., E. W. Farmer, A. J. Belyavin, S. J. Selcon, A. J. Bunting, C. R. Shanks, and P. Newman. "Empirical Validation of the Prediction of Operator Performance (POP) Model." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 40, no. 2 (October 1996): 39–43. http://dx.doi.org/10.1177/154193129604000207.

Abstract:
This paper describes an experiment conducted to validate the Prediction of Operator Performance (POP) model in a flight simulation context. The POP model uses subjective ratings of the demand imposed by single tasks to predict both the demand and the performance associated with concurrent tasks. Previous experiments on the POP model have investigated a wide range of experimental tasks, including tracking and verbal reasoning. In this experiment, eight subjects performed flight control, threat assessment, and threat identification tasks singly and in combination. Performance measures and POP scores were collected at the completion of each task condition. The results demonstrated performance decrements in the dual-task conditions that were consistent with the predictions. The implications for the POP model are discussed in terms of workload modelling and human performance modelling within the context of the Integrated Performance Modelling Environment (IPME) currently being developed within the Defence Evaluation and Research Agency.
17

Stavrinides, Georgios L., and Helen D. Karatza. "Performance evaluation of a SaaS cloud under different levels of workload computational demand variability and tardiness bounds." Simulation Modelling Practice and Theory 91 (February 2019): 1–12. http://dx.doi.org/10.1016/j.simpat.2018.11.006.

18

Plott, Beth M., Shelly Scott-Nash, Bruce P. Hallbert, and Angelia L. Sebok. "Computer Modeling of a Nuclear Power Plant Operating Crew to Aid in Analysis of Crew Size Issues." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 39, no. 18 (October 1995): 1214–18. http://dx.doi.org/10.1177/154193129503901814.

Abstract:
An analytical approach to addressing the implications of nuclear power plant shift sizing is needed to augment the classical empirical approach. The research reported in this paper evaluated the feasibility and validity of one potential analytical approach as a means of evaluating the consequences of crew reduction on crew performance in a nuclear power plant setting. The approach selected for analysis was task network modeling and simulation using a tool named Micro Saint. Task network modeling allows the human factors engineer to extend the information from a task analysis and generate a computer simulation of crew performance that can predict critical task times and error rates. Through modeling, current and proposed processes can be evaluated and analyzed in order to understand, identify, and test opportunities for process improvement or reengineering. For this effort, models of a conventional nuclear power plant during four extremely demanding scenarios were developed. Task analysis and timing data were collected at the Imatran Voima Nuclear Power Plant at Loviisa, Finland. The task analyses were collected over a two-week period by interviewing reactor operators, reviewing procedures, and conducting walk-throughs. We then refined the models and incorporated workload modeling constructs. At the completion of the modeling effort, the models were executed and the data collected were used to predict crew performance under varying staffing conditions.
19

RahimiZadeh, Keyvan, Reza Nasiri Gerde, Morteza AnaLoui, and Peyman Kabiri. "Performance evaluation of Web server workloads in Xen-based virtualized computer system: analytical modeling and experimental validation." Concurrency and Computation: Practice and Experience 27, no. 17 (January 26, 2015): 4741–62. http://dx.doi.org/10.1002/cpe.3454.

20

Zhang, Yixiao, Xiaosong Wang, Ziyue Xu, Qihang Yu, Alan Yuille, and Daguang Xu. "When Radiology Report Generation Meets Knowledge Graph." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12910–17. http://dx.doi.org/10.1609/aaai.v34i07.6989.

Abstract:
Automatic radiology report generation has attracted increasing research interest in recent years as a computer-aided diagnosis technique for alleviating the workload of doctors. Deep learning techniques for natural image captioning have been successfully adapted to generating radiology reports. However, radiology image reporting differs from the natural image captioning task in two aspects: 1) the accuracy of positive disease keyword mentions is critical in radiology image reporting, in comparison to the equivalent importance of every single word in a natural image caption; 2) the evaluation of reporting quality should focus more on matching the disease keywords and their associated attributes than on counting the occurrence of N-grams. Based on these concerns, we propose to utilize a pre-constructed graph embedding module (modeled with a graph convolutional neural network) on multiple disease findings to assist the generation of reports. The incorporation of the knowledge graph allows for dedicated feature learning for each disease finding and the modeling of relationships between them. In addition, we propose a new evaluation metric for radiology image reporting with the assistance of the same composed graph. Experimental results demonstrate the superior performance of the methods integrated with the proposed graph embedding module on a publicly accessible dataset (IU-RR) of chest radiographs, compared with previous approaches using both the conventional evaluation metrics commonly adopted for image captioning and our proposed ones.
21

May, Kieran W., Chandani KC, Jose Jorge Ochoa, Ning Gu, James Walsh, Ross T. Smith, and Bruce H. Thomas. "The Identification, Development, and Evaluation of BIM-ARDM: A BIM-Based AR Defect Management System for Construction Inspections." Buildings 12, no. 2 (January 28, 2022): 140. http://dx.doi.org/10.3390/buildings12020140.

Abstract:
This article presents our findings from a three-stage research project, which consists of the identification, development, and evaluation of a defect management Augmented Reality (AR) prototype that incorporates Building Information Modelling (BIM) technologies. Within the first stage, we conducted a workshop with four construction-industry representatives to capture their opinions and perceptions of the potentials and barriers associated with the integration of BIM and AR in the construction industry. The workshop findings led us to the second stage, which consisted of the development of an on-site BIM-based AR defect management (BIM-ARDM) system for construction inspections. Finally, a study was conducted to evaluate BIM-ARDM in comparison to the current paper-based defect management inspection approach employed on construction sites. The findings from the study revealed BIM-ARDM significantly outperformed current approaches in terms of usability, workload, performance, completion time, identifying defects, locating building elements, and assisting the user with the inspection task.
22

Garrido-Labrador, José Luis, Daniel Puente-Gabarri, José Miguel Ramírez-Sanz, David Ayala-Dulanto, and Jesus Maudes. "Using Ensembles for Accurate Modelling of Manufacturing Processes in an IoT Data-Acquisition Solution." Applied Sciences 10, no. 13 (July 2, 2020): 4606. http://dx.doi.org/10.3390/app10134606.

Abstract:
The development of complex real-time platforms for the Internet of Things (IoT) opens up a promising future for the diagnosis and optimization of machining processes. Many issues still have to be solved before IoT platforms can be profitable for small workshops with very flexible workloads and workflows. The main obstacles concern sensor implementation, IoT architecture, and data processing and analysis. In this research, the use of different machine-learning techniques is proposed for the extraction of different information from an IoT platform connected to a machining center working under real industrial conditions in a workshop. The aim is to evaluate which algorithmic technique might be the best for building accurate prediction models for one of the main demands of workshops: the optimization of machining processes. This evaluation, completed under real industrial conditions, works with very limited information on the machining workload of the machining center and with unbalanced datasets. The strategy is validated for the classification of the state of a machining center, its working mode, and the prediction of the thermal evolution of the main machine-tool motors: the axis motors and the milling head motor. The results show the superiority of ensembles for both classification problems under analysis and all four regression problems. In particular, Rotation Forest-based ensembles turned out to have the best performance in the experiments for all the metrics under study. The models are accurate enough to provide useful conclusions applicable to current industrial practice, such as improvements in machine programming to avoid cutting conditions that might greatly reduce tool lifetime and damage machine components.
23

Alvanos, Michail, and Theodoros Christoudias. "GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)." Geoscientific Model Development 10, no. 10 (October 10, 2017): 3679–93. http://dx.doi.org/10.5194/gmd-10-3679-2017.

Abstract:
This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, in kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparison to the CPU-only code of the application; the median relative difference between the output of the accelerated kernel and the CPU-only code is found to be less than 0.000000001%. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
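
The reported numbers are mutually consistent under Amdahl's law: a 20.4× kernel speed-up that yields only a 1.75× overall speed-up implies that chemical kinetics consumed roughly 45% of the original runtime, assuming everything else in the application is unchanged. A quick check:

```python
# Amdahl's law: S_total = 1 / ((1 - f) + f / S_kernel), solved for the
# accelerated fraction f of the original runtime.
s_kernel, s_total = 20.4, 1.75
f = (1 - 1 / s_total) / (1 - 1 / s_kernel)
print(f"chemical kinetics ≈ {f:.0%} of total runtime")   # ≈ 45%
```
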
24

Mandjes, Michel, and Werner Scheinhardt. "A Fluid Model for a Relay Node in an Ad Hoc Network: Evaluation of Resource Sharing Policies." Journal of Applied Mathematics and Stochastic Analysis 2008 (July 13, 2008): 1–25. http://dx.doi.org/10.1155/2008/518214.

Abstract:
Fluid queues offer a natural framework for analyzing waiting times in a relay node of an ad hoc network. Because of the resource sharing policy applied, the input and output of these queues are coupled: when there are users who wish to transmit data through a specific node, each of them obtains a share of the service capacity to feed traffic into the queue of the node, whereas the remaining fraction is used to serve the queue; the weight governing this split is a free design parameter. Assume now that jobs arrive at the relay node according to a Poisson process and bring along exponentially distributed amounts of data. One regime of the design parameter has been addressed before; the present paper focuses on the intrinsically harder case of policies that give more weight to serving the queue. Four performance metrics are considered: (i) the stationary workload of the queue, (ii) the queueing delay, that is, the delay of a “packet” (a fluid particle) that arrives at an arbitrary point in time, (iii) the flow transfer delay, and (iv) the sojourn time, that is, the flow transfer time increased by the time it takes before the last fluid particle of the flow is served. We explicitly compute the Laplace transforms of these random variables.
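
The paper's notation was lost in extraction, so only the structure of the model can be restated here. In generic fluid-queue notation (ours, not necessarily the paper's), the workload of the relay queue evolves according to the usual fluid-buffer dynamics:

```latex
% Generic fluid-buffer dynamics (our notation, not necessarily the paper's):
% r(t) is the aggregate rate at which active users feed the queue, s(t) the
% capacity left for draining it; the sharing policy splits the total
% capacity between the two via the free design parameter.
\[
  \frac{\mathrm{d}W(t)}{\mathrm{d}t} =
  \begin{cases}
    r(t) - s(t), & W(t) > 0,\\
    \max\{r(t) - s(t),\, 0\}, & W(t) = 0.
  \end{cases}
\]
```
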
25

Wu, Xiaoliang, Bo Zhang, Gong Chen, and Dong Jin. "A Scalable Quantum Key Distribution Network Testbed Using Parallel Discrete-Event Simulation." ACM Transactions on Modeling and Computer Simulation 32, no. 2 (April 30, 2022): 1–22. http://dx.doi.org/10.1145/3490029.

Abstract:
Quantum key distribution (QKD) has been promoted as a means for secure communications. Although QKD has been widely implemented in many urban fiber networks, the large-scale deployment of QKD remains challenging. Today, researchers extensively conduct simulation-based evaluations of their designs and applications of large-scale QKD networks for cost efficiency. However, existing discrete-event simulators offer models for QKD hardware and protocols based on sequential event execution, which limits the scale of the experiments. In this work, we explore parallel simulation of QKD networks to address this issue. Our contributions lie in the exploration of QKD network characteristics that can be leveraged for parallel simulation, as well as the development of a parallel simulation framework for QKD networks. We also investigate three techniques to improve simulation performance: (1) a ladder-queue-based event list, (2) memoization of computationally intensive quantum state transformation information, and (3) optimization of the network partition scheme for workload balance. The experimental results show that our parallel simulator is 10 times faster than a sequential simulator when simulating a 128-node QKD network. Our linear-regression-based network partition scheme can further accelerate the simulation experiments by up to two times over a randomized network partition scheme.
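
Of the three techniques, memoization is the easiest to illustrate. The Python sketch below caches a toy stand-in for a quantum state transformation with functools.lru_cache; it shows the mechanism only, not the simulator's actual code.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def transform(state: tuple[complex, ...], gate: str) -> tuple[complex, ...]:
    """Toy single-qubit transformation; real simulators do far more work,
    which is exactly why caching repeated requests pays off."""
    if gate == "X":                      # bit flip: swap amplitudes
        return (state[1], state[0])
    raise ValueError(gate)

qubit = (1 + 0j, 0j)
for _ in range(1_000_000):
    result = transform(qubit, "X")       # computed once, then served from cache

print(transform.cache_info())            # hits ≈ 999_999, misses == 1
```
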
26

Liu, Xiang, Jian Lv, Qingsheng Xie, Haisong Huang, and Weixing Wang. "Construction and application of an ergonomic simulation optimization method driven by a posture load regulatory network." SIMULATION 96, no. 7 (May 27, 2020): 623–37. http://dx.doi.org/10.1177/0037549720915261.

Abstract:
The optimization of man–machine systems is a critical component in the research and development of products, but improving optimization accuracy has long been a struggle. This study presents an ergonomic optimization method driven by a posture load regulatory network (PLRN). Considering that differences in work-related musculoskeletal disorders arise from different occupations, human body part data collection is completed using deconstructions of man–machine task sequences, drawing on complex network theory to build the PLRN model. The approach then connects the human body part data with the regulatory network to calculate the cumulative load tendency and the performance of the load group; the results of the analysis support the ranking of human body part loads. In addition, we derive the mapping relationship between the man–machine workload and product engineering modules based on quality function deployment theory, which can reflect the man–machine system problems of products and assist designers in optimizing man–machine systems and making decisions about them. To this end, this paper provides a case study for evaluating the feasibility of the PLRN simulation optimization method. Results show that our method is capable of explaining changes in, and predicting the tendency of, human load during man–machine operation. Compared with the traditional subjective analytic hierarchy process, the PLRN simulation optimization method provides a more accurate and objective evaluation of product ergonomics, and opens new research opportunities in ergonomic optimization.
27

Porter, Donald E., and Emmett Witchel. "Modeling transactional memory workload performance." ACM SIGPLAN Notices 45, no. 5 (May 2010): 349–50. http://dx.doi.org/10.1145/1837853.1693508.

28

Cassenti, Daniel N., Troy D. Kelley, and Richard A. Carlson. "Modeling the Workload-Performance Relationship." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 54, no. 19 (September 2010): 1684–88. http://dx.doi.org/10.1177/154193121005401968.

29

Zhang, X., X. Qu, H. Xue, H. Zhao, T. Li, and D. Tao. "Modeling pilot mental workload using information theory." Aeronautical Journal 123, no. 1264 (June 2019): 828–39. http://dx.doi.org/10.1017/aer.2019.13.

Abstract:
Predicting the mental workload of pilots can provide cockpit designers with useful information to reduce the possibility of pilot error and the cost of training, improve the safety and performance of systems, and increase operator satisfaction. We present a theoretical model of mental workload, using information theory, based on review investigations of how effectively task complexity, visual performance, and pilot experience predict mental workload. The validity of the model was confirmed with data collected from pilot taxiing experiments, performed on taxiing tasks in four different scenarios. Results showed that predicted values from the proposed mental workload model were highly correlated with actual mental workload ratings from the experiments. The findings indicate that the proposed model is effective in predicting pilots' mental workload over time.
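
The abstract does not reproduce the model's equations; the information-theoretic quantity such models build on is Shannon entropy, which measures the information load of uncertain task elements:

```latex
% Shannon entropy of a task element X with n possible states (the paper's
% exact formulation is not given in the abstract):
\[
  H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i \qquad \text{(bits)},
\]
% e.g., monitoring a display element with n equally likely states
% contributes log2(n) bits; higher aggregate information rates correspond
% to higher predicted mental workload.
```
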
30

Wu, Changxu, and Yili Liu. "Queuing Network Modeling of Driver Workload and Performance." IEEE Transactions on Intelligent Transportation Systems 8, no. 3 (September 2007): 528–37. http://dx.doi.org/10.1109/tits.2007.903443.

31

Wu, Changxu, and Yili Liu. "Queuing Network Modeling of Driver Workload and Performance." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50, no. 22 (October 2006): 2368–72. http://dx.doi.org/10.1177/154193120605002204.

32

Li, Bo, Hai Ying Zhou, and De Cheng Zuo. "Workload Performance Characterization and Test Strategy of High-Performance Fault-Tolerant Computers Based on BIBbench." Applied Mechanics and Materials 130-134 (October 2011): 2455–60. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.2455.

Abstract:
It is critical to understand the workload characteristics and resource usage patterns of existing applications to guide the design and development of future large-scale server architectures. In this paper, we analyze the workload performance characteristics of the actual Bank Intermediary Business (BIB) through the BIBmodel and BIBbench design work, and propose BIB performance workload and use case definitions. The analysis and comparison of workloads and use cases illustrate that the workload performance characteristics of BIB differ substantially from those of the TPC benchmarks. With the development of the economy and technology, the requirements on BIB servers for modeling, benchmark development, and the study of workload performance characteristics are increasing.
33

Kent, David, Carl Saldanha, and Sonia Chernova. "Leveraging depth data in remote robot teleoperation interfaces for general object manipulation." International Journal of Robotics Research 39, no. 1 (November 25, 2019): 39–53. http://dx.doi.org/10.1177/0278364919888565.

Abstract:
Robust remote teleoperation of high-degree-of-freedom manipulators is of critical importance across a wide range of robotics applications. Contemporary robot manipulation interfaces primarily utilize a free positioning pose specification approach to independently control each degree of freedom in free space. In this work, we present two novel interfaces, constrained positioning and point-and-click. Both novel approaches incorporate scene information from depth data into the grasp pose specification process, effectively reducing the number of 3D transformations the user must input. The novel interactions are designed for 2D image streams, rather than traditional 3D virtual scenes, further reducing mental transformations by eliminating the controllable camera viewpoint in favor of fixed physical camera viewpoints. We present interface implementations of our novel approaches, as well as free positioning, in both 2D and 3D visualization modes. In addition, we present results of a 90-participant user study evaluation comparing the effectiveness of each approach for a set of general object manipulation tasks, and the effects of implementing each approach in 2D image views versus 3D depth views. The results of our study show that point-and-click outperforms both free positioning and constrained positioning by significantly increasing the number of tasks completed and significantly reducing task failures and grasping errors, while significantly reducing the number of user interactions required to specify poses. In addition, we found that regardless of the interaction approach, the 2D visualization mode resulted in significantly better performance than the 3D visualization mode, with statistically significant reductions in task failures, grasping errors, task completion time, number of interactions, and user workload, all while reducing bandwidth requirements imposed by streaming depth data.
34

Suryana, N., M. S. Rohman, and F. S. Utomo. "Prediction Based Workload Performance Evaluation for Disaster Management Spatial Database." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W10 (September 12, 2018): 187–92. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w10-187-2018.

Abstract:
This paper discusses the implementation of prediction-based workload performance evaluation during disaster management, especially in the response phase, to handle large spatial data in the event of an eruption of the Merapi volcano in Indonesia. The complexity associated with a large spatial database is not the same as that of a conventional database: incoming complex workloads are difficult to handle manually, require longer processing times, and may lead to failure and resource starvation. Based on the incoming workload, this study predicts the associated workload as one of two performance types, OLTP or DSS. From the SQL statements, the DBMS can obtain and record the process, measure the analyzed performance, and capture the workload classifier in the form of DBMS snapshots. Case-Based Reasoning (CBR) optimized with a hash search technique is adopted in this study to evaluate and predict the workload performance of PostgreSQL. The proposed CBR with hash search has been shown to yield better prediction accuracy than other machine learning algorithms such as neural networks and support vector machines, and the evaluation using a confusion matrix shows very good accuracy as well as improvement in execution time. Additionally, the workload data were determined via shortest-path analysis employing the Dijkstra algorithm. The resulting model can predict the incoming workload based on the status of predetermined DBMS parameters; in this way, information is delivered to the DBMS, ensuring that incoming workload information, which is crucial to the smooth operation of PostgreSQL, is available.
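
The essence of hash-optimized case retrieval is replacing a linear similarity scan with an O(1) bucket lookup over discretized snapshot features. The Python sketch below illustrates the idea only; the feature names, discretization, and values are hypothetical, not the paper's.

```python
# Case base: hash key of discretized DBMS snapshot features -> workload class.
case_base: dict[tuple, str] = {}

def key(cpu_util: float, buffer_hit: float, active_conns: int) -> tuple:
    """Discretize snapshot features into a hashable bucket key."""
    return (round(cpu_util, 1), round(buffer_hit, 1), active_conns // 10)

def learn(features: tuple, workload_class: str) -> None:
    case_base[key(*features)] = workload_class

def predict(features: tuple, default: str = "unknown") -> str:
    return case_base.get(key(*features), default)   # O(1) hash lookup

learn((0.82, 0.93, 57), "OLTP")
learn((0.35, 0.41, 8), "DSS")
print(predict((0.79, 0.94, 55)))   # -> "OLTP" (falls into the same bucket)
```
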
35

Wu, Changxu, and Yili Liu. "Usability Makeover of a Cognitive Modeling Tool." Ergonomics in Design: The Quarterly of Human Factors Applications 15, no. 2 (April 2007): 8–14. http://dx.doi.org/10.1177/106480460701500201.

Abstract:
FEATURE AT A GLANCE: In this article, we describe a new software tool that was developed for modeling human performance and mental workload in single- and dual-task situations. The tool features an interactive interface and is based on psychological theory. Using this new modeling tool, in most cases, users can model and predict human performance and workload by clicking buttons to select options without needing to learn a new programming language. They can also visualize the information-processing state of the model during simulation and compare and evaluate the simulated human performance and mental workload for different user interface designs based on the simulation results.
36

Borghetti, Brett J., Joseph J. Giametta, and Christina F. Rusnock. "Assessing Continuous Operator Workload With a Hybrid Scaffolded Neuroergonomic Modeling Approach." Human Factors: The Journal of the Human Factors and Ergonomics Society 59, no. 1 (February 2017): 134–46. http://dx.doi.org/10.1177/0018720816672308.

Abstract:
Objective: We aimed to predict operator workload from neurological data using statistical learning methods to fit neurological-to-state-assessment models.
Background: Adaptive systems require real-time mental workload assessment to perform dynamic task allocations or operator augmentation as workload issues arise. Neuroergonomic measures have great potential for informing adaptive systems, and we combine these measures with models of task demand as well as information about critical events and performance to clarify the inherent ambiguity of interpretation.
Method: We use machine learning algorithms on electroencephalogram (EEG) input to infer operator workload based upon Improved Performance Research Integration Tool (IMPRINT) workload model estimates.
Results: Cross-participant models predict the workload of other participants, statistically distinguishing between 62% of the workload changes. Machine learning models trained from Monte Carlo resampled workload profiles can be used in place of deterministic workload profiles for cross-participant modeling without incurring a significant decrease in model performance, suggesting that stochastic models can be used when limited training data are available.
Conclusion: We employed a novel temporary scaffold of simulation-generated workload profile truth data during the model-fitting process. A continuous workload profile serves as the target to train our statistical machine learning models. Once trained, the workload profile scaffolding is removed and the trained model is used directly on neurophysiological data in future operator state assessments.
Application: These modeling techniques demonstrate how to use neuroergonomic methods to develop operator state assessments, which can be employed in adaptive systems.
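
The cross-participant evaluation scheme described here (train on all participants but one, test on the held-out participant) corresponds to leave-one-group-out validation. A minimal scikit-learn sketch with synthetic stand-in data, purely to show the scheme rather than the paper's features or models:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 6 participants x 200 epochs of EEG-like features,
# with binary high/low workload labels (random here, purely illustrative).
X = rng.normal(size=(1200, 16))
y = rng.integers(0, 2, size=1200)
groups = np.repeat(np.arange(6), 200)   # participant id for each epoch

# Cross-participant validation: train on 5 participants, test on the 6th.
scores = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, groups=groups, cv=LeaveOneGroupOut(),
)
print(scores.round(2), scores.mean().round(2))
```
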
37

Moorthi, M. Narayana, and R. Manjula. "Performance Evaluation and Analysis of Parallel Computers Workload." International Journal of Grid and Distributed Computing 9, no. 1 (January 31, 2016): 127–34. http://dx.doi.org/10.14257/ijgdc.2016.9.1.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Mitchell, Diane Kuhl, and Charneta Samms. "Workload Warriors: Lessons Learned from a Decade of Mental Workload Prediction Using Human Performance Modeling." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 53, no. 12 (October 2009): 819–23. http://dx.doi.org/10.1177/154193120905301212.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
For at least a decade, researchers at the Army Research Laboratory (ARL) have predicted mental workload using human performance modeling (HPM) tools, primarily IMPRINT. During this timeframe their projects have matured from simple models of human behavior to complex analyses of the interactions of system design and human behavior. As part of this maturation process, the researchers learned: 1) to develop a modeling question that incorporates all aspects of workload, 2) to determine when workload is most likely to affect performance, 3) to build multiple models to represent experimental conditions, 4) to connect performance predictions to an overall mission or system capability, and 5) to present results in a clear, concise format. By implementing the techniques they developed from these lessons learned, the researchers have had an impact on major Army programs with their workload predictions. Specifically, they have successfully changed design requirements for future concept Army vehicles, substantiated manpower requirements for fielded Army vehicles, and made Soldier workload the number one item during preliminary design review for a major Army future concept vehicle program. The effective techniques the ARL researchers developed for their IMPRINT projects are applicable to other HPM tools. In addition, they can be used by students and researchers who are doing human performance modeling projects and are confronted with similar problems, to help them achieve project success.
39

Biscarat, Catherine, Tommaso Boccali, Daniele Bonacorsi, Concezio Bozzi, Davide Costanzo, Dirk Duellmann, Johannes Elmsheuser, et al. "System Performance and Cost Modelling in LHC computing." EPJ Web of Conferences 214 (2019): 03019. http://dx.doi.org/10.1051/epjconf/201921403019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The increase in the scale of LHC computing expected for Run 3 and even more so for Run 4 (HL-LHC) over the next ten years will certainly require radical changes to the computing models and the data processing of the LHC experiments. Translating the requirements of the physics programmes into computing resource needs is a complicated process and subject to significant uncertainties. For this reason, WLCG has established a working group to develop methodologies and tools intended to characterise the LHC workloads, better understand their interaction with the computing infrastructure, calculate their cost in terms of resources and expenditure, and assist experiments, sites and the WLCG project in the evaluation of their future choices. This working group started in November 2017 and has about 30 active participants representing experiments and sites. In this contribution we present the activities, the results achieved and the future directions.
40

Smyth, Christopher C. "Modeling Mental Workload and Task Performance for Indirect Vision Driving." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 45, no. 23 (October 2001): 1694–98. http://dx.doi.org/10.1177/154193120104502328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kalantari, Mohammad, and Mohammad Kazem Akbari. "Fault-aware grid scheduling using performance prediction by workload modeling." Journal of Supercomputing 46, no. 1 (March 14, 2008): 15–39. http://dx.doi.org/10.1007/s11227-008-0183-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Rodgers, Mark D., Carol A. Manning, and Charles S. Kerr. "Demonstration of Power: Performance and Objective Workload Evaluation Research." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 38, no. 15 (October 1994): 941. http://dx.doi.org/10.1177/154193129403801502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Federal Aviation Administration (FAA) is developing a method to determine whether future air traffic control systems will provide the benefits to the National Airspace System (NAS) that were proposed when they were conceived. The purpose of this project was to develop a set of objective measures to characterize the productivity of an individual air traffic controller. Software was developed to compute measures of airspace characteristics, controller activities, and air traffic situational characteristics. This software, the Performance and Objective Workload Evaluation Research (POWER) program, computes a set of numerical measures based on routinely collected air traffic control data. The POWER program was written to interface with the Situation Assessment Through Re-creation of Incidents (SATORI) system, originally developed to re-create operational incidents (Rodgers & Duke, 1993). An engineering validation has been conducted, and a psychometric assessment is underway to evaluate the reliability, validity, and utility of the measures; a subset will be chosen to characterize controller taskload and performance. POWER will then be used to measure controller performance and taskload on ATC sectors to be transitioned to future systems. These baseline taskload and performance measures will be compared to taskload and performance measures obtained from future ATC systems after system implementation. POWER will also be used to evaluate alternative future systems display configurations at the Civil Aeromedical Institute (CAMI) Air Traffic Control Future Systems Simulation Laboratory.
43

Zakay, Netanel, and Dror G. Feitelson. "Workload resampling for performance evaluation of parallel job schedulers." Concurrency and Computation: Practice and Experience 26, no. 12 (March 14, 2014): 2079–105. http://dx.doi.org/10.1002/cpe.3240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Gonçalves, Glauber D., Idilio Drago, Alex B. Vieira, Ana Paula Couto da Silva, Jussara M. Almeida, and Marco Mellia. "Workload models and performance evaluation of cloud storage services." Computer Networks 109 (November 2016): 183–99. http://dx.doi.org/10.1016/j.comnet.2016.03.024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Rantanen, Esa M., and Brian R. Levinthal. "Time-Based Modeling of Human Performance." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 12 (September 2005): 1200–1204. http://dx.doi.org/10.1177/154193120504901222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper presents a probabilistic approach to modeling human performance. Instead of focusing on mean performance, the effects of taskload on the distributions of performance variables are examined. From such data, probabilities of given levels of performance can be derived, and measurement methods can be developed that expand the analyses beyond the mean. Results from two experiments, one abstract, the other realistic, are presented in terms of timely performance on required tasks. As taskload increased, the participants were less likely to act on the experimental tasks at the earliest opportunity than under low taskload, resulting in an increase in “too late” errors. Measurement of taskload and performance in temporal terms also allowed for bracketing and making inferences about mental workload, which is not directly measurable.
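A small numeric sketch of this distributional view: rather than comparing mean response times, one can estimate the probability of acting before a deadline under low versus high taskload. The lognormal response-time parameters below are illustrative assumptions, not the experiments' data.

```python
# Estimate P(response time <= deadline) from full response-time distributions
# rather than means; high taskload shifts and spreads the distribution.
import numpy as np

rng = np.random.default_rng(1)
deadline = 4.0  # seconds available before a response counts as "too late"

low_load = rng.lognormal(mean=0.8, sigma=0.3, size=10_000)
high_load = rng.lognormal(mean=1.1, sigma=0.5, size=10_000)

for name, rt in (("low", low_load), ("high", high_load)):
    p_timely = np.mean(rt <= deadline)  # empirical P(RT <= deadline)
    print(f"{name} taskload: P(timely) = {p_timely:.2f}, "
          f"P(too late) = {1 - p_timely:.2f}")
```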
46

Wei, Bing. "Energy Guided and Workload Adaptive Modeling for Live Migration." Advanced Materials Research 748 (August 2013): 982–86. http://dx.doi.org/10.4028/www.scientific.net/amr.748.982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Live migration provides desirable benefits in the field of energy saving, packing services into fewer physical servers while maintaining the performance level. In this paper we present energy-guided and workload-adaptive modeling for live migration. Two models are developed: an energy-guided migration model and a workload-adaptive model. The former selects the best migrating virtual machine (VM) candidate with the minimal energy consumption, while the latter chooses the best destination physical server candidate in terms of both energy and workload characteristics; concerning service quality, the workload-adaptive model also determines the moment of live migration. The experimental results show that our approach achieves significant energy savings and robust live migration.
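The two selection steps can be illustrated with a short Python sketch: choose the VM with the lowest estimated migration energy, then choose the destination host by a combined energy-and-load score. The cost model, weights, and example numbers are assumptions for illustration, not the paper's equations.

```python
# Illustrative two-step selection for energy-aware live migration.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    mem_gb: float      # memory to transfer
    dirty_rate: float  # memory dirtying rate, proxies re-transfer cost

@dataclass
class Host:
    name: str
    power_watts: float  # current power draw
    utilization: float  # CPU utilization in [0, 1]

def migration_energy(vm, joules_per_gb=50.0):
    # Toy model: energy grows with memory size, inflated by the dirty rate
    # because dirtied pages must be re-sent during live migration.
    return vm.mem_gb * (1.0 + vm.dirty_rate) * joules_per_gb

def host_score(host, w_energy=0.5, w_load=0.5):
    # Lower is better: prefer hosts that are low-power and lightly loaded.
    return w_energy * host.power_watts / 400.0 + w_load * host.utilization

vms = [VM("web", 2.0, 0.6), VM("db", 8.0, 0.9), VM("batch", 4.0, 0.1)]
hosts = [Host("h1", 320.0, 0.85), Host("h2", 180.0, 0.40)]

candidate = min(vms, key=migration_energy)  # cheapest VM to move
target = min(hosts, key=host_score)         # best destination host
print(candidate.name, "->", target.name)    # web -> h2
```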
47

Rueb, Justin, Michael Vidulich, and John Hassoun. "Establishing Workload Acceptability: An Evaluation of a Proposed Kc-135 Cockpit Redesign." Proceedings of the Human Factors Society Annual Meeting 36, no. 1 (October 1992): 17–21. http://dx.doi.org/10.1177/154193129203600106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Workload assessment has become a common part of system evaluation. Workload assessment is an important adjunct to performance measurement because the operator is sometimes flexible enough to disguise excessively demanding systems by expending additional effort to overcome optimal information-processing limits. This is often referred to as the problem of determining a “workload redline.” The present paper recounts an evaluation of a proposed redesign of the KC-135 tanker aircraft cockpit. The current KC-135 cockpit has three crew positions: pilot, copilot, and navigator. As part of a proposed redesign, modern automation capabilities to replace the navigator were considered. Ten operational KC-135 crews and two KC-10 crews were studied while performing missions of differing levels of workload in a high-fidelity simulator. Three main classes of data relevant to the redline issue were collected: performance data, Subjective Workload Assessment Technique (SWAT) ratings, and Subjective WORkload Dominance (SWORD) ratings. Evaluation of the performance results demonstrated that the redesigned cockpit could be flown in accordance with regulations. This was a necessary first step, but could not ensure that acceptable workload had been obtained. Taken together, the SWAT and SWORD results strongly suggested that acceptable performance can be achieved at acceptable levels of workload. In conclusion, the present study is a prototypical example of using available assessment tools to determine system acceptability. These tools should be useful for many other system evaluations.
48

Steinberg, Dick, Dan Donohoo, Laura Strater, and Alice Diggs. "Workload Thresholds for Human Performance Models." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 781–85. http://dx.doi.org/10.1177/1541931213601679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Human performance modeling (HPM) can be an effective tool for determining crew designs. Crew design includes determining the number of operators needed, the role of automation, and the member task responsibilities required to operate a system. Without effective measures of performance and thresholds for assessing success, design decisions from HPM will be erroneous. Operator tasks can be assigned and allocated to crew members in a simulation to estimate the workload for each operator during a period of performance. The methods for determining when an operator exceeds workload thresholds create challenges for those using HPM for crew design. Some types of analysis have more clearly defined thresholds. For example, if a military operator has too many tasks to complete to effectively initiate countermeasures between the time they receive a warning and the time the threat arrives, they are overloaded and cannot complete their mission. However, many missions do not have such a severe penalty for not completing the tasks within a given time. For example, pharmacists, satellite managers, traffic managers, and food service workers do not have such stringent task-timing completion thresholds; the penalty for an overloaded food service provider is typically extended wait times rather than the risk of loss of life. For these types of operational situations, determining overload is much more challenging. This paper describes new workload thresholds for operator workflow models, incorporating vigilance effort, the maximum time a crew member can remain fully loaded, and the maximum time worked without a break.
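In the spirit of these thresholds, a minimal sketch of overload checks on a simulated per-minute utilization trace might look as follows; the threshold values (20 minutes fully loaded, 120 minutes without a break) are illustrative assumptions, not the paper's calibrated numbers.

```python
# Flag overload when an operator stays fully loaded too long or works too
# long without a break, given a per-minute workload trace from a simulation.
def longest_run(flags):
    """Length of the longest consecutive run of True values."""
    best = run = 0
    for f in flags:
        run = run + 1 if f else 0
        best = max(best, run)
    return best

def check_overload(utilization, full_load=0.9, idle=0.1,
                   max_full_minutes=20, max_no_break_minutes=120):
    fully_loaded = [u >= full_load for u in utilization]
    working = [u > idle for u in utilization]
    return {
        "max_fully_loaded_run": longest_run(fully_loaded),
        "max_no_break_run": longest_run(working),
        "overloaded": (longest_run(fully_loaded) > max_full_minutes
                       or longest_run(working) > max_no_break_minutes),
    }

# One-minute utilization samples from a hypothetical workflow model run.
trace = [0.5] * 30 + [0.95] * 25 + [0.3] * 30
print(check_overload(trace))  # 25 min fully loaded -> overloaded
```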
49

Reaux, Ray A., Elizabeth D. Murphy, Lisa J. Stewart, Janet L. Gresh, and Karin Bruce. "Building a Modeling and Simulation Analysis Tool to Predict Air Traffic Controller Workload and Performance." Proceedings of the Human Factors Society Annual Meeting 33, no. 2 (October 1989): 52–56. http://dx.doi.org/10.1177/154193128903300211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
To meet expected increases in domestic air traffic, the Federal Aviation Administration (FAA) will increase the level of automation in the domestic air traffic control (ATC) system. There is a need to assess the effects of the increased automation on controller workload and performance. Software-based engineering tools are needed to automate the analysis, allowing designers to identify potential problems early in the system design lifecycle. This paper describes one such tool, the Predictive Air Traffic Controller Analysis Model (PATCAM), a modeling and simulation analysis tool that uses a system operations concept and task attributes database, a controller activities model, a sector environment model and simulation engine, and a workload or performance model to predict the impact of system design changes on controller workload or performance.
50

Payvar, Saman, Maxime Pelcat, and Timo D. Hämäläinen. "A model of architecture for estimating GPU processing performance and power." Design Automation for Embedded Systems 25, no. 1 (January 16, 2021): 43–63. http://dx.doi.org/10.1007/s10617-020-09244-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Efficient usage of heterogeneous computing architectures requires distribution of the workload on the available processing elements. Traditionally, the mapping is based on information acquired from application profiling and utilized in architecture exploration. To reduce the amount of manual work required, statistical application modeling and architecture modeling can be combined with exploration heuristics. While the application modeling side of the problem has been studied extensively, architecture modeling has received less attention. Linear System Level Architecture (LSLA) is a Model of Architecture that aims at separating architectural concerns from algorithmic ones when predicting performance. This work builds on the LSLA model and introduces non-linear semantics, specifically to support GPU performance and power modeling, by also modeling the degree of parallelism. The model is evaluated with three signal processing applications with various workload distributions on a desktop GPU and a mobile GPU. The measured average fidelity of the new model is 93% for performance and 84% for power, which can fit design space exploration purposes.
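A toy sketch of the kind of non-linear estimate described here: predicted execution time stops improving once the workload's degree of parallelism exceeds what the GPU can exploit, and power rises with occupancy. The functions, lane count, and wattages below are illustrative assumptions, not the LSLA model's actual semantics.

```python
# Toy non-linear GPU time/power estimates parameterized by the workload's
# degree of parallelism; effective parallelism saturates at the hardware limit.
def gpu_time_estimate(work_items, cost_per_item_us, parallelism,
                      gpu_lanes=2048, launch_overhead_us=15.0):
    # Effective parallelism is limited both by the application's exposed
    # parallelism and by the hardware's available lanes (non-linear term).
    effective = min(parallelism, gpu_lanes)
    return launch_overhead_us + work_items * cost_per_item_us / effective

def gpu_power_estimate(parallelism, idle_w=20.0, peak_w=180.0, gpu_lanes=2048):
    # Power rises with occupancy toward the board's peak draw.
    occupancy = min(parallelism, gpu_lanes) / gpu_lanes
    return idle_w + occupancy * (peak_w - idle_w)

for p in (256, 2048, 8192):  # sweeping the workload's parallelism
    t = gpu_time_estimate(1_000_000, 0.01, p)
    w = gpu_power_estimate(p)
    print(f"parallelism={p:5d}: {t:8.1f} us, {w:5.1f} W")
```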
