Dissertations / Theses on the topic 'Workload modeling and performance evaluation'

Consult the top 50 dissertations / theses for your research on the topic 'Workload modeling and performance evaluation.'

1

Ali-Eldin, Hassan Ahmed. "Workload characterization, controller design and performance evaluation for cloud capacity autoscaling." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-108398.

Abstract:
This thesis studies cloud capacity auto-scaling, or how to provision and release resources to a service running in the cloud based on its actual demand using an automatic controller. As the performance of server systems depends on the system design, the system implementation, and the workloads the system is subjected to, we focus on these aspects with respect to designing auto-scaling algorithms. Towards this goal, we design and implement two auto-scaling algorithms for cloud infrastructures. The algorithms predict the future load for an application running in the cloud. We discuss the different approaches to designing an auto-scaler combining reactive and proactive control methods, and to be able to handle long running requests, e.g., tasks running for longer than the actuation interval, in a cloud. We compare the performance of our algorithms with state-of-the-art auto-scalers and evaluate the controllers' performance with a set of workloads. As any controller is designed with an assumption on the operating conditions and system dynamics, the performance of an auto-scaler varies with different workloads.
In order to better understand the workload dynamics and evolution, we analyze a 6-year-long workload trace of the sixth most popular Internet website. In addition, we analyze a workload from one of the largest Video-on-Demand streaming services in Sweden. We discuss the popularity of objects served by the two services, the spikes in the two workloads, and the invariants in the workloads. We also introduce a measure for the disorder in a workload, i.e., the amount of burstiness. The measure is based on Sample Entropy, an empirical statistic used in biomedical signal processing to characterize biomedical signals. The introduced measure can be used to characterize workloads based on their burstiness profiles. We compare our introduced measure with the literature on quantifying burstiness in a server workload, and show the advantages of our introduced measure.
To better understand the tradeoffs between using different auto-scalers with different workloads, we design a framework to compare auto-scalers and give probabilistic guarantees on the performance in worst-case scenarios. Using different evaluation criteria and more than 700 workload traces, we compare six state-of-the-art auto-scalers that we believe represent the development of the field in the past 8 years. Knowing that the auto-scalers' performance depends on the workloads, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components: an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider to improve the QoS provided to the customers.
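The burstiness measure described above is based on Sample Entropy. As a rough illustration of that statistic (not the thesis's exact formulation; the template length m and tolerance r below are arbitrary choices), a request-rate time series could be scored like this:

    import numpy as np

    def sample_entropy(series, m=2, r_factor=0.2):
        """SampEn(m, r) of a 1-D series: -ln(A/B), where B counts template
        matches of length m and A counts matches of length m+1, within
        tolerance r = r_factor * std(series) (Chebyshev distance)."""
        x = np.asarray(series, dtype=float)
        r = r_factor * x.std()
        n = len(x)

        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(n - length)])
            count = 0
            for i in range(len(templates)):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist <= r)
            return count

        b = count_matches(m)
        a = count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else float("inf")

    # A smooth trace and a bursty trace should yield clearly different scores.
    rng = np.random.default_rng(0)
    smooth = np.sin(np.linspace(0, 10, 500)) + 0.05 * rng.normal(size=500)
    bursty = rng.exponential(1.0, size=500)
    print(sample_entropy(smooth), sample_entropy(bursty))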
2

Rusnock, Christina. "Simulation-Based Cognitive Workload Modeling and Evaluation of Adaptive Automation Invoking and Revoking Strategies." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5857.

Abstract:
In human-computer systems, such as supervisory control systems, large volumes of incoming and complex information can degrade overall system performance. Strategically integrating automation to offload tasks from the operator has been shown to increase not only human performance but also operator efficiency and safety. However, increased automation allows for increased task complexity, which can lead to high cognitive workload and degradation of situational awareness. Adaptive automation is one potential solution to resolve these issues, while maintaining the benefits of traditional automation. Adaptive automation occurs dynamically, with the quantity of automated tasks changing in real-time to meet performance or workload goals. While numerous studies evaluate the relative performance of manual and adaptive systems, little attention has focused on the implications of selecting particular invoking or revoking strategies for adaptive automation. Thus, evaluations of adaptive systems tend to focus on the relative performance among multiple systems rather than the relative performance within a system. This study takes an intra-system approach specifically evaluating the relationship between cognitive workload and situational awareness that occurs when selecting a particular invoking-revoking strategy for an adaptive system. The case scenario is a human supervisory control situation that involves a system operator who receives and interprets intelligence outputs from multiple unmanned assets, and then identifies and reports potential threats and changes in the environment. In order to investigate this relationship between workload and situational awareness, discrete event simulation (DES) is used. DES is a standard technique in the analysis of systems, and the advantage of using DES to explore this relationship is that it can represent a human-computer system as the state of the system evolves over time. Furthermore, and most importantly, a well-designed DES model can represent the human operators, the tasks to be performed, and the cognitive demands placed on the operators. In addition to evaluating the cognitive workload to situational awareness tradeoff, this research demonstrates that DES can quite effectively model and predict human cognitive workload, specifically for system evaluation. This research finds that the predicted workload of the DES models highly correlates with well-established subjective measures and is more predictive of cognitive workload than numerous physiological measures. This research then uses the validated DES models to explore and predict the cognitive workload impacts of adaptive automation through various invoking and revoking strategies. The study provides insights into the workload-situational awareness tradeoffs that occur when selecting particular invoking and revoking strategies. First, in order to establish an appropriate target workload range, it is necessary to account for both performance goals and the portion of the workload-performance curve for the task in question. Second, establishing an invoking threshold may require a tradeoff between workload and situational awareness, which is influenced by the task's location on the workload-situational awareness continuum. Finally, this study finds that revoking strategies differ in their ability to achieve workload and situational awareness goals. 
For the case scenario examined, revoking strategies based on duration are best suited to improve workload, while revoking strategies based on revoking thresholds are better for maintaining situational awareness.
Ph.D.
Doctorate
Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering
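The invoking and revoking strategies discussed in the abstract above lend themselves to a small discrete event simulation sketch. The toy model below is not the dissertation's model; the task rates, thresholds, sliding window and the 50%-offload assumption are all invented. It invokes automation when a sliding-window utilization crosses an invoking threshold and revokes it after a fixed duration (a duration-based revoking strategy):

    import random

    def simulate(minutes=480, arrival_p=0.35, task_demand=4.0,
                 invoke_at=0.8, revoke_after=15):
        """Toy trace: tasks arrive each minute with probability arrival_p and
        add task_demand minutes of work to the operator's queue. When
        utilization over the last 10 minutes exceeds invoke_at, automation is
        invoked and absorbs half of each incoming task for revoke_after minutes."""
        random.seed(1)
        backlog, busy_log, automation_left, events = 0.0, [], 0, []
        for t in range(minutes):
            if random.random() < arrival_p:
                demand = task_demand
                if automation_left > 0:      # automation offloads half the task
                    demand *= 0.5
                backlog += demand
            work = min(backlog, 1.0)         # operator does at most 1 min of work
            backlog -= work
            busy_log.append(work)
            window = busy_log[-10:]
            utilization = sum(window) / len(window)
            if automation_left == 0 and utilization > invoke_at:
                automation_left = revoke_after
                events.append((t, utilization))
            elif automation_left > 0:
                automation_left -= 1
        return events

    for t, u in simulate():
        print(f"automation invoked at t={t} min (utilization={u:.2f})")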
3

Kreku, J. (Jari). "Early-phase performance evaluation of computer systems using workload models and SystemC." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514299902.

Abstract:
Novel methods and tools are needed for the performance evaluation of future embedded systems due to the increasing system complexity. Systems accommodate a large number of on-terminal and/or downloadable applications offering users numerous services related to telecommunication, audio and video, digital television, internet and navigation. More flexibility, scalability and modularity are expected from execution platforms to support applications. Digital processing architectures will evolve from the current system-on-chips to massively parallel computers consisting of heterogeneous subsystems connected by a network-on-chip. As a consequence, the overall complexity of system evaluation will increase by orders of magnitude. The ABSOLUT performance simulation approach presented in this thesis combats evaluation complexity by abstracting the functionality of the applications with workload models consisting of instruction-like primitives. Workload models can be created from application specifications, measurement results, execution traces, or the source code. The complexity of execution platform models is also reduced, since the data paths of processing elements need not be modelled in detail and data transfers and storage are simulated only from the performance point of view. The modelling approach enables early evaluation since mature hardware or software is not required for the modelling or simulation of complete systems. ABSOLUT is applied to a number of case studies including mobile phone usage, MP3 playback, MPEG4 encoding and decoding, 3D gaming, virtual network computing, and parallel software-defined radio applications. The platforms used in the studies represent both embedded systems and personal computers, and at the same time both currently existing platforms and future designs. The results obtained from simulations are compared to measurements from real platforms, which reveals an average difference of 12% in the results. This exceeds the accuracy requirements expected from virtual-system-based simulation approaches intended for early evaluation.
Tiivistelmä Sulautettujen tietokonejärjestelmien suorituskyvyn arviointi muuttuu yhä haastavammaksi järjestelmien kasvavan kompleksisuuden vuoksi. Järjestelmissä on suuri määrä sovelluksia, jotka tarjoavat käyttäjälle palveluita liittyen esimerkiksi telekommunikaatioon, äänen ja videokuvan toistoon, internet-selaukseen ja navigaatioon. Tästä johtuen suoritusalustoilta edellytetään yhä enemmän joustavuutta, skaalautuvuutta ja modulaarisuutta. Suoritusarkkitehtuurit kehittyvät nykyisistä System-on-Chip (SoC) -ratkaisuista Network-on-Chip (NoC) -rinnakkaistietokoneiksi, jotka koostuvat heterogeenisistä alijärjestelmistä. Sovellusten ja suoritusalustan muodostaman järjestelmän suorituskyvyn arviointiin tarvitaan uusia menetelmiä ja työkaluja, joilla kompleksisuutta voidaan hallita. Tässä väitöskirjassa esitettävä ABSOLUT-simulointimenetelmä pienentää suorituskyvyn arvioinnin kompleksisuutta abstrahoimalla sovelluksen toiminnallisuutta työkuormamalleilla, jotka koostuvat kuormaprimitiiveistä suorittimen käskyjen sijaan. Työkuormamalleja voidaan luoda sovellusten spesifikaatioista, mittaustuloksista, suoritusjäljistä tai sovellusten lähdekoodeista. Suoritusalustoista ABSOLUT-menetelmä käyttää yksinkertaisia kapasiteettimalleja toiminnallisten mallien sijaan: suoritinarkkitehtuurit mallinnetaan korkealla tasolla ja tiedonsiirto ja tiedon varastointi mallinnetaan vain suorituskyvyn näkökulmasta. Menetelmä mahdollistaa aikaisen suorituskyvyn arvioinnin, koska malleja voidaan luoda ja simuloida jo ennen valmiin sovelluksen tai suoritusalustan olemassaoloa. ABSOLUT-menetelmää on käytetty useissa erilaisissa kokeiluissa, jotka sisälsivät esimerkiksi matkapuhelimen käyttöä, äänen ja videokuvan toistoa ja tallennusta, 3D-pelin pelaamista ja digitaalista tiedonsiirtoa. Esimerkeissä käytetiin tyypillisiä suoritusalustoja sekä kotitietokoneiden että sulautettujen järjestelmien maailmasta. Lisäksi osa esimerkeistä pohjautui tuleviin tai keksittyihin suoritusalustoihin. Osa simuloinneista on varmennettu vertaamalla simulointituloksia todellisista järjestelmistä saatuihin mittaustuloksiin. Niiden välillä huomattiin keskimäärin 12 prosentin poikkeama, mikä ylittää aikaisen vaiheen suorituskyvyn simulointimenetelmiltä vaadittavan tarkkuuden
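ABSOLUT abstracts application behaviour into instruction-like workload primitives served by capacity models of the platform. A minimal sketch of that idea (the primitive names, counts and capacity figures are invented for illustration and are not taken from the thesis) could look like:

    # Hypothetical capacity model of a processing element: how many primitives
    # of each kind it can serve per second (figures are illustrative only).
    CAPACITY = {"compute": 200e6, "mem_read": 50e6, "mem_write": 40e6}

    # A workload model is a bag of abstract primitives derived from an
    # application (e.g. from its trace or source code), not real instructions.
    mp3_frame = {"compute": 1.2e6, "mem_read": 0.3e6, "mem_write": 0.1e6}

    def service_time(workload, capacity):
        """Time the platform model needs to serve one workload model instance."""
        return sum(count / capacity[kind] for kind, count in workload.items())

    frames_per_second = 38.28    # assumed playback rate for the example
    load = service_time(mp3_frame, CAPACITY) * frames_per_second
    print(f"predicted utilisation of the processing element: {load:.1%}")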
4

Georgiou, Yiannis. "Contributions for resource and job management in high performance computing." Grenoble, 2010. http://www.theses.fr/2010GRENM079.

Abstract:
Le domaine du Calcul à Haute Performance (HPC) évolue étroitement avec les dernières avancées technologiques des architectures informatiques et des besoins toujours croissants en demande de puissance de calcul. Cette thèse s'intéresse à l'étude d'un type d'intergiciel particulier appelé gestionnaire de tâches et ressources (RJMS) qui est chargé de distribuer la puissance de calcul aux applications dans les plateformes pour le HPC. Le RJMS joue un rôle central du fait de sa position dans la pile logicielle. Les dernières évolutions dans les couches matérielles et dans les applications ont largement augmenté le niveau de complexité auquel doit faire face ce type d'intergiciel. Des problématiques telles que le passage à l'échelle, la prise en compte d'un taux d'activité irrégulier, la gestion des contraintes liées à la topologie du matériel, l'efficacité énergétique et la tolérance aux pannes doivent être particulièrement pris en considération, afin, entre autres, de fournir une meilleure exploitation des ressources à la fois du point de vue global du système ainsi que de celui des utilisateurs. La première contribution de cette thèse est un état de l'art sur la gestion des tâches et des ressources ainsi qu'une analyse comparative des principaux intergiciels actuels et des différentes problématiques de recherche associées. Une métrique importante pour évaluer l'apport d'un RJMS sur une plate-forme est le niveau d'utilisation de l'ensemble du système. On constate parmi les traces d'activité de plusieurs plateformes qu'un grand nombre d'entre elles présentent un taux d'utilisation significativement inférieure à une pleine utilisation. Ce constat est la principale motivation des autres contributions de cette thèse qui portent sur les méthodes d'exploitations de ces périodes de sous-utilisation au profit de la gestion globale du système ou des applications en court d'exécution. Plus particulièrement cette thèse explore premièrement, les moyens d'accroître le taux de calculs utiles dans le contexte des grilles légères en présence d'une forte variabilité de la disponibilité des ressources de calcul. Deuxièmement, nous avons étudié le cas des tâches dynamiques et proposé différentes techniques s'intégrant au RJMS OAR et troisièmement nous évalués plusieurs modes d'exploitation des ressources en prenant en compte la consommation énergétique. Finalement, les évaluations de cette thèse reposent sur une approche expérimentale pour laquelle nous avons proposés des outils et une méthodologie permettant d'améliorer significativement la maîtrise et la reproductibilité d'expériences complexes propre à ce domaine d'étude
High Performance Computing is characterized by the latest technological evolutions in computing architectures and by the increasing needs of applications for computing power. A particular middleware, called the Resource and Job Management System (RJMS), is responsible for delivering computing power to applications. The RJMS plays an important role in HPC since it has a strategic place in the whole software stack, standing between these two layers. However, the latest evolutions in the hardware and application layers have brought new levels of complexity to this middleware. Issues like scalability, management of topological constraints, energy efficiency and fault tolerance have to be particularly considered, among others, in order to provide better system exploitation from both the system and the user points of view. This dissertation provides a state of the art of the fundamental concepts and research issues of Resource and Job Management Systems. It provides a multi-level comparison (concepts, functionalities, performance) of some Resource and Job Management Systems in High Performance Computing. An important metric to evaluate the work of a RJMS on a platform is the observed system utilization. However, studies and logs of production platforms show that HPC systems in general suffer from significant under-utilization rates. Our study deals with these clusters' under-utilization periods by proposing methods to aggregate otherwise unused resources for the benefit of the system or the application. More particularly, this thesis explores RJMS-level mechanisms: 1) for increasing the jobs' useful computation rates in the highly volatile environments of a lightweight grid context, 2) for improving system utilization with malleability techniques and 3) for providing energy-efficient system management through the exploitation of idle computing machines. Experimentation and evaluation in this type of context involve significant complexity due to the inter-dependency of the multiple parameters that have to be kept under control. In this thesis we have developed a methodology based upon real-scale controlled experimentation with submission of synthetic or real workload traces.
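A central metric in this work is the observed system utilization. As a small generic illustration (not Georgiou's tooling), utilization can be estimated from a scheduler log with one record per job, assuming a CSV whose fields loosely resemble the Standard Workload Format:

    import csv

    def utilization(trace_csv, total_cores):
        """Fraction of available core-seconds actually used, computed from a
        job trace with 'start', 'runtime' (seconds) and 'cores' columns.
        The field names are assumptions for this sketch, not a fixed format."""
        used = 0.0
        t_min, t_max = float("inf"), 0.0
        with open(trace_csv) as f:
            for job in csv.DictReader(f):
                start = float(job["start"])
                runtime = float(job["runtime"])
                cores = int(job["cores"])
                used += runtime * cores
                t_min = min(t_min, start)
                t_max = max(t_max, start + runtime)
        return used / ((t_max - t_min) * total_cores)

    # Example (hypothetical file and cluster size):
    # print(f"{utilization('jobs.csv', total_cores=1024):.1%}")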
5

Middlebrooks, Sam E. "The COMPASS Paradigm For The Systematic Evaluation Of U.S. Army Command And Control Systems Using Neural Network And Discrete Event Computer Simulation." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26605.

Abstract:
In today's technology-based society the rapid proliferation of new machines and systems that would have been undreamed of only a few short years ago has become a way of life. Developments and advances especially in the areas of digital electronics and micro-circuitry have spawned subsequent technology based improvements in transportation, communications, entertainment, automation, the armed forces, and many other areas that would not have been possible otherwise. This rapid 'explosion' of new capabilities and ways of performing tasks has been motivated as often as not by the philosophy that if it is possible to make something better or work faster or be more cost effective or operate over greater distances then it must inherently be good for the human operator. Taken further, these improvements typically are envisioned to consequently produce a more efficient operating system where the human operator is an integral component. The formal concept of human-system interface design has only emerged this century as a recognized academic discipline; however, the practice of developing ideas and concepts for systems containing human operators has been in existence since humans started experiencing cognitive thought. An example of a human system interface technology for communication and dissemination of written information that has evolved over centuries of trial and error development is the book. It is no accident that the form and shape of the book of today is as it is. This is because it is a shape and form readily usable by human physiology whose optimal configuration was determined by centuries of effort and revision. This slow evolution was mirrored by a rate of technical evolution in printing and elsewhere that allowed new advances to be experimented with as part of the overall use requirement and need for the existence of the printed word and some way to contain it. Today, however, technology is advancing at such a rapid rate that evolutionary use requirements have no chance to develop alongside the fast pace of technical progress. One result of this recognition is the establishment of disciplines like human factors engineering that have stated purposes and goals of systematic determination of good and bad human system interface designs. However, other results of this phenomenon are systems that get developed and placed into public use simply because new technology allowed them to be made. This development can proceed without a full appreciation of how the system might be used and, perhaps even more significantly, what impact the use of this new system might have on the operator within it. The U.S. Army has a term for this type of activity. It is called 'stove-piped development'. The implication of this term is that a system gets developed in isolation where the developers are only looking 'up' and not 'around'. They are thus concerned only with how this system may work or be used for its own singular purposes as opposed to how it might be used in the larger community of existing systems and interfaces or, even more importantly, in the larger community of other new systems in concurrent development. Some of the impacts for the Army from this mode of system development are communication systems that work exactly as designed but are unable to interface to other communications systems in other domains for battlefield wide communications capabilities. Having communications systems that cannot communicate with each other is a distinct problem in its own right.
However, when developments in one industry produce products that humans use or attempt to use with products from totally separate developments or industries, the Army concept of product development resulting from stove-piped design visions can have significant implications for the operation of each system and the human operator attempting to use it. There are many examples that would illustrate the above concept; however, one that will be explored here is the Army effort to study, understand, and optimize its command and control (C2) operations. This effort is at the heart of a change in the operational paradigm in C2 Tactical Operations Centers (TOCs) that the Army is now undergoing. For the 50 years since World War II the nature, organization, and mode of the operation of command organizations within the Army have remained virtually unchanged. Staffs have been organized on a basic four-section structure and TOCs generally only operate in a totally static mode, with the amount of time required to move them to keep up with a mobile battlefield going up almost exponentially from lower to higher command levels. However, current initiatives are changing all that, and while new vehicles and hardware systems address individual components of the command structures to improve their operations, these initiatives do not necessarily provide the environment in which the human operator component of the overall system can function in a more effective manner. This dissertation examines C2 from a system level viewpoint using a new paradigm for systematically examining the way TOCs operate and then translating those observations into validated computer simulations using a methodological framework. This paradigm is called COmputer Modeling Paradigm And Simulation of Systems (COMPASS). COMPASS provides the ability to model TOC operations in a way that not only includes the individuals, work groups and teams in it, but also all of the other hardware and software systems and subsystems and human-system interfaces that comprise it as well as the facilities and environmental conditions that surround it. Most of the current literature and research in this area focuses on the concept of C2 itself and its follow-on activities of command, control, communications (C3), command, control, communications, and computers (C4), and command, control, communications, computers and intelligence (C4I). This focus tends to address the activities involved with the human processes within the overall system such as individual and team performance and the commander's decision-making process. While the literature acknowledges the existence of the command and control system (C2S), little effort has been expended to quantify and analyze C2Ss from a systemic viewpoint. A C2S is defined as the facilities, equipment, communications, procedures, and personnel necessary to support the commander (i.e., the primary decision maker within the system) for conducting the activities of planning, directing, and controlling the battlefield within the sector of operations applicable to the system. The research in this dissertation is in two phases. The overall project incorporates sequential experimentation procedures that build on successive TOC observation events to generate an evolving data store that supports the two phases of the project. Phase I consists of the observation of heavy maneuver battalion and brigade TOCs during peacetime exercises.
The term 'heavy maneuver' is used to connote main battle forces such as armored and mechanized infantry units supported by artillery, air defense, close air, engineer, and other so-called combat support elements. This type of unit comprises the main battle forces on the battlefield. It is used to refer to what is called the conventional force structure. These observations are conducted using naturalistic observation techniques of the visible functioning of activities within the TOC and are augmented by automatic data collection of such things as analog and digital message traffic, combat reports generated by the computer simulations supporting the wargame exercise, and video and audio recordings where appropriate and available. Visible activities within the TOC include primarily the human operator functions such as message handling activities, decision-making processes and timing, coordination activities, and span of control over the battlefield. They also include environmental conditions, functional status of computer and communications systems, and levels of message traffic flows. These observations are further augmented by observer estimations of such indicators as perceived level of stress, excitement, and level of attention to the mission of the TOC personnel. In other words, every visible and available component of the C2S within the TOC is recorded for analysis. No a priori attempt is made to evaluate the potential significance of each of the activities as their contribution may be so subtle as to only be ascertainable through statistical analysis. Each of these performance activities becomes an independent variable (IV) within the data that is compared against dependent variables (DV) identified according to the mission functions of the TOC. The DVs for the C2S are performance measures that are critical combat tasks performed by the system. Examples of critical combat tasks are 'attacking to seize an objective', 'seizure of key terrain', and 'river crossings'. A list of expected critical combat tasks has been prepared from the literature and subject matter expert (SME) input. After the exercise is over, the success of the critical tasks attempted by the C2S during the wargame is established through evaluator assessments, if available, and/or TOC staff self analysis and reporting as presented during after action reviews. The second part of Phase I includes datamining procedures, including neural networks, used in a constrained format to analyze the data. The term constrained means that the identification of the outputs/DV is known. The process was to identify those IVs that significantly contribute to the constrained DV. A neural network is then constructed where each IV forms an input node and each DV forms an output node. One layer of hidden nodes is used to complete the network. The number of hidden nodes and layers is determined through iterative analysis of the network. The completed network is then trained to replicate the output conditions through iterative epoch executions. The network is then pruned to remove input nodes that do not contribute significantly to the output condition. Once the neural network tree is pruned through iterative executions of the neural network, the resulting branches are used to develop algorithmic descriptors of the system in the form of regression-like expressions. For Phase II these algorithmic expressions are incorporated into the CoHOST discrete event computer simulation model of the C2S.
The programming environment is the commercial programming language Micro Saint™ running on a PC microcomputer. An interrogation approach was developed to query these algorithms within the computer simulation to determine if they allow the simulation to reflect the activities observed in the real TOC to within an acceptable degree of accuracy. The purpose of this dissertation is to introduce the COMPASS concept, a paradigm for developing techniques and procedures to translate as much of the performance of the entire TOC system as possible to an existing computer simulation that would be suitable for analyses of future system configurations. The approach consists of the following steps:
· Naturalistic observation of the real system using ethnographic techniques.
· Data analysis using datamining techniques such as neural networks.
· Development of mathematical models of TOC performance activities.
· Integration of the mathematical models into the CoHOST computer simulation.
· Interrogation of the computer simulation.
· Assessment of the level of accuracy of the computer simulation.
· Validation of the process as a viable system simulation approach.
Ph. D.
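The COMPASS pipeline described above trains a neural network on the observed independent and dependent variables, prunes inputs that contribute little, and summarises the survivors as regression-like expressions. The sketch below is only a loose analogue of that idea, using ordinary least squares with a crude coefficient-based pruning step rather than the dissertation's neural-network procedure; the data and the number of retained inputs are invented:

    import numpy as np

    def prune_and_regress(X, y, keep=3):
        """Standardise the IVs, fit a linear model, keep only the 'keep' inputs
        with the largest absolute standardised coefficients, then refit and
        return (kept_indices, coefficients, intercept)."""
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        coef, *_ = np.linalg.lstsq(np.c_[Xs, np.ones(len(Xs))], y, rcond=None)
        order = np.argsort(-np.abs(coef[:-1]))[:keep]      # prune weak inputs
        Xk = np.c_[X[:, order], np.ones(len(X))]
        final, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        return order, final[:-1], final[-1]

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 8))            # 8 observed performance IVs
    y = 3 * X[:, 1] - 2 * X[:, 4] + rng.normal(scale=0.1, size=200)
    kept, coefs, b0 = prune_and_regress(X, y)
    print("kept IVs:", kept, "coefficients:", np.round(coefs, 2))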
6

Peña, Ortiz Raúl. "Accurate workload design for web performance evaluation." Doctoral thesis, Editorial Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/21054.

Abstract:
New web applications and services, increasingly popular in our daily lives, have completely changed the way users interact with the Web. In less than half a decade, the role played by users has evolved from mere passive consumers of information to active collaborators in the creation of the dynamic content that is typical of today's Web, and this trend is expected to grow and consolidate over time. This dynamic user behavior is one of the main keys to defining workloads that are suitable for accurately estimating the performance of web systems. Nevertheless, the intrinsic difficulty of characterizing user dynamism and applying it in a workload model means that many research works still employ workloads that are not representative of current web navigation. This doctoral thesis focuses on the characterization and reproduction, for performance evaluation studies, of a more realistic type of web workload, able to mimic the behavior of the users of today's Web. The state of the art in workload modeling and generation for web performance studies presents several shortcomings with respect to models and software applications that represent the different levels of user dynamism. This fact motivates us to propose a more precise model and to develop a new workload generator based on this new model. Both proposals have been validated against a traditional approach to web workload generation. To this end, a new experimental environment has been developed with the capability of reproducing traditional and dynamic web workloads, by integrating the proposed generator with a commonly used benchmark. This doctoral thesis also analyzes and evaluates, for the first time to the best of our knowledge, the impact that the use of dynamic workloads has on the metrics...
Peña Ortiz, R. (2013). Accurate workload design for web performance evaluation [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/21054
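The thesis argues that representative web workloads must capture dynamic user behaviour rather than replay static request lists. A toy user-behaviour-graph session generator in that spirit (the states, transition probabilities and think times are all invented for illustration and are not the thesis's model) could be:

    import random

    # Hypothetical navigation graph of a Web 2.0 user session: each state lists
    # (next_state, probability); think times are drawn per state, in seconds.
    GRAPH = {
        "home":   [("browse", 0.6), ("search", 0.3), ("exit", 0.1)],
        "browse": [("browse", 0.4), ("post", 0.3), ("exit", 0.3)],
        "search": [("browse", 0.7), ("exit", 0.3)],
        "post":   [("browse", 0.5), ("exit", 0.5)],
    }
    THINK_TIME = {"home": (2, 8), "browse": (5, 20), "search": (3, 10), "post": (10, 40)}

    def generate_session(seed=None):
        """Return a list of (timestamp, request_type) pairs for one user session."""
        rng = random.Random(seed)
        state, t, requests = "home", 0.0, []
        while state != "exit":
            requests.append((round(t, 1), state))
            t += rng.uniform(*THINK_TIME[state])
            nxt, probs = zip(*GRAPH[state])
            state = rng.choices(nxt, weights=probs)[0]
        return requests

    print(generate_session(seed=7))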
7

Wojciechowski, Josephine Quinn. "Validation of a Task Network Human Performance Model of Driving." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/31713.

Abstract:
Human performance modeling (HPM) is often used to investigate systems during all phases of development. HPM was used to investigate function allocation in crews for future combat vehicles. The tasks required by the operators centered around three primary functions, commanding, gunning, and driving. In initial investigations, the driver appeared to be the crew member with the highest workload. Validation of the driver workload model (DWM) is necessary for confidence in the ability of the model to predict workload. Validation would provide mathematical proof that workload of driving is high and that additional tasks impact the performance. This study consisted of two experiments. The purpose of each experiment was to measure performance and workload while driving and attending to an auditory secondary task. The first experiment was performed with a human performance model. The second experiment replicated the same conditions in a human-in-the-loop driving simulator. The results of the two experiments were then correlated to determine if the model could predict performance and workload changes. The results of the investigation indicate that there is some impact of an auditory task on driving. The model is a good predictor of mental workload changes with auditory secondary tasks. However, predictions of the impact on performance from secondary auditory tasks were not demonstrated in the simulator study. Frequency of the distraction was more influential in the changes of performance and workload than the demand of the distraction, at least under the conditions tested in this study. While the workload numbers correlate with simulator numbers, using the model would require a better understanding of what the workload changes would mean in terms of performance measures.
Master of Science
8

Emeras, Joseph. "Workload Traces Analysis and Replay in Large Scale Distributed Systems." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM081/document.

Abstract:
The author did not provide an abstract in French.
High Performance Computing is preparing for the era of the transition from Petascale to Exascale. Distributed computing systems are already facing new scalability problems due to the increasing number of computing resources to manage. It is now necessary to study these systems in depth and to comprehend their behaviors, strengths and weaknesses in order to better build the next generation. The complexity of managing users' applications on the resources has led to the analysis of the workload the platform has to support, in order to provide users with an efficient service. The need for workload comprehension has led to the collection of traces from production systems and to the proposal of a standard workload format. These contributions enabled the study of numerous such traces. This also led to the construction of several models, based on the statistical analysis of the different workloads from the collection. Until recently, existing workload traces did not enable researchers to study the consumption of resources by the jobs in a temporal way. This is now changing with the need to characterize jobs' consumption patterns. In the first part of this thesis we propose a study of existing workload traces. We then contribute an observation of cluster workloads that considers the jobs' resource consumption over time. This highlights specific and previously unnoticed patterns in the usage of resources by users. Finally, we propose an extension of the former standard workload format that makes it possible to add such temporal consumption without losing the benefit of the existing works. Experimental approaches based on workload models have also served the goal of distributed systems evaluation. Existing models describe the average behavior of observed systems. However, although the study of average behaviors is essential for the understanding of distributed systems, the study of critical cases and particular scenarios is also necessary. Such a study would give a more complete view and understanding of the performance of resource and job management. In the second part of this thesis we propose an experimental method for the performance evaluation of distributed systems based on the replay of production workload trace extracts. These extracts, put back in their original context, make it possible to experiment with changes to the system configuration under an online workload and to observe the results of the different configurations. Our technical contribution in this experimental approach is twofold. We propose a first tool to construct the environment in which the experimentation will take place, and a second set of tools that automate the experiment setup and replay the trace extract within its original context. Finally, these contributions, taken together, enable a better knowledge of HPC platforms. As future work, the approach proposed in this thesis will serve as a basis to further study larger infrastructures.
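Part of this work replays extracts of production traces in their original context. A stripped-down replayer that preserves the original submission schedule can be sketched as follows; the sbatch command line, the job script name and the trace fields are placeholders, not the tools built in the thesis:

    import subprocess
    import time

    def replay(jobs, speedup=1.0):
        """Re-submit a trace extract, keeping the original submission schedule.
        'jobs' is a list of dicts with 'submit' (seconds from trace start),
        'cores' and 'walltime'; the sbatch invocation is only a placeholder."""
        start = time.time()
        for job in sorted(jobs, key=lambda j: j["submit"]):
            delay = job["submit"] / speedup - (time.time() - start)
            if delay > 0:
                time.sleep(delay)            # wait until the original submit instant
            subprocess.run(["sbatch", "-n", str(job["cores"]),
                            f"--time={job['walltime']}", "synthetic_job.sh"],
                           check=False)

    # Hypothetical two-job extract, replayed 10x faster than real time:
    # replay([{"submit": 0, "cores": 4, "walltime": 10},
    #         {"submit": 30, "cores": 16, "walltime": 60}], speedup=10)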
9

Lundin, Mikael. "Simulating the effects of mental workload on tactical and operational performance in tankcrew." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2693.

Abstract:

Battletank crew must perform many diverse tasks during a normal mission: Crewmembers have to navigate, communicate, control on-board systems, and engage with the enemy, to mention a few. As human processing capacity is limited, the crewmembers will find themselves in situations where task requirements, due to the number of tasks and task complexity, exceed their mental capacity. The stress that results from mental overload has documented quantitative and qualitative effects on performance; effects that could lead to mission failure.

This thesis describes a simulation of tankcrew during a mission where mental workload is a key factor to the outcome of mission performance. The thesis work has given rise to a number of results. First, conceptual models have been developed of the tank crewmembers. Mental workload is represented in these models as a behavior moderator, which can be manipulated to demonstrate and predict behavioral effects. Second, cognitive models of the tank crewmembers are implemented as Soar agents, which interact with tanks in a 3D simulated battlefield. The empirical data underlying these models was collected from experiments with tankcrew, and involved first hand observations and task analyses. Afterwards, the model’s behavior was verified against an a priori established behavioral pattern and successfully face validated with two subject matter experts.

10

Kwok, Alice S. L. "Modeling, simulation, and performance evaluation of telecommunication networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0009/MQ41729.pdf.

11

Gutierrez, Mauricio F. "Masonry heater performance evaluation : efficiency, emissions, and thermal modeling /." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020208/.

12

Wang, Shanshan. "Modeling and Performance Evaluation of Spatially-correlated Cellular Networks." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS079.

Abstract:
Dans la modélisation et l'évaluation des performances de la communication cellulaire sans fil, la géométrie stochastique est largement appliquée afin de fournir des solutions plus efficaces et plus précises. Le processus ponctuel de Poisson homogène (H-PPP), est le processus ponctuel le plus largement utilisé pour modéliser les emplacements spatiaux des stations de base (BS) en raison de sa facilité de traitement mathématique et de sa simplicité. Pour les fortes corrélations spatiales entre les emplacements des stations de base, seuls les processus ponctuels (PP) avec inhibitions et attractions spatiales peuvent être utiles. Cependant, le temps de simulation long et la faible aptitude mathématique rendent les PP non-Poisson non adaptés à l'évaluation des performances au niveau du système. Par conséquent, pour surmonter les problèmes mentionnés, nous avons les contributions suivantes dans cette thèse: Premièrement, nous introduisons une nouvelle méthodologie de modélisation et d’analyse de réseaux cellulaires de liaison descendante, dans laquelle les stations de base constituent un processus ponctuel invariant par le mouvement qui présente un certain degré d’interaction entre les points. L'approche proposée est basée sur la théorie des PP inhomogènes de Poisson (I-PPP) et est appelée approche à double amincissement non homogène (IDT). L’approche proposée consiste à approximer le PP initial invariant par le mouvement avec un PP équivalent constitué de la superposition de deux I-PPP conditionnellement indépendants. Les inhomogénéités des deux PP sont créées du point de vue de l'utilisateur type ``centré sur l'utilisateur''. Des conditions suffisantes sur les paramètres des fonctions d'amincissement qui garantissent une couverture meilleure ou pire par rapport au modèle de PPP homogène de base sont identifiées. La précision de l'approche IDT est justifiée à l'aide de données empiriques sur la distribution spatiale des stations de base. Ensuite, sur la base de l’approche IDT, une nouvelle expression analytique traitable du rapport de brouillage moyen sur signal (MISR) des réseaux cellulaires où les stations de base présentent des corrélations spatiales est introduite. Pour les PP non-Poisson, nous appliquons l'approche IDT proposée pour estimer les performances des PP non-Poisson. En prenant comme exemple le processus de points β-Ginibre ( β -GPP), nous proposons de nouvelles fonctions d’approximation pour les paramètres clés dans l’approche IDT afin de modéliser différents degrés d’inhibition spatiale et de prouver que MISR est constant en densification de réseau. Nous prouvons que la performance MISR dans le cas β-GPP ne dépend que du degré de répulsion spatiale, c'est-à-dire β , quelles que soient les densités de BS. Les nouvelles fonctions d'approximation et les tendances sont validées par des simulations numériques.Troisièmement nous étudions plus avant la méta-distribution du SIR à l’aide de l’approche IDT. La méta-distribution est la distribution de la probabilité de réussite conditionnelle compte tenu du processus de points. Nous dérivons et comparons l'expression sous forme fermée pour le b-ème moment dans les cas PP H-PPP et non-Poisson. Le calcul direct de la fonction de distribution cumulative complémentaire (CCDF) pour la méta-distribution n'étant pas disponible, nous proposons une méthode numérique simple et précise basée sur l'inversion numérique des transformées de Laplace. 
L'approche proposée est plus efficace et stable que l'approche conventionnelle utilisant le théorème de Gil-Pelaez. La valeur asymptotique de la CCDF de la méta distribution est calculée dans la nouvelle définition de la probabilité de réussite. En outre, la méthode proposée est comparée à certaines autres approximations et limites, par exemple l’approximation bêta, les bornes de Markov et les liaisons de Paley-Zygmund. Cependant, les autres modèles et limites d'approximation sont comparés pour être moins précis que notre méthode proposée
In the modeling and performance evaluation of wireless cellular communication, stochastic geometry is widely applied in order to provide more efficient and accurate solutions. The homogeneous Poisson point process (H-PPP), with independently and identically distributed variables, is the most widely used point process to model the spatial locations of base stations (BSs) due to its mathematical tractability and simplicity. For strong spatial correlations between the locations of BSs, only point processes (PPs) with spatial inhibitions and attractions can help. However, the long simulation time and weak mathematical tractability make non-Poisson PPs unsuitable for system-level performance evaluation. Therefore, to overcome the mentioned problems, we make the following contributions in this thesis. First, we introduce a new methodology for modeling and analyzing downlink cellular networks, where the base stations constitute a motion-invariant point process that exhibits some degree of interaction among the points. The proposed approach is based on the theory of inhomogeneous Poisson PPs (I-PPPs) and is referred to as the inhomogeneous double thinning (IDT) approach. The proposed approach consists of approximating the original motion-invariant PP with an equivalent PP made of the superposition of two conditionally independent I-PPPs. The inhomogeneities of both PPs are created from the point of view of the typical user. The inhomogeneities are mathematically modeled through two distance-dependent thinning functions, and a tractable expression of the coverage probability is obtained. Sufficient conditions on the parameters of the thinning functions that guarantee better or worse coverage compared with the baseline homogeneous PPP model are identified. The accuracy of the IDT approach is substantiated with the aid of empirical data for the spatial distribution of the BSs. Then, based on the IDT approach, a new tractable analytical expression of the mean interference-to-signal ratio (MISR) of cellular networks where BSs exhibit spatial correlations is introduced. Second, for non-Poisson PPs, we apply the proposed IDT approach to approximate their performance. Taking the β-Ginibre point process (β-GPP) as an example, we propose new approximation functions for the key parameters in the IDT approach to model different degrees of spatial inhibition, and we prove that the MISR for the β-GPP is constant under network densification with our proposed approximation functions. We prove that the MISR performance in the β-GPP case depends only on the degree of spatial repulsion, i.e., β, regardless of the BS density. We also prove that with the increase of β or γ (given fixed γ or β, respectively), the corresponding MISR for the β-GPP decreases. The new approximation functions and the trends are validated by numerical simulations. Third, we further study the meta distribution of the SIR with the help of the IDT approach. The meta distribution is the distribution of the conditional success probability given the point process. We derive and compare the closed-form expression for the b-th moment in the H-PPP and non-Poisson PP cases. Since the direct computation of the complementary cumulative distribution function (CCDF) of the meta distribution is not available, we propose a simple and accurate numerical method based on the numerical inversion of Laplace transforms. The proposed approach is more efficient and stable than the conventional approach using the Gil-Pelaez theorem.
The asymptotic value of the CCDF of the meta distribution is computed under the new definition of the success probability. Furthermore, the proposed method is compared with some other approximations and bounds, e.g., the beta approximation, Markov bounds and the Paley-Zygmund bound; these other approximation models and bounds turn out to be less accurate than our proposed method.
13

Lu, Yao. "Propagation Modeling and Performance Evaluation in an Atrium Building." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177375.

Abstract:
In this thesis electromagnetic wave propagation is investigated in an indoor environment. The indoor environment is a furnished office building with corridors, corners and rooms. In particular, there is an atrium through the center of the building. For the study, measurements from the real building in the 2.1 GHz frequency band were available. One objective is to design a propagation model that should be simple but reflect the trend of the propagation measurements. Furthermore, a system performance evaluation is carried out based on the implemented model. The proposed 3D model is a combination of the Free Space Path Loss model, the Keenan-Motley model and the recursive diffraction model. The channel predictions from the 2D Keenan-Motley algorithm are quite different from the measurements. Therefore, the 3D Keenan-Motley algorithm is designed to depict the atrium effect and speed up the simulation at the same time. Besides, a buttery radiation diagram is created to mimic the Kathrein 80010709 antenna installed in the building. Finally, a diffracted path is added to improve the received signal strength for the users around the atrium areas. With all the above procedures, the final results from the model are in good quantitative agreement with the measurement data. With the implemented propagation model, a further analysis of system performance on the Distributed Antenna System (DAS) is performed. A comparison of the system capacity between the closed building and the atrium building is conducted, showing that the former benefits more when the number of cells increases. The reason is that the atrium cells suffer severe interference from neighboring cells during high traffic demand scenarios. Further cell configurations then show that the number of cells, the geometry performance and the balance of the user fraction should be considered to improve the system capacity.
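The proposed model combines free-space path loss, Keenan-Motley-style wall losses and a diffraction term. A bare-bones version of the first two ingredients (the per-wall loss values are placeholders, and the diffraction and antenna-pattern parts are omitted) is:

    import math

    def path_loss_db(distance_m, freq_hz=2.1e9, walls=(), wall_loss_db=None):
        """Free-space path loss plus a fixed per-wall penetration loss for each
        wall type crossed, in the spirit of the Keenan-Motley model."""
        wall_loss_db = wall_loss_db or {"light": 3.0, "concrete": 10.0}  # assumed values
        c = 299_792_458.0
        fspl = 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)
        return fspl + sum(wall_loss_db[w] for w in walls)

    # A link 30 m away crossing two light walls and one concrete wall:
    print(f"{path_loss_db(30, walls=('light', 'light', 'concrete')):.1f} dB")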
14

Evans, Jason P. "Dynamics modeling and performance evaluation of an autonomous underwater vehicle." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=19581.

Abstract:
This thesis describes the creation of a dynamics model of an autonomous underwater vehicle. Motion equations are integrated to obtain the position and velocity of the vehicle. External forces acting on the vehicle, such as hull and control plane hydrodynamic forces, are predicted for the full 360° angle of attack range. This enables the simulation of high angle of attack situations. An accurate through-body thruster model is also incorporated into the simulation. The vehicle model is validated using experimental turning diameters of the ARCS vehicle.
15

Zhao, Jian. "Performance modeling and evaluation of digital video over wireless networks /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20ZHAO.

Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2003.
Vita. Includes bibliographical references (leaves 149-160). Also available in electronic version. Access restricted to campus users.
16

Vattyam, Priya. "Performance Modeling Methodologies using PDL+ and ARC." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin962373966.

17

Trevena, Samuel. "Developing and Evaluating a Tool for Automating the Process of Modelling Web Server Workloads : An Explorative Feasibility Study in the Field of Performance Testing." Thesis, Karlstads universitet, Handelshögskolan, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-47784.

Abstract:
As the Internet has become increasingly important for people and for businesses that rely on it to create revenue, Internet unavailability can have major consequences. A common cause of unavailability is performance-related problems. In order to evade such problems, the system's quality characteristics in terms of performance need to be evaluated, which is commonly done with performance testing. When performance tests are conducted, the system under test is driven by an artificial workload or a sample of its natural workload while performance-related metrics are measured. The workload is a very important aspect of performance testing, as the measured performance metrics depend directly on the workload processed by the system under test. In order to conduct performance tests with representative workloads, the concept of workload modelling should be considered. Workload models attempt to model all relevant features of the workload experienced by a system within a given period of time. A workload model is created by a set of consecutive activities that together constitute a process. This explorative feasibility study focuses on exploring, describing and evaluating the feasibility of a tool for automating the process of modelling Web server workloads for performance testing. A literature review was conducted in this student thesis, from which a research model was developed that describes the key factors in the process of modelling Web server workloads for performance testing, the relationships between these factors, and their variables. The key factors consist of four sub-processes, and the relationships between them are the sequence flow, i.e. the order of events in the process. The process is initiated by the sub-process Establish Workload Data, where the workload data are retrieved and sanitised. The workload data are then categorised into homogeneous groups called workload entities, which is done in the Identify Workload Entities sub-process. Each workload entity has some associated workload attributes that are identified in the Identify Workload Attributes sub-process. In the last sub-process, Represent Workload, statistical methods, such as standard deviation and arithmetic mean, are applied in order to represent the workload in graphs and tables. Based on the research model, and in order to evaluate the feasibility of a tool, a prototype was developed. The feasibility was evaluated through analysis of the primary empirical data, collected from an interview with a field expert who had tested the prototype. The analysis indicated that developing a tool for automating the process of modelling Web server workloads for performance testing is indeed feasible, although some aspects should be addressed if such a tool were to be realised. The analysis implied that an important aspect of modelling Web server workloads for performance testing is that the modeller must be in control of what is being modelled. The prototype that was developed is highly static, i.e. it is not possible to create customised workload models. Therefore, if the tool is going to be realised, functionality for customising workload models should be added to the tool. Another important aspect that should be addressed if the tool is going to be realised is the graphical representation of multiple workload attributes. The analysis indicated that there might be correlations between workload attributes.
It is therefore important to be able to graphically represent multiple workload attributes together so that such correlations can be identified.
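The four sub-processes of the research model map naturally onto a small script. The sketch below, which assumes a Common-Log-Format-style access log and is not the prototype described in the thesis, groups requests into entities by URL path and represents one attribute (the inter-arrival time) with the arithmetic mean and standard deviation mentioned above:

    import re
    import statistics
    from collections import defaultdict
    from datetime import datetime

    LINE = re.compile(r'\[(?P<ts>[^\]]+)\] "(?:GET|POST) (?P<path>\S+)')

    def build_workload_model(log_path):
        arrivals = defaultdict(list)              # entity -> request timestamps
        with open(log_path) as log:               # Establish Workload Data
            for line in log:
                m = LINE.search(line)
                if not m:
                    continue                      # sanitise: skip malformed lines
                ts = datetime.strptime(m["ts"], "%d/%b/%Y:%H:%M:%S %z")
                arrivals[m["path"]].append(ts.timestamp())
        model = {}                                # Identify Entities / Attributes
        for entity, times in arrivals.items():
            gaps = [b - a for a, b in zip(times, times[1:])]
            if gaps:                              # Represent Workload
                model[entity] = {"requests": len(times),
                                 "mean_gap_s": statistics.mean(gaps),
                                 "stdev_gap_s": statistics.pstdev(gaps)}
        return model

    # Hypothetical usage:
    # for entity, attrs in build_workload_model("access.log").items():
    #     print(entity, attrs)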
18

Stojanova, Marija. "Performance Modeling of IEEE 802.11 WLANs." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1338.

Abstract:
Il est pratiquement impossible de nommer toutes les sphères de la société profondément modifiées par Internet ou de mesurer son impact à l'intérieur de chaque sphère. Cependant, un consensus parmi les experts des réseaux est que cette influence ne fera que grandir dans les années à venir. Dans la communauté des réseaux, nous nous attendons à un trafic de plus en plus important qui dépendra plus que jamais des technologies sans fil, en particulier des réseaux locaux sans fil (WLAN). L'augmentation du volume de trafic signifie que les organismes de standardisation, les fournisseurs et les architectes de réseau doivent trouver des solutions pour accroître la couverture et la capacité des réseaux WLAN. Compte tenu de la nature distribuée du partage des ressources dans les WLAN basés sur 802.11, ces solutions peuvent devenir inefficaces lorsqu'elles consistent simplement à déployer un plus grand nombre de ressources. Par conséquent, la configuration et le déploiement appropriés jouent un rôle important dans les performances des réseaux locaux sans fil. Dans cette thèse, nous proposons des approches stochastiques de modélisation de performance pour les réseaux locaux sans fil. Tous nos modèles sont conçus pour les WLAN non saturés avec des topologies arbitraires. Les trois premiers modèles sont basés sur des chaînes de Markov et modélisent le réseau à un niveau d'abstraction élevé. Chaque nouveau modèle affine son prédécesseur en étant à la fois conceptuellement plus simple et plus proche du système réel. Notre dernier modèle Markovien est conçu sur mesure pour les réseaux WLAN IEEE 802.11ac et intègre l'agrégation de trames et de canaux ainsi que les index MCS. La fidélité croissante au système et la précision de nos modèles nous ont permis de proposer plusieurs applications différentes pour l’évaluation des performances et la configuration de WLAN gérés de manière centralisée. Nous nous intéressons en particulier aux questions d’équité et de maximisation du débit et proposons plusieurs approches pouvant aider un administrateur de réseau à configurer correctement un réseau en fonction d’un certain indicateur de performance. Notre dernière approche de modélisation est profondément différente et intègre une méthode de traitement de signal de graphe (GSP) pour la modélisation des performances des réseaux locaux sans fil. Le besoin d'une telle modélisation découle principalement de problèmes d'évolutivité, car même si la précision de nos modèles Markoviens les rend appropriés pour de nombreuses applications, leur manque d'évolutivité peut parfois être considéré comme restrictif. Nous montrons que cette approche de boîte noire peut être utilisée avec succès pour modéliser le réseau et qu’intégrer des connaissances spécifiques au WLAN peut aider à augmenter la précision du modèle. Le dernier chapitre de ce manuscrit détaille les contributions et les limites de chaque approche de modélisation que nous avons proposée, y compris une discussion sur les travaux futurs potentiels et sur les pratiques générales en matière d'évaluation de performances des réseaux locaux sans fil
It is virtually impossible to name all the spheres of society that have been profoundly changed by the widespread of the Internet or to measure its impact inside each sphere. However, a consensus opinion of network experts is that this influence will only grow in the coming years. In the networking community, we are expecting an ever-increasing amount of traffic that will more than ever depend on wireless technologies, specifically Wireless Local Area Networks (WLANs). The increase of traffic volume means that standardization organisms, vendors, and network architects need to come up with solutions for more coverage and capacity for WLANs. Given the distributed nature of resource sharing in 802.11-based WLANs, these solutions can become inefficient when they amount to simply deploying a larger number of resources. Thus, proper configuration and deployment of the networks is crucial to their performance. In this thesis, we propose stochastic performance modeling approaches for WLANs. All our models are designed for unsaturated WLANs with arbitrary topologies. The first three models are based on Markov chains and model the network at a high level of abstraction. Each new model refines its predecessor by being conceptually simpler and at the same time closer to the real system. Our last Markovian model is tailor-made for IEEE 802.11ac WLANs and incorporates channel bonding, MCS indexes, and frame aggregation. The increasing system fidelity of the models and their precision have allowed us to propose several different applications regarding the performance evaluation and configuration of centrally-managed WLANs. In particular, we are interested in issues of fairness and throughput maximization and propose several approaches that can help a network administrator to properly configure a network given a certain performance metric. Our last modeling approach is profoundly different and incorporates a Graph Signal Processing (GSP) method for the performance modeling of WLANs. The need for such modeling arises mostly from scalability issues, as even though our Markovian models' accuracy makes them suitable for many applications, their lack of scalability can sometimes be seen as restrictive. We show that this black box approach can be successfully used for modeling the network and that incorporating WLAN-specific knowledge can help increase the accuracy of the model. The final chapter of this manuscript details the contributions and limitations of each modeling approach we proposed, including a discussion on potential future works and on general practices in the performance evaluation of WLANs
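As a toy illustration of the Markov-chain modelling approach summarised above (and not the thesis's actual models), the sketch below computes the stationary distribution of a three-state continuous-time Markov chain in which two interfering WLAN links can never transmit at the same time, and derives a simple throughput proxy from it. All rates are assumed values.

```python
import numpy as np

lam_a, lam_b = 0.4, 0.6    # channel-access (arrival) rates of links A and B
mu_a, mu_b = 2.0, 2.0      # transmission completion rates

# States: 0 = channel idle, 1 = link A transmitting, 2 = link B transmitting.
Q = np.array([
    [-(lam_a + lam_b), lam_a,  lam_b],
    [mu_a,             -mu_a,  0.0 ],
    [mu_b,             0.0,   -mu_b],
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance equation
# with the normalisation constraint.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

# Long-run fraction of time each link holds the channel, and a simple
# throughput proxy (successful transmissions per unit time).
print("P(idle), P(A busy), P(B busy):", pi)
print("throughput A:", pi[1] * mu_a, " throughput B:", pi[2] * mu_b)
```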
19

Middlebrooks, Sam E. "Experimental Interrogation Of Network Simulation Models Of Human Task And Workload Performance In A U.S. Army Tactical Operations Center." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/34429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis research is involved with the development of new methodologies for enhancing the experimental use of computer simulations to optimize predicted human performance in a work domain. Using a computer simulation called Computer modeling Of Human Operator System Tasks (CoHOST) to test the concepts in this research, methods are developed that are used to establish confidence limits and significance thresholds by having the computer model self-report its limits. These methods, along with experimental designs that are tailored to the use of computer simulation instead of human-subject-based research, are used in the CoHOST simulation to investigate the U.S. Army battalion level command and control work domain during combat conditions and develop recommendations about that domain based on the experimental use of CoHOST with these methodologies. Further, with the realization that analytical results showing strictly numerical data do not always satisfy the need for understanding by those who could most benefit from the analysis, the results are further interpreted in accordance with a team performance model and the CoHOST analysis results are mapped to it according to macroergonomic and team performance concepts. The CoHOST computer simulation models were developed based on Army needs stemming from the Persian Gulf war. They examined human mental and physical performance capabilities resulting from the introduction of a new command and control vehicle with modernized digital communications systems. Literature searches and background investigations were conducted, and the CoHOST model architecture was developed based on a taxonomy of human performance. A computer simulation design was implemented with these taxonomy-based descriptors of human performance in the military command and control domain using the commercial programming language MicroSaint™. The original CoHOST development project produced results that suggested that automation alone does not necessarily improve human performance. The CoHOST models were developed to answer questions about whether human operators could operate effectively in a specified work domain. From an analytical point of view this satisfied queries being made by the developers of that work domain. However, with these completed models available, the intriguing possibility now exists to allow an investigation of how to optimize that work domain to maximize predicted human performance. By developing an appropriate experimental design that allows evaluative conditions to be placed on the simulated human operators in the computer model rather than live human test subjects, a series of computer runs are made to establish test points for identified dependent variables against specified independent variables. With these test points a set of polynomial regression equations are developed that describe the performance characteristics according to these dependent variables of the human operator in the work domain simulated in the model. The resulting regression equations are capable of predicting any outcome the model can produce. The optimum values for the independent variables are then determined that produce the maximum predicted human performance according to the dependent variables.
The conclusions from the CoHOST example in this thesis complement the results of the original CoHOST study with the prediction that the primary attentional focus of the battalion commander during combat operations is on establishing and maintaining an awareness and understanding of the situational picture of the battlefield he is operating upon. Being able to form and sustain an accurate mental model of this domain is the predicted predominant activity and drives his ability to make effective decisions and communicate those decisions to the other members of his team and to elements outside his team. The potential specific benefit of this research to the Army is twofold. First, the research demonstrates techniques and procedures that can be used without any required modifications to the existing computer simulations that allow significant predictive use to be made of the simulation beyond its original purpose and intent. Second, the use of these techniques with CoHOST is developing conclusions and recommendations from that simulation that Army force developers can use with their continuing efforts to improve and enhance the ability of commanders and other decision makers to perform as new digital communications systems and procedures are producing radical changes to the paradigm that describes the command and control work domain. The general benefits beyond the Army domain of this research fall into the two areas of methodological improvement of simulation based experimental procedures and in the actual application area of the CoHOST simulation. Tailoring the experimental controls and development of interrogation techniques for the self-reporting and analysis of simulation parameters and thresholds are topics that bode for future study. The CoHOST simulation, while used in this thesis as an example of new and tailored techniques for computer simulation based research, has nevertheless produced conclusions that deviate somewhat from prevailing thought in military command and control. Refinement of this simulation and its use in an even more thorough simulation based study could further address whether the military decision making process itself or contributing factors such as development of mental models for understanding of the situation is or should be the primary focus of team decision makers in the military command and control domain.
Master of Science
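As a rough sketch of the response-surface idea described in the abstract above, the example below fits a quadratic polynomial regression to test points produced by a stand-in "simulation" function and then searches the fitted surface for the predicted optimum. The stand-in function, variable names and value ranges are assumptions for illustration; they are not CoHOST or its actual variables.

```python
import numpy as np
from itertools import product

def simulated_performance(x1, x2):
    """Hypothetical stand-in for one simulation run (e.g. a task performance score)."""
    return 10.0 - (x1 - 3.0) ** 2 - 0.5 * (x2 - 7.0) ** 2 + np.random.normal(0.0, 0.1)

def terms(x1, x2):
    """Quadratic response-surface terms for two independent variables."""
    return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

# Run the "simulation" at a grid of test points.
levels = np.linspace(1.0, 9.0, 5)
X = np.array([terms(a, b) for a, b in product(levels, levels)])
y = np.array([simulated_performance(a, b) for a, b in product(levels, levels)])

# Least-squares fit of the polynomial regression coefficients.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Search the fitted surface for the setting that maximises predicted performance.
grid = np.linspace(1.0, 9.0, 81)
best = max((float(np.dot(beta, terms(a, b))), a, b) for a, b in product(grid, grid))
print("predicted optimum %.2f at x1=%.1f, x2=%.1f" % best)
```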
20

Nouri, Ayoub. "Rigorous System-level Modeling and Performance Evaluation for Embedded System Design." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM008/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the present work, we tackle the problem of modeling and evaluating performance in the context of embedded systems design. These have become essential for modern societies and have experienced an important evolution. Due to the growing demand on functionality and programmability, software solutions have gained in importance, although known to be less efficient than dedicated hardware. Consequently, considering performance has become a must, especially with the generalization of resource-constrained devices. We present a rigorous and integrated approach for system-level performance modeling and analysis. The proposed method enables faithful high-level modeling, encompassing both functional and performance aspects, and allows for rapid and accurate quantitative performance evaluation. The approach is model-based and relies on the SBIP formalism for stochastic component-based modeling and formal verification. We use statistical model checking for analyzing performance requirements and introduce a stochastic abstraction technique to enhance its scalability. Faithful high-level models are built by calibrating functional models with low-level performance information using automatic code generation and statistical inference. We provide a tool-flow that automates most of the steps of the proposed approach and illustrate its use on a real-life case study for image processing. We consider the design and mapping of a parallel version of the HMAX models algorithm for object recognition on the STHORM many-core platform. We explored timing aspects and the obtained results show not only the usability of the approach but also its pertinence for taking well-founded decisions in the context of system-level design
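To make the statistical model checking step concrete, the sketch below estimates the probability that a bounded property holds by Monte Carlo simulation, using the additive Chernoff-Hoeffding bound to fix the number of runs for a chosen precision and confidence. The simulated model is a toy stand-in (a ten-stage latency sum), not an SBIP model, and the property and parameters are assumptions.

```python
import math
import random

def trace_satisfies_property():
    """Hypothetical stand-in: one stochastic simulation returning True/False,
    e.g. 'end-to-end processing latency stays below a deadline'."""
    latency = sum(random.expovariate(1.0 / 2.0) for _ in range(10))  # 10 stages
    return latency < 25.0

def smc_estimate(epsilon=0.01, delta=0.05):
    # Additive Chernoff-Hoeffding bound: P(|p_hat - p| > epsilon) <= delta.
    n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    hits = sum(trace_satisfies_property() for _ in range(n))
    return hits / n, n

p_hat, n = smc_estimate()
print(f"estimated P(property) = {p_hat:.3f} from {n} simulations")
```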
21

Gonzalez, Damian Mark. "Performance modeling and evaluation of topologies for low-latency SCI systems." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ane5950/thesis%5F001115.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--University of Florida, 2000.
Title from first page of PDF file. Document formatted into pages; contains ix, 81 p.; also contains graphics. Abstract copied from student-submitted information. Vita. Includes bibliographical references (p. 79-80).
22

Wang, Peng. "STOCHASTIC MODELING AND UNCERTAINTY EVALUATION FOR PERFORMANCE PROGNOSIS IN DYNAMICAL SYSTEMS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1499788641069811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Nduonyi, Moses Asuquo. "Integrated reservoir study of the Monument Northwest field: a waterflood performance evaluation." Texas A&M University, 2007. http://hdl.handle.net/1969.1/85867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An integrated full-field study was conducted on the Monument Northwest field located in Kansas. The purpose of this study was to determine the feasibility and profitability of a waterflood using numerical simulation. Outlined in this thesis is a methodology for a deterministic approach. The data history of the wells in the field, beginning from the spud date, was gathered and analyzed into the information necessary for building an upscaled reservoir model of the field. A means of increasing production and recovery from the field via a waterflood was implemented. Usually, at the time of such a redevelopment plan or scheme to improve field performance, a tangible amount of information about the reservoir is already known. Therefore it is very useful to incorporate knowledge about the field when predicting its future behavior under certain conditions. The need for an integrated reservoir study cannot be over-emphasized. Information known about the reservoir from different segments of field exploration and production is coupled and harnessed into developing a representative 3D reservoir model of the field. An integrated approach is used in developing a 3D reservoir model of the Monument Northwest field, and a waterflood is evaluated and analyzed by a simulation of the reservoir model. From the results of the reservoir simulation it was concluded that the waterflood project for the Monument Northwest field is a viable and economic project.
24

Wickramarathna, Thamali Dilusha N. "Modeling and Performance Evaluation of a Delay and Marking Based Congestion Controller." Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_theses/101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Achieving high performance in high capacity data transfers over the Internet has long been a daunting challenge. The current standard of Transmission Control Protocol (TCP), TCP Reno, does not scale efficiently to higher bandwidths. Various congestion controllers have been proposed to alleviate this problem. Most of these controllers primarily use marking/loss and/or delay as distinct feedback signals from the network, and employ separate data transfer control strategies that react to either marking/loss or delay. While these controllers have achieved better performance compared to the existing TCP standard, they suffer from various shortcomings. Thus, in our previous work, we designed a congestion control scheme that jointly exploits both delay and marking: D+M (Delay Marking) TCP. We demonstrated that D+M TCP can adapt to highly dynamic network conditions and infrastructure using ns-2 simulations. Yet, an analytical explanation of D+M TCP was needed to explain why it works as observed. Furthermore, D+M TCP needed extensive simulations in order to assess its performance, especially in relation to other high-speed protocols. Therefore, we propose a model for D+M TCP based on distributed resource optimization theory. Based on this model, we argue that D+M TCP solves the network resource allocation problem in an optimal manner. Moreover, we analyze the fairness properties of D+M TCP, and its coexistence with different queue management algorithms. The resource optimization interpretation of D+M TCP allows us to derive the equilibrium values of the controller's steady state, and we use ns-2 simulations to verify that the protocol indeed attains the analytical equilibria. Furthermore, the dynamics of D+M TCP are also explained in a mathematical framework, and we show that D+M TCP achieves the analytical predictions. Modeling the dynamics gives insights into the stability and convergence properties of D+M TCP, as we outline in the thesis. Moreover, we demonstrate that D+M TCP is able to achieve excellent performance in a variety of network conditions and infrastructure. D+M TCP achieved performance superior to most of the existing high-speed TCP versions in terms of link utilization, RTT fairness, goodput, and oscillatory behavior, as confirmed by comparative ns-2 simulations.
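The distributed resource optimization view mentioned in this abstract can be illustrated with a generic primal-algorithm sketch from network utility maximization, shown below. It is not D+M TCP's actual update rule: two flows sharing one link adapt their rates to a congestion price, which a real protocol would infer from delay and/or marking feedback; the capacity, weights and gains are assumed values.

```python
capacity = 10.0            # shared link capacity (Mb/s), an assumed value
weights = [1.0, 2.0]       # per-flow utility weights, U_i(x) = w_i * log(x)
rates = [0.5, 0.5]         # initial sending rates
kappa = 0.05               # source adaptation gain

def price(total_rate):
    # Congestion price of the link; in a real protocol this would be inferred
    # from delay and/or marking feedback rather than computed directly.
    return max(0.0, total_rate - capacity)

for _ in range(500):
    p = price(sum(rates))
    rates = [max(0.01, x + kappa * (w - x * p)) for x, w in zip(rates, weights)]

# At the fixed point w_i = x_i * p, so the flows share the link in a
# weighted proportionally fair way (roughly 1:2 here).
print("equilibrium rates:", [round(x, 2) for x in rates])
```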
25

Mishra, Amitabh. "Modeling and Performance Evaluation of Wireless Body Area Networks for Healthcare Applications." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439281330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Izurieta, Iván Carrera. "Performance modeling of MapReduce applications for the cloud." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/99055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the last years, Cloud Computing has become a key technology that made it possible to run applications without needing to deploy a physical infrastructure, with the advantage of lowering costs to the user by charging only for the computational resources used by the application. The challenge with deploying distributed applications in Cloud Computing environments is that the virtual machine infrastructure should be planned in a way that is time- and cost-effective. Also, in the last years we have seen how the amount of data produced by applications has grown bigger than ever. This data contains valuable information that has to be extracted using tools like MapReduce. MapReduce is an important framework to analyze large amounts of data since it was proposed by Google, and made open source by Apache with its Hadoop implementation. The goal of this work is to show that the execution time of a distributed application, namely, a MapReduce application, in a Cloud Computing environment, can be predicted using a mathematical model based on theoretical specifications. This prediction is made to help the users of the Cloud Computing environment plan their deployments, i.e., quantify the number of virtual machines and their characteristics in order to reduce cost and/or time. After measuring the application execution time while varying the parameters stated in the mathematical model, and then applying a linear regression technique, the goal is achieved by finding a model of the execution time, which was then applied to predict the execution time of MapReduce applications with satisfying results. The experiments were conducted in several configurations: namely, private and public clusters, as well as commercial cloud infrastructures, running different MapReduce applications, and varying the number of nodes composing the cluster, as well as the amount of workload given to the application. Experiments showed a clear relation with the theoretical model, revealing that the model is in fact able to predict the execution time of MapReduce applications. The developed model is generic, meaning that it uses theoretical abstractions for the computing capacity of the environment and the computing cost of the MapReduce application. Further work in extending this approach to fit other types of distributed applications is encouraged, as well as including this mathematical model in Cloud services offering MapReduce platforms, in order to help users plan their deployments.
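As a small sketch of the regression idea described in this abstract, the example below fits a simple execution-time model to a handful of (cluster size, input size, runtime) measurements and uses it for prediction. The measurements and the particular model form (a per-node work term, a serial/shuffle term and a constant overhead) are assumptions for illustration, not the thesis's data or model.

```python
import numpy as np

# (number of nodes, input size in GB, measured execution time in s) - assumed data
runs = np.array([
    [2,  10, 410.0],
    [4,  10, 225.0],
    [8,  10, 130.0],
    [4,  20, 430.0],
    [8,  20, 240.0],
    [16, 20, 140.0],
])
nodes, size, t = runs[:, 0], runs[:, 1], runs[:, 2]

# Model: T = a * (size / nodes) + b * size + c
# (parallel work per node, plus a serial/shuffle term, plus fixed overhead).
A = np.column_stack([size / nodes, size, np.ones_like(size)])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)

def predict(n_nodes, size_gb):
    """Predicted execution time in seconds for a planned deployment."""
    return coef @ np.array([size_gb / n_nodes, size_gb, 1.0])

print("predicted time for 32 nodes, 50 GB: %.0f s" % predict(32, 50))
```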
27

Luo, Yang, and 羅陽. "Performance modeling and load balancing for Distributed Java Virtual Machine." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41509043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Fändriks, Ingrid. "Alternative Methods for Evaluation of Oxygen Transfer Performance in Clean Water." Thesis, Institutionen för informationsteknologi, Department of Information Technology, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-153208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Aeration of wastewater is performed in many wastewater treatment plants to supply oxygen to microorganisms. To evaluate the performance of a single aerator or an aeration system, a standard method for oxygen transfer measurements in clean water is used today. The method includes a model that describes the aeration process, and the model parameters can be estimated using nonlinear regression. The model is a simplified description of the oxygen transfer, which could possibly lead to performance results that are not accurate. That is why many have tried to describe the aeration in other ways and with other parameters. The focus of this Master Thesis has been to develop alternative models which better describe the aeration and could result in more accurate performance results. Data for the model evaluations have been measured in two different tanks with various numbers of aerators. Five alternative methods containing new models for oxygen transfer evaluation have been studied in this thesis. The model in method nr 1 assumes that the oxygen transfer is different depending on where in a tank the dissolved oxygen concentration is measured; it is assumed to be faster in a water volume containing air bubbles. The size of the water volumes and the mixing between them can be described as model parameters and also estimated. The model was evaluated with measured data from the two different aeration systems, where the water mixing was relatively large, which resulted in the model assuming that the whole water volume contained air bubbles. After evaluating the results, the model was considered to perhaps be useful for aeration systems where the mixing between the water volumes is relatively small in comparison to the total water volume. However, the method should be further studied to evaluate its usability. Method nr 2 contained a model with two separate model parameters, one for the oxygen transfer from the air bubbles and one for the oxygen transfer at the water surface. The model appeared to be sensitive to which initial guesses were used for the estimated parameters, which was assumed to reduce the model's usability. Model nr 3 considered that the dissolved oxygen equilibrium concentration in water is depth dependent and was assumed to increase with increasing water depth. This model also assumed that the oxygen was transferred both from the air bubbles and at the water surface. The model was considered to be useful, but further investigations about whether the saturation concentrations should be constant or vary with water depth should be performed. The other two methods contained models that were combinations of the previously mentioned model approaches but were considered not to be useful.
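For reference, the standard clean-water evaluation the abstract refers to fits the re-aeration curve C(t) = Cs - (Cs - C0)*exp(-kLa*t) to measured dissolved-oxygen concentrations by nonlinear regression. The sketch below does this with synthetic data; the parameter values are assumptions, not measurements from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def do_curve(t, kla, cs, c0):
    """Dissolved-oxygen re-aeration model C(t)."""
    return cs - (cs - c0) * np.exp(-kla * t)

# Synthetic "measurements": true kLa = 0.15 1/min, Cs = 9.1 mg/L, C0 = 0.5 mg/L.
t = np.linspace(0, 40, 21)                                  # minutes
c = do_curve(t, 0.15, 9.1, 0.5) + np.random.normal(0, 0.05, t.size)

# Nonlinear least squares with rough initial guesses for (kLa, Cs, C0).
params, cov = curve_fit(do_curve, t, c, p0=[0.1, 8.0, 1.0])
kla, cs, c0 = params
print(f"estimated kLa = {kla:.3f} 1/min, saturation = {cs:.2f} mg/L")
```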
29

Wang, Lingling 1971. "Data broadcast in a network of pervasive devices : modeling, scheduling and performance evaluation." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the rapid growth of wireless networks and the demand for services, a seamless service network of pervasive devices integrated with next-generation mobile communication systems has been envisioned. In such a network, all mobile elements are highly dynamic ad hoc devices. Many applications rely on wireless data broadcast to enhance their services because of its natural advantages. This thesis focuses on developing data broadcast service models in a multi-hop ad hoc network environment. In particular, it presents and analyzes three proxy behavior models for the data dissemination in the network. QoS performance measures are derived mathematically according to the behavior models, and applied to the optimization of broadcast schedules at each proxy. Numerical results have shown that the proposed "Intelligent" service proxy behavior model outperforms all other proxy models. A sensitivity analysis is conducted to determine the sensitivity of the network performance with respect to the model assumptions. In addition, an intensive performance analysis and evaluation is conducted, together with its interpretation in terms of information theory.
30

Chen, Yuan. "A performance evaluation methodology for pi-Calculus family of multi-process modeling tools /." Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1203587971&sid=19&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Tesfamariam, Semere Daniel. "Configuration Design of a High Performance and Responsive Manufacturing System : Modeling and Evaluation." Doctoral thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Lynn, Thomas J. "Evaluation and Modeling of Internal Water Storage Zone Performance in Denitrifying Bioretention Systems." Thesis, University of South Florida, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3630914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Nitrate (NO3-) loadings from stormwater runoff promote eutrophication in surface waters. Low Impact Development (LID) is a type of best management practice aimed at restoring the hydrologic function of watersheds and removing contaminants before they are discharged into ground and surface waters. Also known as rain gardens, bioretention systems are a LID technology capable of increasing infiltration, reducing runoff rates and removing pollutants. They can be planted with visually appealing vegetation, which plays a role in nutrient uptake. A modified bioretention system incorporates a submerged internal water storage zone (IWSZ) that includes an electron donor to support denitrification. Modified (or denitrifying) bioretention systems have been shown to be capable of converting NO3- in stormwater runoff to nitrogen gas through denitrification; however, design guidelines are lacking for these systems, particularly under Florida-specific hydrologic conditions.

The experimental portion of this research investigated the performance of denitrifying bioretention systems with varying IWSZ medium types, IWSZ depths, hydraulic loading rates and antecedent dry conditions (ADCs). Microcosm studies were performed to compare denitrification rates using wood chips, gravel, sand, and mixtures of wood chips with sand or gravel media. The microcosm study revealed that carbon-containing media, acclimated media and lower initial dissolved oxygen concentrations will enhance NO3- removal rates. The gravel-wood medium was observed to have high NO3- removal rates and low final dissolved organic carbon concentrations compared to the other media types. The gravel-wood medium was selected for subsequent storm event and tracer studies, which incorporated three completely submerged columns with varying depths. Even though the columns were operated under equivalent detention times, greater NO3- removal efficiencies were observed in the taller compared to the shorter columns. Tracer studies revealed this phenomenon was attributable to the improved hydraulic performance in the taller compared to shorter columns. In addition, greater NO3- removal efficiencies were observed with an increase in ADCs, where ADCs were positively correlated with dissolved organic carbon concentrations.

Data from the experimental portion of this study, additional hydraulic modeling development for the unsaturated layer and unsaturated layer data from other studies were combined to create a nitrogen loading model for modified bioretention systems. The processes incorporated into the IWSZ model include denitrification, dispersion, organic media hydrolysis, oxygen inhibition, bio-available organic carbon limitation and Total Kjeldahl Nitrogen (TKN) leaching. For the hydraulic component, a unifying equation was developed to approximate unsaturated and saturated flow rates. The hydraulic modeling results indicate that during ADCs, greater storage capacities are available in taller compared to shorter IWSZs. Data from another study was used to develop a pseudo-nitrification model for the unsaturated layer. A hypothetical case study was then conducted with SWMM-5 software to evaluate nitrogen loadings from various modified bioretention system designs that have equal IWSZ volumes. The results indicate that bioretention systems with taller IWSZs remove greater NO3- loadings, which was likely due to the greater hydraulic performance in the taller compared to shorter IWSZ designs. However, the systems with the shorter IWSZs removed greater TKN and total nitrogen loadings due to the larger unsaturated layer volumes in the shorter IWSZ designs.

33

Lynn, Thomas Joseph. "Evaluation and Modeling of Internal Water Storage Zone Performance in Denitrifying Bioretention Systems." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nitrate (NO3-) loadings from stormwater runoff promote eutrophication in surface waters. Low Impact Development (LID) is a type of best management practice aimed at restoring the hydrologic function of watersheds and removing contaminants before they are discharged into ground and surface waters. Also known as rain gardens, bioretention systems are a LID technology capable of increasing infiltration, reducing runoff rates and removing pollutants. They can be planted with visually appealing vegetation, which plays a role in nutrient uptake. A modified bioretention system incorporates a submerged internal water storage zone (IWSZ) that includes an electron donor to support denitrification. Modified (or denitrifying) bioretention systems have been shown to be capable of converting NO3- in stormwater runoff to nitrogen gas through denitrification; however, design guidelines are lacking for these systems, particularly under Florida-specific hydrologic conditions. The experimental portion of this research investigated the performance of denitrifying bioretention systems with varying IWSZ medium types, IWSZ depths, hydraulic loading rates and antecedent dry conditions (ADCs). Microcosm studies were performed to compare denitrification rates using wood chips, gravel, sand, and mixtures of wood chips with sand or gravel media. The microcosm study revealed that carbon-containing media, acclimated media and lower initial dissolved oxygen concentrations will enhance NO3- removal rates. The gravel-wood medium was observed to have high NO3- removal rates and low final dissolved organic carbon concentrations compared to the other media types. The gravel-wood medium was selected for subsequent storm event and tracer studies, which incorporated three completely submerged columns with varying depths. Even though the columns were operated under equivalent detention times, greater NO3- removal efficiencies were observed in the taller compared to the shorter columns. Tracer studies revealed this phenomenon was attributable to the improved hydraulic performance in the taller compared to shorter columns. In addition, greater NO3- removal efficiencies were observed with an increase in ADCs, where ADCs were positively correlated with dissolved organic carbon concentrations. Data from the experimental portion of this study, additional hydraulic modeling development for the unsaturated layer and unsaturated layer data from other studies were combined to create a nitrogen loading model for modified bioretention systems. The processes incorporated into the IWSZ model include denitrification, dispersion, organic media hydrolysis, oxygen inhibition, bio-available organic carbon limitation and Total Kjeldahl Nitrogen (TKN) leaching. For the hydraulic component, a unifying equation was developed to approximate unsaturated and saturated flow rates. The hydraulic modeling results indicate that during ADCs, greater storage capacities are available in taller compared to shorter IWSZs. Data from another study was used to develop a pseudo-nitrification model for the unsaturated layer. A hypothetical case study was then conducted with SWMM-5 software to evaluate nitrogen loadings from various modified bioretention system designs that have equal IWSZ volumes. The results indicate that bioretention systems with taller IWSZs remove greater NO3- loadings, which was likely due to the greater hydraulic performance in the taller compared to shorter IWSZ designs. 
However, the systems with the shorter IWSZs removed greater TKN and total nitrogen loadings due to the larger unsaturated layer volumes in the shorter IWSZ designs.
34

Shah, Krushal S. "Circuit Modeling and Performance Evaluation of GaN Power HEMT in DC-DC Converters." University of Toledo / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1321475503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Winfrey, Mary Lynn. "Effects of self-modeling on self-efficacy and balance beam performance." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/845949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of the investigation was to determine the effect of self-modeling on self-efficacy and performance of balance beam routines. Subjects (n=11) were intermediate-level gymnasts rated at the 5, 6, and 7 skill levels with ages ranging from 8 to 13 years. Subjects were randomly assigned to one of two groups, a self-modeling or a control group. For the self-modeling group, self-modeling videotapes were made of each subject performing her balance beam routine. During a six-week period, the self-modeling group subjects viewed the videotape of themselves prior to practice three times a week for six consecutive weeks. All subjects completed two different self-efficacy inventories and a balance beam skills test at four intervals: a pretest, a 2-week test, a 4-week test, and a posttest. During the six weeks, each group participated in their normal instructional program at the gymnastics academy. The results of this study indicated no significant differences in ratings of self-efficacy or balance beam performance, based on judges' ratings, between the self-modeling group and the control group. However, a significant correlation was found between predicted performance scores and actual performance scores for the self-modeling group (r=.92). This correlation was not significant for the control group (r=.02). Even though a significant effect of self-modeling on self-efficacy and performance scores was not found, this significant correlation indicates that self-modeling may enhance a subject's ability to realistically assess her/his performance. Thus, self-modeling may benefit the learner by developing an accurate conception of one's performance which would enhance the ability to understand and utilize instructional feedback to improve performance.
School of Physical Education
36

Houston, Edward Brian. "The Use of Stormwater Modeling for Design and Performance Evaluation of Best Management Practices at the Watershed Scale." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/34850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

The use of best management practices or BMPs to treat urban stormwater runoff has been pervasive for many years. Extensive research has been conducted to evaluate the performance of individual BMPs at specific locations; however, little research has been published that seeks to evaluate the impacts of small, distributed BMPs throughout a watershed at the regional level. To address this, a model is developed using EPA SWMM 5.0 for the Duck Pond watershed, which is located in Blacksburg, Virginia, and encompasses much of the Virginia Polytechnic Institute and State University campus as well as much of the town of Blacksburg. A variety of BMPs are designed and placed within the model. Several variations of the model are created in order to test different aspects of BMP design and to test the BMP modeling abilities of EPA SWMM 5.0. Simulations are performed using one-hour design storms and yearlong hourly rainfall traces. From these simulations, small water quality benefits are observed at the system level. This is seen as encouraging, given that a relatively small amount of the total drainage area is controlled by BMPs and that the BMPs are not sited in optimal locations. As expected, increasing the number of BMPs in the watershed generally increases the level of treatment. The use of the half-inch rule in determining the required water quality volume is examined and found to provide reasonable results.

The design storm approach to designing detention structures is also examined for a two-pond system located within the model. The pond performances are examined under continuous simulation and found to be generally adequate for the simulated rainfall conditions, although they do under-perform somewhat in comparison to the original design criteria.

The usefulness of EPA SWMM 5.0 as a BMP modeling tool is called into question. Many useful features are identified, but so are many limitations. Key abilities such as infiltration from nodes or treatment in conduit flow are found to be lacking. Pollutant mass continuity issues are also encountered, making specific removal rates difficult to define.


Master of Science
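A small worked example of the half-inch rule referred to in the abstract above is given below, assuming the rule is applied as 0.5 inches of runoff captured from the impervious area; conventions differ between jurisdictions, so the numbers are illustrative only.

```python
IN_TO_FT = 1.0 / 12.0
ACRE_TO_FT2 = 43560.0

def water_quality_volume_cf(impervious_acres, depth_in=0.5):
    """Required water quality volume in cubic feet (half-inch rule)."""
    return impervious_acres * ACRE_TO_FT2 * depth_in * IN_TO_FT

# e.g. a 3-acre drainage area that is 60 % impervious:
wqv = water_quality_volume_cf(3.0 * 0.60)
print(f"water quality volume ~ {wqv:.0f} ft^3")   # about 3,267 ft^3
```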
37

Voicu, Laura M. "Modeling the Throughput Performance of the SF-SACK Protocol." Scholar Commons, 2006. http://scholarcommons.usf.edu/etd/3904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Besides the two classical techniques used to evaluate the performance of a protocol, computer simulation and experimental measurements, mathematical modeling has been used to study the performance of the TCP protocol. This technique gives an elegant way to gain insights when studying the behavior of a protocol, while providing useful information about its performance. This thesis presents an analytical model for the SF-SACK protocol, a TCP SACK based protocol conceived to be appropriate for data and streaming applications. SF-Sack modifies the multiplicative part of the Additive Increase Multiplicative Decrease of TCP to provide good performance for data and streaming applications, while avoiding the TCP-friendliness problem of the Internet. The modeling of the SF-SACK protocol raises new challenges compared to the classical TCP modeling in two ways: first, the model needs to be adapted to a more complex dynamism of the congestion window, and second, the model needs to incorporate the scheduler that SF-SACK makes use of in order to maintain a periodically updated value of the congestion window. Presented here is a model that is progressively built in order to consider these challenges. The first step is to consider only losses detected by triple-duplicate acknowledgments, with the restriction that one such loss happens each scheduler interval. The second step is to consider losses detected via triple-duplicate acknowledgments, while eliminating the above restriction. Finally, the third step is to include losses detected via time-outs. The result is an analytical characterization of the steady-state send rate and throughput of a SF-SACK flow as a function of the loss probability, the round-trip time (RTT), the time-out interval, and the scheduler interval. The send rate and the throughput of SF-SACK were compared against available results for TCP Reno. The obtained graphs showed that SF-SACK presents a better performance than TCP. The analytical model of the SF-SACK follows the trends of the results that are presently available, using both the ns-2 simulator and experimental measurements.
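For context on the baseline mentioned above, the sketch below implements the well-known analytical send-rate approximation for TCP Reno (the Padhye et al. model), expressed in terms of the loss probability, RTT and time-out interval, i.e. the same kind of closed form the SF-SACK model provides. It is the Reno formula, not the SF-SACK model derived in the thesis, and the example numbers are arbitrary.

```python
import math

def reno_send_rate(p, rtt, t0, mss=1460.0, b=2):
    """Approximate TCP Reno send rate in bytes/s.

    p   : packet loss probability
    rtt : round-trip time in seconds
    t0  : retransmission time-out in seconds
    b   : packets acknowledged per ACK (delayed ACKs -> 2)
    """
    if p <= 0:
        return float("inf")
    td_term = rtt * math.sqrt(2.0 * b * p / 3.0)
    to_term = t0 * min(1.0, 3.0 * math.sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p ** 2)
    return mss / (td_term + to_term)

# e.g. 1% loss, 100 ms RTT, 1 s time-out:
print("Reno send rate ~ %.0f kB/s" % (reno_send_rate(0.01, 0.1, 1.0) / 1e3))
```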
38

Richter, Konstantina [Verfasser], and Reiner [Akademischer Betreuer] Dumke. "Modeling, evaluation and predicting of IT human resources performance / Konstantina Richter. Betreuer: Reiner Dumke." Magdeburg : Universitätsbibliothek, 2012. http://d-nb.info/105391413X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Roberts, Brian J. "Site-specific RSS signature modeling for WiFi localization." Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-050109-120008/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: empirical database; WiFi localization; RSS; channel modeling; performance evaluation. Includes bibliographical references (leaves 110-111).
40

Kota, Pavan Sriharsha. "Mathematical modeling and performance evaluation of soliton-based and non-soliton all-optical WDM systems." Thesis, Montana State University, 2009. http://etd.lib.montana.edu/etd/2009/kota_pavan/Kota_PavanS0509.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents a performance evaluation of soliton and non-soliton based all-optical Wavelength Division Multiplexed (WDM) networks assuming the existing infrastructure (e.g., fiber and other physical layer components). The performance evaluation is carried out by a conveniently defined Quality (Q) factor, which is a measure of the signal to noise ratio, and indirectly, the bit error rate (BER) of the system. A solution to the Nonlinear Schrödinger Equation (NLSE), describing the propagation of light inside a fiber with linear and nonlinear impairments is found mathematically from which the Q factor is calculated for both solitons and non-solitons. We also compare the accuracy of the Regular Perturbation (RP) based method for analytically calculating the Q factor, with that of the standard Split Step Fourier (SSF) method. Results show that the RP based method gives more accurate results than the widely used SSF method for reasonably low power levels. Results also show that the soliton based systems perform much better than the non-soliton systems for typical system parameters in the existing infrastructure for bit rates of up to 10Gbps per channel. In this thesis, we present a sample WDM based optical network with mesh topology and show that the end-to-end Q factor of a soliton system is higher than that of non-soliton systems.
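The split-step Fourier method mentioned above can be sketched in a few lines: dispersion is applied in the frequency domain and the Kerr nonlinearity in the time domain, alternating every step. The parameters below (roughly standard single-mode fibre, a 20 ps sech pulse at the fundamental-soliton power) are illustrative assumptions rather than values from the thesis; the last line only recalls how a Q factor maps to a bit error rate.

```python
import numpy as np
from math import erfc, sqrt

beta2 = -21e-27                       # group-velocity dispersion [s^2/m], assumed
gamma = 1.3e-3                        # Kerr nonlinearity [1/(W*m)], assumed
T0 = 20e-12                           # pulse width [s]
P0 = abs(beta2) / (gamma * T0 ** 2)   # fundamental-soliton peak power [W]
L, nz = 50e3, 2000                    # fibre length [m] and number of z-steps
h = L / nz

N = 4096
T = (np.arange(N) - N // 2) * (16 * T0 / N)            # time grid [s]
omega = 2 * np.pi * np.fft.fftfreq(N, d=T[1] - T[0])   # angular frequencies [rad/s]

def split_step(A):
    """Propagate A(T): dispersion in the frequency domain, Kerr in the time domain."""
    disp = np.exp(0.5j * beta2 * omega ** 2 * h)
    for _ in range(nz):
        A = np.fft.ifft(disp * np.fft.fft(A))            # linear (dispersion) step
        A = A * np.exp(1j * gamma * np.abs(A) ** 2 * h)  # nonlinear (Kerr) step
    return A

launch = np.sqrt(P0) / np.cosh(T / T0)   # sech pulse at the fundamental-soliton power
out = split_step(launch.astype(complex))
print("peak power launched %.1f mW, received %.1f mW"
      % (P0 * 1e3, np.abs(out).max() ** 2 * 1e3))

# The Q factor discussed above maps to a bit error rate via BER = 0.5*erfc(Q/sqrt(2)):
print("BER at Q = 6: %.1e" % (0.5 * erfc(6 / sqrt(2))))
```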
41

Bamgbopa, Musbaudeen Oladiran. "Modeling And Performance Evaluation Of An Organic Rankine Cycle (orc) With R245fa As Working Fluid." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614367/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents numerical modelling and analysis of a solar Organic Rankine Cycle (ORC) for electricity generation. A regression-based approach is used for the working fluid property calculations. Models of the unit's sub-components (pump, evaporator, expander and condenser) are also established. Steady and transient models are developed and analyzed because the unit is considered to work with stable (i.e. solar + boiler) or variable (i.e. solar only) heat input. The unit's heat exchangers (evaporator and condenser) have been identified as critical for the applicable method of analysis (steady or transient). The considered heat resource into the ORC is in the form of solar-heated water, which varies between 80-95 °C at a range of mass flow rates between 2-12 kg/s. Simulation results of steady-state operation using the developed model show a maximum power output of around 40 kW. In the defined operation range, the refrigerant mass flow rate, hot water mass flow rate and hot water temperature in the system are identified as critical parameters to optimize the power production and the cycle efficiency. The potential benefit of controlling these critical parameters is demonstrated for reliable ORC operation and optimum power production. It is also seen that simulation of the unit's dynamics using the transient model is imperative when variable heat input is involved, due to the fact that maximum energy recovery is the aim with any given level of heat input.
42

Kota, Pavan Sriharsha. "Mathematical modeling and performance evaluation of soliton-based and non-soliton all-optical WDM systems." Thesis, Montana State University, 2008. http://etd.lib.montana.edu/etd/gsetd/2008/kota_pavan/Kota_PavanS1208.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents a performance evaluation of soliton and non-soliton based all-optical Wavelength Division Multiplexed (WDM) networks assuming the existing infrastructure (e.g., fiber and other physical layer components). The performance evaluation is carried out by a conveniently defined Quality (Q) factor, which is a measure of the signal to noise ratio, and indirectly, the bit error rate (BER) of the system. A solution to the Nonlinear Schrödinger Equation (NLSE), describing the propagation of light inside a fiber with linear and nonlinear impairments is found mathematically from which the Q factor is calculated for both solitons and non-solitons. We also compare the accuracy of the Regular Perturbation (RP) based method for analytically calculating the Q factor, with that of the standard Split Step Fourier (SSF) method. Results show that the RP based method gives more accurate results than the widely used SSF method for reasonably low power levels. Results also show that the soliton based systems perform much better than the non-soliton systems for typical system parameters in the existing infrastructure for bit rates of up to 10Gbps per channel. In this thesis, we present a sample WDM based optical network with mesh topology and show that the end-to-end Q factor of a soliton system is higher than that of non-soliton systems.
43

Sonntag, Sören [Verfasser]. "Performance Evaluation of Parallel Packet-Processing Architectures Using SystemC-based Modeling and Refinement / Sören Sonntag." Aachen : Shaker, 2007. http://d-nb.info/1170527558/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Smith, Kevin Boyd. "Modeling, Performance Evaluation, Calibration, and Path Planning of Point Laser Triangulation Probes in Coordinate Metrology /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487935125880273.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zhao, Zhili. "Modeling and estimation techniques for understanding heterogeneous traffic behavior." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The majority of current internet traffic is based on TCP. With the emergence of new applications, especially new multimedia applications, however, UDP-based traffic is expected to increase. Furthermore, multimedia applications have sparked the development of protocols responding to congestion while behaving differently from TCP. As a result, network traffic is expected to become more and more diverse. The increasing link capacity further stimulates new applications utilizing higher bandwidths in the future. Besides the traffic diversity, the network is also evolving around new technologies. These trends in the Internet motivate our research work. In this dissertation, modeling and estimation techniques of heterogeneous traffic at a router are presented. The idea of the presented techniques is that if the observed queue length and packet drop probability do not match the predictions from a model of responsive (TCP) traffic, then the error must come from non-responsive traffic; it can then be used for estimating the proportion of non-responsive traffic. The proposed scheme is based on the queue length history, packet drop history, expected TCP and queue dynamics. The effectiveness of the proposed techniques over a wide range of traffic scenarios is corroborated using NS-2 based simulations. Possible applications based on the estimation technique are discussed. The implementation of the estimation technique in the Linux kernel is presented in order to validate our estimation technique in a realistic network environment.
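A much-simplified illustration of the estimation idea described in this abstract is sketched below: predict what rate the responsive (TCP) flows should be sending given the observed drop probability, and attribute the remainder of the observed arrival rate to non-responsive traffic. This is only a sketch of the principle, not the dissertation's actual estimator (which also uses queue-length history and queue dynamics); the flow count and traffic figures are assumptions.

```python
import math

def tcp_rate_pkts(p, rtt):
    """Simplified TCP response function: packets/s for one flow."""
    return math.sqrt(1.5 / p) / rtt

def unresponsive_share(arrival_rate, drop_prob, n_tcp_flows, rtt):
    """Fraction of the observed arrival rate not explained by responsive flows."""
    predicted_tcp = n_tcp_flows * tcp_rate_pkts(drop_prob, rtt)
    return max(0.0, arrival_rate - predicted_tcp) / arrival_rate

# e.g. 12,000 pkt/s arriving, 1% drops, 80 TCP flows, 100 ms RTT:
print(f"estimated non-responsive share: {unresponsive_share(12000.0, 0.01, 80, 0.1):.0%}")
```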
46

Russell, Brian Eugene. "The Empirical Testing of Musical Performance Assessment Paradigm." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of this study was to test a hypothesized model of aurally perceived performer-controlled musical factors that influence assessments of performance quality. Previous research studies on musical performance constructs, musical achievement, musical expression, and scale construction were examined to identify the factors that influence assessments of performance quality. A total of eight factors were identified: tone, intonation, rhythmic accuracy, articulation, tempo, dynamics, timbre, and interpretation. These factors were categorized as either technique or musical expression factors. Items representing these eight variables were chosen from previous research on scale development. Additional items, along with researcher-created items, were also chosen to represent the variables of technique, musical expression and overall perceptions of performance quality. The 44 selected items were placed on the Aural Musical Performance Quality (AMPQ) measure and paired with a four-point Likert scale. The reliability for the AMPQ measure was reported at .977. A total of 58 volunteer adjudicators were recruited to evaluate four recordings, one from each instrumental category of interest: brass, woodwind, voice, and string. The resulting performance evaluations (N = 232) were analyzed using statistical regression and path analysis techniques. The results of the analysis provide empirical support for the existence of the model of aurally perceived performer-controlled musical factors. Technique demonstrated significant direct effects on overall perceptions of performance quality and musical expression. Musical expression also demonstrated a significant direct effect on overall perceptions of performance quality. The results of this study are consistent with the hypothesized model of performer-controlled musical factors.
47

Van, Zyl Pierrie Jakobus. "Anaerobic digestion of Fischer-Tropsch reaction water : submerged membrane anaerobic reactor design, performance evaluation & modeling." Doctoral thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/4994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Atasoy, Halil Ibrahim. "Design And Fabrication Of Rf Mems Switches And Instrumentation For Performance Evaluation." Thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608831/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents the RF and mechanical design of a metal-to-metal contact RF MEMS switch. Metal-to-metal contact RF MEMS switches are especially preferred in low frequency bands where capacitive switches suffer from poor isolation due to the limited reactance. The frequency band of operation of the designed switch is from DC to beyond X-band. The measured insertion loss of the structure is less than 0.2 dB, the return loss is better than 30 dB, and the isolation is better than 20 dB up to 20 GHz. The isolation is greater than 25 dB below 10 GHz. Hence, for wideband applications, this switch offers very low loss and high isolation. Time domain measurement is necessary for the investigation of the dynamic behavior of the devices, determination of the 'pull-in' and 'pull-out' voltages of the membranes, and the switching time and power handling of the devices. Also, failure and degradation of the switches can be monitored using the time domain setup. For these purposes a time domain setup is constructed. Moreover, failure mechanisms of the RF MEMS devices are investigated and a power electronic circuitry is constructed for the biasing of RF MEMS switches. The advantage of the biasing circuitry over direct DC biasing is its capability of producing multi-shape, high-voltage output waveforms. The lifetimes of the RF MEMS devices are investigated under different bias configurations. Finally, for the measurement of complicated RF MEMS structures composed of a large number of switches, a bias waveform distribution network is constructed, since conventional systems are not adequate because of the high voltage levels. In this way, the necessary instrumentation is completed for controlling a large-scale RF MEMS system.
49

Hellkvist, Martin. "Performance Evaluation Of Self-Backhaul For Small-Cell 5G Solutions." Thesis, Uppsala universitet, Signaler och System, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-355228.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis evaluates the possibility of using millimeter waves at a frequency of 28 GHz for wireless backhaul in small-cell solutions in the coming fifth-generation mobile networks. This frequency band has not been used in preceding mobile networks but is the subject of a great deal of research. In this thesis, simulations are performed to evaluate how the high-frequency waves behave inside a three-dimensional grid of buildings. The simulations use highly directive antenna arrays with antenna gains of 26 dBi. A main result of the investigation was that a high bandwidth of 800 MHz was not enough to provide 12 Gbps in non-line-of-sight propagation within the simulations. Furthermore, without interference-limiting techniques, the interference is likely to dominate the noise, even though the high diffraction losses of millimeter waves suggest that interference should be very limited in urban areas.
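A back-of-the-envelope link-budget check of the kind such simulations quantify is sketched below: free-space path loss at 28 GHz combined with the stated 26 dBi arrays and 800 MHz bandwidth, with the Shannon capacity as an upper bound. The transmit power, noise figure and the extra non-line-of-sight loss are assumptions, not values from the thesis.

```python
import math

def shannon_capacity_gbps(distance_m, extra_loss_db=0.0,
                          f_hz=28e9, bw_hz=800e6,
                          tx_dbm=30.0, gain_tx_dbi=26.0, gain_rx_dbi=26.0,
                          noise_figure_db=7.0):
    """Shannon-capacity upper bound for a single millimeter-wave link."""
    fspl_db = 20 * math.log10(4 * math.pi * distance_m * f_hz / 3e8)
    rx_dbm = tx_dbm + gain_tx_dbi + gain_rx_dbi - fspl_db - extra_loss_db
    noise_dbm = -174 + 10 * math.log10(bw_hz) + noise_figure_db
    snr = 10 ** ((rx_dbm - noise_dbm) / 10)
    return bw_hz * math.log2(1 + snr) / 1e9

print("LOS, 200 m:  %.1f Gbps" % shannon_capacity_gbps(200))
print("NLOS, 200 m: %.1f Gbps" % shannon_capacity_gbps(200, extra_loss_db=35))
```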
50

Mustafa, Hassan M., and Ayoub Al-Hamadi. "On Teaching Quality Improvement of a Mathematical Topic Using Artificial Neural Networks Modeling (With a Case Study)." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-80718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper is inspired by simulation with Artificial Neural Networks (ANNs), recently applied to the evaluation of the phonics methodology used to teach children how to read. A novel approach for teaching a mathematical topic using a computer-aided learning (CAL) package is applied in an educational setting (a children's classroom). Interesting practical results were obtained after field application of the suggested CAL package, with and without the associated teacher's voice. The presented study highly recommends the application of a novel teaching trend based on behaviorism and individuals' learning styles, in order to improve the quality of children's mathematical learning performance.
