
Dissertations / Theses on the topic 'Models evaluation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Models evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Sawade, Christoph. "Active evaluation of predictive models." PhD thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2013/6558/.

Full text
Abstract:
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
Machine learning is concerned with algorithms that infer predictive models from complex data. Predictive models are functions that assign an application-specific target attribute – such as "spam" or "not spam" – to an input, for example the text of an e-mail. They are used for filtering spam messages, for text and face recognition, and for personalized product recommendation. Before a model is deployed in practice, its predictive quality with respect to the future application must be estimated. This evaluation requires instances of the input space for which the corresponding target attribute is known. Instances such as e-mails, images, or logged customer behaviour are often available in large quantities. Determining the corresponding target attributes, however, is a manual process that can be costly and time-consuming and may require specialist knowledge. The goal of this thesis is to estimate the predictive quality of a given model accurately with a minimal number of test instances. We study active evaluation processes that use a probability distribution to select the instances for which the target attribute is determined. Predictive quality can be measured by various criteria such as the error rate, the mean squared loss, or the F-measure. We derive the probability distributions that minimize the estimation error with respect to a given measure. The remaining estimation error can be quantified by confidence intervals that follow from the distribution of the estimator. In many applications, individual properties of the instances determine the cost of labeling them. Documents, for instance, differ in length and technical difficulty; these properties affect the time needed to assign possible target attributes such as topic or relevance. Taking these instance-specific differences into account, we derive the optimal distribution. The developed evaluation methods are studied on several data sets. In this context we analyze conditions under which active evaluation yields more accurate estimates than the standard approach, in which instances are drawn at random from the test distribution. A related problem is the comparison of two models. To determine which model has higher predictive quality in practice, a set of test instances is selected and the corresponding target attribute is determined. A subsequent statistical test allows statements about the significance of the observed differences. The power of the test depends on the distribution according to which the instances were selected. We determine the distribution that maximizes test power and thereby minimizes the probability of choosing the inferior model. Furthermore, we show how the developed approach can be used to compare several models. We show empirically that, compared with random selection of test instances, the active evaluation method achieves higher test power in many applications. In the last part of the thesis, the concepts of active evaluation and active model comparison are applied to ranking problems. We derive the optimal distributions for estimating the quality measures Discounted Cumulative Gain and Expected Reciprocal Rank. An empirical study on the evaluation of search engines shows that the newly developed procedures yield significantly more accurate estimates of ranking quality than the reference procedures examined.
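The core estimator behind this active evaluation idea can be illustrated with a short importance-sampling sketch: instances are drawn from an instrumental distribution and reweighted so that the error-rate estimate stays consistent. This is a minimal Python illustration of the general principle on synthetic data, with a hypothetical instrumental distribution; it is not Sawade's actual derivation or optimal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool of unlabeled test instances with model predictions and (hidden) true labels.
n = 10_000
scores = rng.uniform(0, 1, n)              # model's predicted P(y=1|x)
true_labels = (rng.uniform(0, 1, n) < scores).astype(int)
predictions = (scores >= 0.5).astype(int)

# Instrumental sampling distribution q: here, a hypothetical choice that favours
# uncertain instances (scores near 0.5), where errors are most informative.
q = 0.05 + np.minimum(scores, 1 - scores)
q = q / q.sum()

# Actively sample a small labeling budget from q and query those labels.
budget = 200
idx = rng.choice(n, size=budget, replace=True, p=q)
losses = (predictions[idx] != true_labels[idx]).astype(float)

# Importance weights w_i = p(x_i) / q(x_i); with a uniform test pool p(x_i) = 1/n.
weights = (1.0 / n) / q[idx]

# Self-normalized importance-sampling estimate of the error rate.
active_estimate = np.sum(weights * losses) / np.sum(weights)

# Naive estimate from the same budget drawn uniformly, for comparison.
idx_uniform = rng.choice(n, size=budget, replace=True)
naive_estimate = np.mean(predictions[idx_uniform] != true_labels[idx_uniform])

print(f"true error rate:  {np.mean(predictions != true_labels):.4f}")
print(f"active estimate:  {active_estimate:.4f}")
print(f"uniform estimate: {naive_estimate:.4f}")
```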
APA, Harvard, Vancouver, ISO, and other styles
2

Egilsson, Guðlaugur Stefán. "Event Models : An Evaluation Framework." Thesis, University of Skövde, Department of Computer Science, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-389.

Full text
Abstract:

Event-based programming is an important metaphor used in a variety of applications. Research and practice in this field come primarily from two distinct sources, component software and databases. In both those fields, the need for asynchronous notifications has led to the specification and implementation of event models, but with somewhat different emphasis.

This dissertation defines an evaluation framework for evaluating event models. In doing so, it defines several factors that are important when reviewing different event models with respect to implementing applications or components that require event notification mechanisms.

It has been suggested that the event models defined for COM and CORBA can each be used as the basis for implementing advanced event services. The framework presented in this dissertation is used to evaluate these two event models with respect to their capability to support an advanced event service originating from active database research.

APA, Harvard, Vancouver, ISO, and other styles
3

Martinez, Baquero Guillermo Felipe. "Diagnostic Evaluation of Watershed Models." Thesis, The University of Arizona, 2007. http://hdl.handle.net/10150/193357.

Full text
Abstract:
With increasing model complexity there is a pressing need for new methods that can be used to mine information from large volumes of model results and available data. This work explores strategies to identify and evaluate the causes of discrepancy between models and data related to hydrologic processes, and to increase our knowledge about watershed input-output relationships. In this context, we evaluate the performance of the abcd monthly water balance model for 764 watersheds in the conterminous United States. The work required integration of the Hydro-Climatic Data Network dataset with various kinds of spatial information, and a diagnostic approach to relating model performance with assumptions and characteristics of the basins. The diagnostic process was implemented via classification of watersheds, evaluation of hydrologic signatures and the identification of dominant processes. Knowledge acquired during this process was used to test modifications of the model for hydrologic regions where the performance was "poor".
APA, Harvard, Vancouver, ISO, and other styles
4

Sgherri, Silvia. "Policy evaluation with macroeconometric models." Thesis, University of Warwick, 2000. http://wrap.warwick.ac.uk/4154/.

Full text
Abstract:
This thesis presents a number of examples where macroeconometric models are employed as useful tools for the evaluation of contemporary policy problems. A range of approaches is proposed to shed light on how macromodels can actually contribute to the policy debate. In particular, the thesis emphasises how different models may be augmented or modified and stresses the need for care in the experimental design of policy simulations. Small stylised models of the UK economy are estimated in the first part of this thesis. They are used to assess the performance of simple monetary policy rules under the current inflation targeting monetary regime. In a monetary policy regime of inflation targeting, the appropriate target band-width can be assessed by calculating the variance of inflation in a macroeconomic model under alternative policy rules. A recent Bank of England study concludes from stochastic simulation of a small semi-structural model that a 'fairly substantial lump of inflation uncertainty' exists in the United Kingdom. In chapter 2 an extended and improved version of that model is developed, while their estimates of inflation variability are revised downwards by deploying analytic techniques. In chapter 3 a new small 'semi-structural' dynamic model of the UK economy is estimated, with particular attention to the modelling of wages and prices. It is used to assess the performance of simple monetary policy rules, including 'inflation forecast targeting' and 'Taylor' rules, while taking into account different degrees of forward-lookingness in both the inflation targeting horizon and wage bargaining. Computation of asymptotic inflation-output standard-error trade-offs is provided under various specifications and parametrisations of the model. Large-scale country models have the advantage of making explicit a complete range of relationships among macroeconomic variables, most of which, for obvious reasons, are neglected in smaller dynamic models. As a consequence, such a quantitative framework offers a unique opportunity to evaluate not only the aggregate impact of exogenous shocks on the variables of interest, but also to identify the underlying economic mechanisms enabling the transmission of such shocks. In the second part of the thesis, I undertake simulations of the National Institute's Domestic Econometric Model (NIDEM) to analyse the characteristics of the UK monetary transmission mechanism. Chapter 4 emphasises that the impact of interest rate movements on real variables is strictly determined by both the monetary regime at work and the underlying assumptions regarding consumption behaviour. Certainly, the steady integration of the members of the EMU and increasing awareness of the need for closer co-operation in monetary and fiscal policy have stimulated greater interest in modelling interdependencies between European countries and the impact of, and feedbacks from, the rest of the world economy. Many of the key issues now have an international aspect, so it becomes more and more difficult to rely on single-country models to provide the necessary analysis. International transmission mechanisms can therefore be better tackled with a multi-country model. The third and last part of this thesis focuses on cross-country asymmetric transmissions in response to a common monetary shock within EMU. In particular, in chapter 5 an empirical analysis of the links between monetary and fiscal policy within EMU is presented.
This is done through simulation of a neo-classical, highly non-Ricardian multi-country model: the IMF's MULTIMOD Mark III (MM3). Chapter 6 provides further evidence about the effects of embracing a Monetary Union when underlying macroeconomic structures still differ across countries. Using the same model-based quantitative framework, this chapter examines the role of nominal and real rigidities in European labour markets for the assessment of asymmetries in monetary transmission under various monetary regimes.
APA, Harvard, Vancouver, ISO, and other styles
5

Jessop, Alan Thomas. "Multiattribute models for engineering evaluation." Thesis, Heriot-Watt University, 1999. http://hdl.handle.net/10399/1226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lystig, Theodore C. "Evaluation of hidden Markov models." Thesis, University of Washington, 2001. http://hdl.handle.net/1773/9597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Malmsten, Hans. "Properties and evaluation of volatility models." Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics (Ekonomiska forskningsinstitutet vid Handelshögsk.) (EFI), 2004. http://www.hhs.se/efi/summary/641.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Venter, Daniel Jacobus Lodewyk. "An evaluation of paired comparison models." Thesis, University of Port Elizabeth, 2004. http://hdl.handle.net/10948/364.

Full text
Abstract:
Introduction: A typical task in quantitative data analysis is to derive estimates of population parameters based on sample statistics. For manifest variables this is usually a straightforward process utilising suitable measurement instruments and standard statistics such as the mean, median and standard deviation. Latent variables, on the other hand, are typically more elusive, making it difficult to obtain valid and reliable measurements. One of the most widely used methods of estimating the parameter value of a latent variable is to use a summated score derived from a set of individual scores for each of the various attributes of the latent variable. A serious limitation of this method and other similar methods is that the validity and reliability of measurements depend on whether the statements included in the questionnaire cover all characteristics of the variable being measured, and also on respondents' ability to correctly indicate their perceived assessment of the characteristics on the scale provided. Methods without this limitation, and that are especially useful where a set of objects/entities must be ranked based on the parameter values of one or more latent variables, are methods of paired comparisons. Although the underlying assumptions and algorithms of these methods often differ dramatically, they all rely on data derived from a series of comparisons, each consisting of a pair of specimens selected from the set of objects/entities being investigated. Typical examples of the comparison process are: subjects (judges) who have to indicate for each pair of objects which of the two they prefer; sport teams that compete against each other in matches that involve two teams at a time. The resultant data of each comparison range from a simple dichotomy to indicate which of the two objects is preferred/better, to an interval or ratio scale score for each of the two objects. The earliest paired comparison models (PCMs) were the Thurstone-Mosteller and Bradley-Terry models, which were based on statistical theory assuming that the variable(s) being measured is either normally (Thurstone-Mosteller) or exponentially (Bradley-Terry) distributed. For many years researchers had to rely on these PCMs when analysing paired comparison data, without any idea about the implications if the distribution of the data from which their sample was obtained differed from the assumed distribution for the applicable PCM being utilised. To address this problem, PCMs were subsequently developed to cater for discrete variables and variables with distributions that are neither normal nor exponential. A question that remained unanswered is how the performance of PCMs, as measured by the accuracy of parameter estimates, is affected if they are applied to data from a range of discrete and continuous distributions that violate the assumptions on which the applicable paired comparison algorithm is based. This study is an attempt to answer this question by applying the most popular PCMs to a range of randomly derived data sets that span typical continuous and discrete data distributions. It is hoped that the results of this study will assist researchers when selecting the most appropriate PCM to obtain accurate estimates of the parameters of the variables in their data sets.
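To make the Bradley-Terry model mentioned above concrete, the sketch below fits it by maximum likelihood to simulated comparison data. The worth parameters and data are hypothetical, and this generic fit is not the estimation procedure evaluated in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical "true" worth parameters for 5 objects on a log scale.
true_beta = np.array([0.0, 0.4, 0.8, 1.2, 1.6])
n_objects = len(true_beta)

# Simulate paired comparisons as (i, j, i_wins) triples.
pairs = []
for _ in range(2000):
    i, j = rng.choice(n_objects, size=2, replace=False)
    p_i_wins = 1.0 / (1.0 + np.exp(-(true_beta[i] - true_beta[j])))
    pairs.append((i, j, rng.uniform() < p_i_wins))
pairs = np.array(pairs, dtype=float)
i_idx, j_idx, i_wins = pairs[:, 0].astype(int), pairs[:, 1].astype(int), pairs[:, 2]

def neg_log_lik(beta_free):
    beta = np.concatenate(([0.0], beta_free))   # fix beta_0 = 0 for identifiability
    logits = beta[i_idx] - beta[j_idx]
    # log-likelihood of the observed outcomes under the Bradley-Terry model
    return -np.sum(i_wins * -np.log1p(np.exp(-logits)) +
                   (1 - i_wins) * -np.log1p(np.exp(logits)))

fit = minimize(neg_log_lik, x0=np.zeros(n_objects - 1), method="BFGS")
print("estimated worths:", np.round(np.concatenate(([0.0], fit.x)), 2))
print("true worths:     ", true_beta)
```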
APA, Harvard, Vancouver, ISO, and other styles
9

Derradji-Aouat, Ahmed. "Evaluation of Prevost's elasto-plastic models." Thesis, University of Ottawa (Canada), 1988. http://hdl.handle.net/10393/5545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sathi, Veer Reddy, and Jai Simha Ramanujapura. "A Quality Criteria Based Evaluation of Topic Models." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13274.

Full text
Abstract:
Context. Software testing is the process where a particular software product, or a system, is executed in order to find the bugs or issues which may otherwise degrade its performance. Software testing is usually done based on pre-defined test cases. A test case can be defined as a set of terms, or conditions, that are used by software testers to determine whether a particular system under test operates as it is supposed to or not. However, in numerous situations, test cases can be so many that executing each and every test case is practically impossible, as there may be many constraints. This causes the testers to prioritize the functions that are to be tested. This is where the ability of topic models can be exploited. Topic models are unsupervised machine learning algorithms that can explore large corpora of data and classify them by identifying the hidden thematic structure in those corpora. Using topic models for test case prioritization can save a lot of time and resources. Objectives. In our study, we provide an overview of the amount of research that has been done in relation to topic models. We want to uncover various quality criteria, evaluation methods, and metrics that can be used to evaluate topic models. Furthermore, we would also like to compare the performance of two topic models that are optimized for different quality criteria on a particular interpretability task, and thereby determine the topic model that produces the best results for that task. Methods. A systematic mapping study was performed to gain an overview of the previous research that has been done on the evaluation of topic models. The mapping study focused on identifying quality criteria, evaluation methods, and metrics that have been used to evaluate topic models. The results of the mapping study were then used to identify the most used quality criteria. The evaluation methods related to those criteria were then used to generate two optimized topic models. An experiment was conducted where the topics generated from those two topic models were provided to a group of 20 subjects. The task was designed to evaluate the interpretability of the generated topics. The performance of the two topic models was then compared by using Precision, Recall, and F-measure. Results. Based on the results obtained from the mapping study, Latent Dirichlet Allocation (LDA) was found to be the most widely used topic model. Two LDA topic models were created, optimizing one for the quality criterion Generalizability (TG) and one for Interpretability (TI), using the Perplexity and Point-wise Mutual Information (PMI) measures respectively. For the selected metrics, TI showed better performance in Precision and F-measure than TG. However, the performance of both TI and TG was comparable in the case of Recall. The total run time of TI was also found to be significantly higher than that of TG: the run time of TI was 46 hours and 35 minutes, whereas for TG it was 3 hours and 30 minutes. Conclusions. Looking at the F-measure, it can be concluded that the interpretability topic model (TI) performs better than the generalizability topic model (TG). However, while TI performed better in precision, recall was comparable. Furthermore, the computational cost to create TI is significantly higher than for TG.
Hence, we conclude that the selection of the topic model optimization should be based on the aim of the task the model is used for. If the task requires high interpretability of the model, and precision is important, such as for the prioritization of test cases based on content, then TI would be the right choice, provided time is not a limiting factor. However, if the task aims at generating topics that provide a basic understanding of the concepts (i.e., interpretability is not a high priority), then TG is the most suitable choice, making it more suitable for time-critical tasks.
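As a rough illustration of the generalizability criterion (perplexity on held-out documents) and of the topics that human subjects would judge for interpretability, the sketch below fits LDA with scikit-learn on a toy corpus. The documents, topic counts, and vocabulary are invented for illustration and do not reflect the study's experimental setup; the PMI-based coherence measure would need additional co-occurrence statistics not shown here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for test-case descriptions; real studies use large corpora.
train_docs = [
    "login page accepts invalid password",
    "payment gateway times out on checkout",
    "user profile picture upload fails",
    "checkout total is calculated incorrectly",
    "password reset email is never sent",
]
held_out_docs = [
    "upload of profile image crashes the app",
    "invalid login password is accepted",
]

vectorizer = CountVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(train_docs)
X_test = vectorizer.transform(held_out_docs)

# Generalizability-style tuning: pick the topic count with the lowest held-out perplexity.
for n_topics in (2, 3, 4):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X_train)
    print(n_topics, "topics -> held-out perplexity:", round(lda.perplexity(X_test), 1))

# Top words per topic, which human subjects would rate for interpretability.
terms = vectorizer.get_feature_names_out()
best = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_train)
for k, topic in enumerate(best.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```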
APA, Harvard, Vancouver, ISO, and other styles
11

Johnson, Clair Marie. "Power and Participation: Relationships among Evaluator Identities, Evaluation Models, and Stakeholder Involvement." Thesis, Boston College, 2015. http://hdl.handle.net/2345/bc-ir:104710.

Full text
Abstract:
Thesis advisor: Lauren Saenz
Stakeholder involvement is widely acknowledged to be an important aspect of program evaluation (Mertens, 2007; Greene, 2005a; Brandon, 1998). However, limited work has been done to empirically study evaluators’ practices of stakeholder involvement and ways in which stakeholder involvement is affected or guided by various factors. As evaluators interact with and place value on the input of stakeholders, social, cultural, and historical backgrounds will always be infused into the context (Mertens & Wilson, 2012; MacNeil, 2005). The field of evaluation has done little to critically examine how such contexts impact evaluators’ perceptions of stakeholders and their involvement. The present study attempts to fill these gaps, focusing specifically on the relationships among evaluator identities and characteristics, evaluation models, and stakeholder involvement. Using the frameworks of critical evaluation theory (Freeman & Vasconcelos, 2010) and a theory of capital (Bourdieu, 1986), the present study utilized a sequential explanatory mixed methods approach. A sample of 272 practicing program evaluators from the United States and Canada provided quantitative survey data, while a sample of nine evaluators provided focus group and interview data. Regression analyses and thematic content analyses were conducted. Findings from the quantitative strand included relationships between: (1) measures of individualism-collectivism and stakeholder involvement outcomes, (2) contextual evaluation variables and stakeholder involvement outcomes, (3) use of use, values or social justice branch evaluation models and stakeholder involvement outcomes, and (4) whether the evaluator identified as a person of color and the diversity of involved stakeholders. Findings from the qualitative strand demonstrated the role of dominant frameworks of evaluation serving to perpetuate systems of power. Participating evaluators revealed ways in which they feel and experience systems of power acting on them, including participation in, recognition of, and responses to oppression. The qualitative strand showed that evaluation models may be used to help recognize power dynamics, but that they are also used to reinforce existing power dynamics. Implications and recommended directions for future research are discussed
Thesis (PhD) — Boston College, 2015
Submitted to: Boston College. Lynch School of Education
Discipline: Educational Research, Measurement and Evaluation
APA, Harvard, Vancouver, ISO, and other styles
12

Altay, Mirkan. "Comparison and Evaluation of Various MESFET Models." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605987/index.pdf.

Full text
Abstract:
There exist various models for microwave MESFET equivalent-circuit representations. These models use different mathematical formulations to describe the same MESFET and give similar results. However, there are some differences in the results when compared to experimental measurements. In this thesis, various theoretical models are applied to the same MESFET and a comparison is made with measured data. It is shown that some models worked better on some parameters of the MESFET, while others were more effective on other parameters. Altogether eight models were examined and the data were optimized to fit these theoretical models. For the optimization, MATLAB FMINSEARCH and a GENETIC ALGORITHM CODE were used alternately to solve the initial value problem.
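The fitting step described, minimizing the mismatch between a device model and measured data with MATLAB's FMINSEARCH (a Nelder-Mead simplex search), can be outlined with SciPy's Nelder-Mead implementation. The placeholder I-V expression below is purely illustrative and is not one of the eight MESFET models examined in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Placeholder drain-current model I_d(V) = Ipk * tanh(a * V): a stand-in for the
# much richer MESFET equivalent-circuit equations compared in the thesis.
def model_current(params, v):
    ipk, a = params
    return ipk * np.tanh(a * v)

# Synthetic "measured" data with noise.
v_meas = np.linspace(0.0, 3.0, 30)
i_meas = model_current((0.08, 1.5), v_meas) + rng.normal(0, 0.002, v_meas.size)

# Sum-of-squared-errors objective, minimized with Nelder-Mead (FMINSEARCH analogue).
def sse(params):
    return np.sum((model_current(params, v_meas) - i_meas) ** 2)

fit = minimize(sse, x0=np.array([0.05, 1.0]), method="Nelder-Mead")
print("fitted parameters:", np.round(fit.x, 4))
print("residual SSE:     ", round(fit.fun, 6))
```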
APA, Harvard, Vancouver, ISO, and other styles
13

Al-Humaidan, Fahad Mohammed. "Evaluation and development models for business processes." Thesis, University of Newcastle Upon Tyne, 2006. http://hdl.handle.net/10443/1947.

Full text
Abstract:
Most organisations are working hard to improve their performance and to achieve competitive advantage over their rivals. They may accomplish these ambitions through carrying out their business processes more effectively. Hence it is important to consider such processes and look for ways in which they can be improved. Any organisational business process encompasses several elements that interact and collaborate with each other to achieve the required objectives. These elements can be classified into hard aspects, which deal with tangible issues related to the software system or the technology in general, and soft aspects, which deal with issues related to the human part of the business process. If the business process needs to be analysed and redesigned to improve its performance, it is important to use a suitable approach or intervention that takes into account all of these elements. This thesis proposes an approach to investigate organisational business processes by considering both soft and hard aspects. The approach, Soft Workflow Modelling (SWfM), is developed as a result of reviewing several workflow products and models using a developed workflow perspectives framework which involves several perspectives covering the soft and hard aspects of the workflow system. The SWfM approach models the organisational business process as a workflow system by handling the various perspectives of the workflow perspectives framework. This approach combines the Soft Systems Methodology (SSM) with the Unified Modelling Language (UML), as a standard modelling language of the object-oriented paradigm. The basic framework adopted is that of SSM with the inclusion of UML diagrams and techniques to deal with the aspects that SSM cannot handle. The approach also supports SSM by providing a developed tool to assist in constructing a conceptual model which is considered as the basis to model the workflow system. A case study is developed for illustrative purposes.
APA, Harvard, Vancouver, ISO, and other styles
14

Booth, Jonathan Alan Nesbit. "The evaluation of project models in construction." Thesis, University of Nottingham, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hitchen, Barry. "Behavioural evaluation of animal models of diabetes." Thesis, Ulster University, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.744769.

Full text
Abstract:
Diabetes is considered a prominent threat to the worldwide population, and if left untreated it may result in a number of severe complications, cognitive deterioration, and death. For the most prevalent form of the disease, type 2 diabetes, the main contributing factors to the development of the disease are being overweight and obese. There are a number of mediating factors in the relationship between excessive weight gain and the subsequent development of diabetes, the most widely supported of which are the typical eating behaviours of the individual. While the physiological characteristics of diabetes are well known, much less is understood in relation to the behavioural consequences. Animal models may aid our understanding of a disease, but the animal model used should replicate not only the physiological symptoms but also the behavioural characteristics. The studies discussed within this thesis encompass a thorough behavioural evaluation of two of the most commonly used diet-induced and chemically induced animal models of diabetes: the high-fat diet model of obesity-induced diabetes and Streptozotocin-induced diabetes. The main focus of investigation was the impact of the disease on the food preferences and motivations of mice to obtain food, using a classical method for assessing preference (T-maze) and motivation to respond for different appetitive stimuli on an operant conditioning schedule of food reinforcement (progressive ratio schedule, PR). During the latter phase of experiments, a range of behavioural processes (e.g. food deprivation alterations, prefeeding with or without satiation, and extinction) were also assessed in relation to how these may be affected by the presence of diabetes. In the final study, a new model of PR performance was applied to the results, with the intention of addressing issues relating to the accurate measurement of motivation under a PR schedule. Orderly behavioural data were obtained, but neither model of diabetes showed clear effects on food motivation. Results are presented and discussed in relation to the limitations and implications of the research.
APA, Harvard, Vancouver, ISO, and other styles
16

Paulsson, Felix, and Issa Bitar. "An evaluation of coverage models for LoRa." Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-54152.

Full text
Abstract:
LoRaWAN is a wireless network technology based on the LoRa modulation technology. When planning such a network, it is important to estimate the network's coverage, which can be done by calculating path loss. To do this, one can utilize empirical models of radio wave propagation. Previous research has investigated the accuracy of such empirical models for LoRa inside cities. However, as the accuracy of these models is heavily dependent on the exact characteristics of the environment, it is of interest to validate these results. In addition, the effect of base station elevation on the models' accuracy has yet to be researched. Following the problems stated above, the purpose of this study is to investigate the accuracy of empirical models of radio wave propagation for LoRa in an urban environment. More specifically, we investigate the accuracy of the models and the effect of base station elevation on the models' accuracy. The latter is the main contribution of this study. To perform these investigations, a quantitative experiment was conducted in the city of Jönköping, Sweden. In the experiment a base station was positioned at elevations of 30, 23, and 15 m. The path loss was measured from 20 locations around the base station for each level of elevation. The measured path loss was then compared to predictions from three popular empirical models: the Okumura-Hata model, the COST 231-Walfisch-Ikegami model, and the 3GPP UMa NLOS model. Our analysis showed a clear underestimation of the path loss for all models. We conclude that for an environment and setup similar to ours, the models underestimate the path loss by approximately 20 dB. They can be improved by adding a constant correction value, resulting in a mean absolute error of at least 3.7-5.6 dB. We also conclude that the effect of base station elevation varies greatly between different models. The 3GPP model underestimated the path loss equally for all elevations and could therefore easily be improved by a constant correction value. This resulted in a mean absolute error of approximately 4 dB for all elevations.
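For reference, the Okumura-Hata model named above has a closed-form urban path-loss expression that is easy to compute. The sketch below uses the standard small/medium-city mobile-antenna correction and hypothetical link parameters loosely matching the experiment (868 MHz LoRa, 30 m base station); the +20 dB term is the empirical correction suggested by the thesis, not part of the model itself.

```python
import math

def okumura_hata_urban(f_mhz: float, h_base_m: float, h_mobile_m: float, d_km: float) -> float:
    """Median urban path loss (dB), Okumura-Hata, small/medium-city correction.

    Valid roughly for 150-1500 MHz, base heights 30-200 m, distances 1-20 km.
    """
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m)
            - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Hypothetical LoRa link: 868 MHz, base station at 30 m, node at 1.5 m.
for d in (1.0, 2.0, 5.0):
    pl = okumura_hata_urban(868.0, 30.0, 1.5, d)
    print(f"{d:4.1f} km -> predicted path loss {pl:6.1f} dB "
          f"(plus ~20 dB empirical correction: {pl + 20:6.1f} dB)")
```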
APA, Harvard, Vancouver, ISO, and other styles
17

Ayad, Sarah. "Business Process Models Quality : evaluation and improvement." Thesis, Paris, CNAM, 2013. http://www.theses.fr/2013CNAM0922/document.

Full text
Abstract:
The scientific problem addressed concerns the modelling and improvement of business processes. This problem is of growing interest to companies, which are becoming aware of the undeniable impact that a better understanding and better management of business processes (BP) can have on the effectiveness, consistency, and transparency of their activities. The work undertaken in this thesis aims to propose a method and a tool for measuring and improving the quality of business process models. The originality of the approach is that it targets not only syntactic quality but also semantic and pragmatic quality, relying in particular on domain knowledge.
In recent years the problems related to modeling and improving business processes have been of growing interest. Indeed, companies are realizing the undeniable impact of a better understanding and management of business processes (BP) on the effectiveness, consistency, and transparency of their business operations. BP modeling aims at a better understanding of processes, allowing deciders to achieve the strategic goals of the company. However, inexperienced systems analysts often lack domain knowledge, and this affects the quality of the models they produce. Our work targets the problem of business process modeling quality by proposing an approach encompassing methods and tools for the measurement and improvement of the quality of business process (BP) models. We propose to support this modeling effort with an approach that uses domain knowledge to improve the semantic quality of BP models. The main contribution of this thesis is fourfold: 1. Exploiting the IS domain knowledge: a business process metamodel is identified, and semantics are added to the metamodel by means of OCL constraints. 2. Exploiting the application domain knowledge: this relies on domain ontologies; an alignment between the concepts of both metamodels is defined and illustrated. 3. Design of a guided quality process encompassing methods and techniques to evaluate and improve business process models; the process proposes many quality constraints and metrics in order to evaluate the quality of the models, and finally proposes relevant recommendations for improvement. 4. Development of a software prototype, "BPM-Quality"; the prototype implements all the above-mentioned artifacts and proposes a workflow enabling its users to evaluate and improve CMs efficiently and effectively. We conducted a survey to validate the selection of the quality constraints through a first experiment, and also conducted a second experiment to evaluate the efficacy and efficiency of our overall approach and the proposed improvements.
APA, Harvard, Vancouver, ISO, and other styles
18

Zhang, Ruomeng. "Evaluation of Current Concrete Creep Prediction Models." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461963600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Preacher, Kristopher J. "The Role of Model Complexity in the Evaluation of Structural Equation Models." The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1054130634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Rado, Omesaad A. M. "Contributions to evaluation of machine learning models. Applicability domain of classification models." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/18447.

Full text
Abstract:
Artificial intelligence (AI) and machine learning (ML) present application opportunities and challenges that can be framed as learning problems. The performance of machine learning models depends on the algorithms and the data. Moreover, learning algorithms create a model of reality through learning and testing with data, and their performance reflects the degree of agreement between their assumed model and reality. ML algorithms have been successfully used in numerous classification problems. With the growing popularity of using ML models for many purposes in different domains, the validation of such predictive models is now required more formally. Traditionally, there are many studies related to model evaluation, robustness, reliability, and the quality of the data and the data-driven models. However, those studies do not yet consider the concept of the applicability domain (AD). The issue is that the AD is often not well defined, or not defined at all, in many fields. This work investigates the robustness of ML classification models from the applicability domain perspective. A standard definition of the applicability domain concerns the regions in which the model provides results with a specific reliability. The main aim of this study is to investigate the connection between the applicability domain approach and classification model performance. We examine the usefulness of assessing the AD for the classification model, i.e. the reliability, reuse, and robustness of classifiers. The work is implemented using three approaches, conducted in three different attempts: firstly, assessing the applicability domain for the classification model; secondly, investigating the robustness of the classification model based on the applicability domain approach; thirdly, selecting an optimal model using Pareto optimality. The experiments in this work are illustrated by considering different machine learning algorithms for binary and multi-class classification on healthcare datasets from public benchmark data repositories. In the first approach, the decision tree algorithm (DT) is used for the classification of data in the classification stage. A feature selection method is applied to choose features for classification. The obtained classifiers are used in the third approach for the selection of models using Pareto optimality. The second approach is implemented in three steps, namely building the classification model, generating synthetic data, and evaluating the obtained results. The results obtained from the study provide an understanding of how the proposed approach can help to define the model's robustness and the applicability domain, for providing reliable outputs. These approaches open opportunities for classification data and model management. The proposed algorithms are implemented through a set of experiments on the classification accuracy of instances that fall in the domain of the model. For the first approach, by considering all the features, the highest accuracy obtained is 0.98, with a threshold average of 0.34, for the Breast Cancer dataset. After applying the recursive feature elimination (RFE) method, the accuracy is 0.96% with a 0.27 threshold average. For the robustness of the classification model based on the applicability domain approach, the minimum accuracy is 0.62% for the Indian Liver Patient data at r=0.10, and the maximum accuracy is 0.99% for the Thyroid dataset at r=0.10.
For the selection of an optimal model using Pareto optimality, the optimally selected classifier gives an accuracy of 0.94% with a 0.35 threshold average. This research investigates critical aspects of the applicability domain as related to the robustness of classification ML algorithms. However, the performance of machine learning techniques depends on the degree to which the model's predictions are reliable. In the literature, the robustness of an ML model can be defined as the ability of the model to provide a testing error close to the training error. Moreover, these properties can describe the stability of model performance when tested on new datasets. In conclusion, this thesis introduced the concept of the applicability domain for classifiers and tested the use of this concept with case studies on health-related public benchmark datasets.
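The first approach described, a decision-tree classifier combined with recursive feature elimination (RFE), maps directly onto standard scikit-learn components. The sketch below is a generic reconstruction on scikit-learn's built-in breast-cancer data, not the thesis's exact pipeline, and the applicability-domain thresholding itself is only indicated via the per-instance confidence it would act on.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: decision tree on all features.
tree_all = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("all-features accuracy:", round(accuracy_score(y_test, tree_all.predict(X_test)), 3))

# Recursive feature elimination down to 10 features, then refit.
selector = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=10).fit(X_train, y_train)
tree_rfe = DecisionTreeClassifier(random_state=0).fit(X_train[:, selector.support_], y_train)
print("RFE-features accuracy:",
      round(accuracy_score(y_test, tree_rfe.predict(X_test[:, selector.support_])), 3))

# An applicability-domain style step would keep only test instances whose predicted
# class probability exceeds a chosen threshold and report accuracy inside that domain;
# here we simply show the per-instance confidence such thresholds act on.
proba = tree_rfe.predict_proba(X_test[:, selector.support_]).max(axis=1)
print("share of test instances with confidence >= 0.9:", round((proba >= 0.9).mean(), 3))
```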
Ministry of Higher Education in Libya
APA, Harvard, Vancouver, ISO, and other styles
21

Ansin, Elin. "An evaluation of the Cox-Snell residuals." Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-256665.

Full text
Abstract:
It is common practice to use Cox-Snell residuals to check the overall goodness of fit of survival models. We evaluate the presumed property that residuals from a well-fitting model are unit exponentially distributed, and examine what happens under some violations of the model. This is done graphically, with the usual plots of Cox-Snell residuals, and formally, using the Kolmogorov-Smirnov goodness of fit test. It is observed that residuals from a correctly fitted model follow the unit exponential distribution. However, the Cox-Snell residuals do not seem to be sensitive to the violations of the model.
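The check being evaluated, that Cox-Snell residuals from a well-fitting model are unit exponential, can be sketched on simulated data. To stay self-contained the example fits a simple parametric exponential regression rather than a Cox model; the residual definition, the fitted cumulative hazard evaluated at each observed time, is the same idea, and the Kolmogorov-Smirnov test is applied as described above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

rng = np.random.default_rng(3)

# Simulated uncensored survival data: hazard depends log-linearly on one covariate.
n = 500
x = rng.normal(size=n)
true_rate = np.exp(-1.0 + 0.5 * x)
t = rng.exponential(1.0 / true_rate)

# Fit the exponential regression model by maximum likelihood.
def neg_log_lik(beta):
    rate = np.exp(beta[0] + beta[1] * x)
    return -np.sum(np.log(rate) - rate * t)

fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
rate_hat = np.exp(fit.x[0] + fit.x[1] * x)

# Cox-Snell residuals: fitted cumulative hazard at each observed time.
residuals = rate_hat * t

# For a correctly specified model they should look unit exponential.
ks = kstest(residuals, "expon")
print("fitted coefficients:", np.round(fit.x, 3))
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.3f}")
```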
APA, Harvard, Vancouver, ISO, and other styles
22

Barchyn, Thomas Edward, and University of Lethbridge Faculty of Arts and Science. "Field-based aeolian sediment transport threshold measurement : sensors, calculation methods, and standards as a strategy for improving inter-study comparison." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Geography, 2010, 2010. http://hdl.handle.net/10133/2616.

Full text
Abstract:
Aeolian sediment transport threshold is commonly defined as the minimum wind speed (or shear stress) necessary for wind-driven sediment transport. Threshold is a core parameter in most models of aeolian transport. Recent advances in methodology for field-based measurement of threshold show promise for improving parameterizations; however, investigators have varied in choice of method and sensor. The impacts of modifying measurement system configuration are unknown. To address this, two field tests were performed: (i) comparison of four piezoelectric sediment transport sensors, and (ii) comparison of four calculation methods. Data from both comparisons suggest that threshold measurements are non-negligibly modified by measurement system configuration and are incomparable. A poor understanding of natural sediment transport dynamics suggests that development of calibration methods could be difficult. Development of technical standards was explored to improve commensurability of measurements. Standards could assist future researchers with data syntheses and integration.
xi, 108 leaves : ill. ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
23

Santer, B. D. "Regional validation of General Circulation Models." Thesis, University of East Anglia, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383549.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Shim, Minsuk. "Models comparing estimates of school effectiveness based on cross-sectional and longitudinal designs." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/31519.

Full text
Abstract:
The primary purpose of this study is to compare the six models (cross-sectional, two-wave, and multiwave, with and without controls) and determine which of the models most appropriately estimates school effects. For a fair and adequate evaluation of school effects, this study considers the following requirements of an appropriate analytical model. First, a model should have controls for students' background characteristics. Without controlling for the initial differences of students, one may not analyze the between-school differences appropriately, as students are not randomly assigned to schools. Second, a model should explicitly address individual change and growth rather than status, because students' learning and growth is the primary goal of schooling. In other words, studies should be longitudinal rather than cross-sectional. Most researchers, however, have employed cross-sectional models because empirical methods of measuring change have been considered inappropriate and invalid. This study argues that the discussions about measuring change have been unjustifiably restricted to the two-wave model. It supports the idea of a more recent longitudinal approach to the measurement of change. That is, one can estimate individual growth more accurately using multiwave data. Third, a model should accommodate the hierarchical characteristics of school data because schooling is a multilevel process. This study employs a Hierarchical Linear Model (HLM) as the basic methodological tool to analyze the data. The subjects of the study were 648 elementary students in 26 schools. The scores on three subtests of the Canadian Tests of Basic Skills (CTBS) were collected for this grade cohort across three years (grades 5, 6 and 7). The between-school differences were analyzed using the six models previously mentioned. Students' general cognitive ability (CCAT) and gender were employed as the controls for background characteristics. Schools differed significantly in their average levels of academic achievement at grade 7 across the three subtests of the CTBS. Schools also differed significantly in their average rates of growth in mathematics and reading between grades 5 and 7. One interesting finding was that the bias of the unadjusted model against the adjusted model for the multiwave design was not as large as that for the cross-sectional design. Because the multiwave model deals with student growth explicitly and growth can be reliably estimated for some subject areas, even without controls for student intake, this study concluded that the multiwave models are a better design for estimating school effects. This study also discusses some practical implications and makes suggestions for further studies of school effects.
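The multiwave growth model described, with students measured at grades 5-7 and schools differing in average growth rates, corresponds to a linear mixed model with a random intercept and a random slope for grade at the school level. The statsmodels sketch below uses simulated data with illustrative effect sizes; it is a simplified two-level version (it does not also nest repeated measures within students) and is not the CTBS analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Simulate 26 schools x 25 students x 3 waves (grades 5-7), illustrative values only.
rows = []
for school in range(26):
    school_intercept = rng.normal(0, 2.0)
    school_slope = rng.normal(3.0, 0.8)          # average growth per grade differs by school
    for student in range(25):
        ability = rng.normal(0, 3.0)
        for grade in (0, 1, 2):                  # grade 5 coded as 0
            score = 50 + school_intercept + ability + school_slope * grade + rng.normal(0, 2.0)
            rows.append({"school": school, "grade": grade, "score": score})
data = pd.DataFrame(rows)

# Two-level growth model: random intercept and random slope for grade across schools.
model = smf.mixedlm("score ~ grade", data, groups=data["school"], re_formula="~grade")
result = model.fit()
print(result.summary())
```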
Faculty of Education
Department of Educational and Counselling Psychology, and Special Education (ECPS)
Graduate
APA, Harvard, Vancouver, ISO, and other styles
25

Nekvasil, Marek. "Evaluation of Semantic Applications for Enterprises." Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-77049.

Full text
Abstract:
Semantic technologies have recently come to cover ever broader areas of application deployment, and their scope is constantly increasing. The possibilities of semantic applications are now so vast that they can no longer be judged as a single market segment. The business skepticism that arises from the uncertainty of investments in such technologies is only augmented by these differences. Picking up on that, this thesis concentrates on the aspects that can enable and evaluate not only the economic efficiency of engaging semantic technologies in a business environment but also the effectiveness of doing so. This work concentrates on ways to demonstrate the differences amongst semantic applications, define their distinct use-case segments, and subsequently identify their Critical Success Factors and evaluate them against the real conditions of the applications' deployment with the participation of people involved in their development. Following the results of these interactions, this thesis presents an innovative approach to constructing models for judging the maturity of enterprises for the deployment of the respective applications, including the actual construction of these models for all the identified use-case segments. Moreover, in a later part of the work the evaluation using these models is demonstrated on the AQUA application (the outcome of a project the author personally took part in), along with additional specifics that may help the timely assessment of semantics in certain cases. The results presented in later chapters are supported by background research in the fields of Semantic Technologies and IT assessment, whose state-of-the-art methods are both described here. The usability of current standardized methods (such as those used in COBIT) for assessing semantic applications is also considered with respect to the lack of other best practices in the business deployment of semantics.
APA, Harvard, Vancouver, ISO, and other styles
26

Opstad, Thomas Andrew. "Superintendent evaluation: a review of small-school models." Vancouver, Wash.: Washington State University, 2010. http://www.dissertations.wsu.edu/Dissertations/Spring2010/t_opstad_031710.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ten Eyck, Patrick. "Problems in generalized linear model selection and predictive evaluation for binary outcomes." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/6003.

Full text
Abstract:
This manuscript consists of three papers which formulate novel generalized linear model methodologies. In Chapter 1, we introduce a variant of the traditional concordance statistic that is associated with logistic regression. This adjusted c-statistic, as we call it, utilizes the differences in predicted probabilities as weights for each event/non-event observation pair. We highlight an extensive comparison of the adjusted and traditional c-statistics using simulations and apply these measures in a modeling application. In Chapter 2, we feature the development and investigation of three model selection criteria based on cross-validatory c-statistics: Model Misspecification Prediction Error, Fitting Sample Prediction Error, and Sum of Prediction Errors. We examine the properties of the corresponding selection criteria based on the cross-validatory analogues of the traditional and adjusted c-statistics via simulation and illustrate these criteria in a modeling application. In Chapter 3, we propose and investigate an alternate approach to pseudo-likelihood model selection in the generalized linear mixed model framework. After outlining the problem with the pseudo-likelihood model selection criteria found using the natural approach to generalized linear mixed modeling, we feature an alternate approach, implemented using a SAS macro, that obtains and applies the pseudo-data from the full model for fitting all candidate models. We justify the propriety of the resulting pseudo-likelihood selection criteria using simulations and implement this new method in a modeling application.
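Both the traditional concordance statistic and the weighted variant described in Chapter 1 can be written as computations over event/non-event pairs. The sketch below implements the plain c-statistic and a weighted version that uses the difference in predicted probabilities as the pair weight, following the verbal description above; the manuscript's exact definition of the adjusted c-statistic may differ in detail.

```python
import numpy as np

def c_statistics(y: np.ndarray, p: np.ndarray) -> tuple[float, float]:
    """Traditional and probability-difference-weighted concordance statistics.

    y: binary outcomes (1 = event), p: predicted event probabilities.
    The weighted version follows the verbal description of the adjusted c-statistic:
    each event/non-event pair contributes with weight |p_event - p_nonevent|.
    """
    p_event, p_non = p[y == 1], p[y == 0]
    diff = p_event[:, None] - p_non[None, :]          # all event/non-event pairs
    concordant = (diff > 0).astype(float) + 0.5 * (diff == 0)
    weights = np.abs(diff)
    traditional = concordant.mean()
    weighted = (concordant * weights).sum() / weights.sum()
    return traditional, weighted

# Toy example with predictions from a hypothetical logistic model.
y = np.array([1, 1, 1, 0, 0, 0, 0])
p = np.array([0.9, 0.7, 0.55, 0.6, 0.4, 0.3, 0.1])
c, c_adj = c_statistics(y, p)
print(f"traditional c-statistic: {c:.3f}, weighted c-statistic: {c_adj:.3f}")
```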
APA, Harvard, Vancouver, ISO, and other styles
28

湯旭瑜 and Yuk-yue Tong. "Lay models of personality: assessment and implications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B3124368X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sze, Nang-ngai, and 施能藝. "Quantitative analyses for the evaluation of traffic safety and operations." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39707398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Barker, Colin. "Genomic evaluation of models of human disease : the Fechm1pas model of erythropoietic protoporphyria." Thesis, University of Leicester, 2006. http://hdl.handle.net/2381/30778.

Full text
Abstract:
Erythropoietic protoporphyria (EPP) is a member of the porphyria disease class and is caused by abnormal function of the enzyme ferrochelatase (Fech). In humans it has variable penetrance, but primarily leads to toxicity in the skin and liver to varying degrees. Here I have investigated the nature of EPP progression using the Fechm1PAS mouse model. This mouse carries a point mutation in the fech gene which reduces Fech activity to < 7% of wild type, with resultant loss of haem, anaemia and hepatic cholestasis. The phenotypic progression of Fechm1PAS/m1PAS mice was established using pathology and clinical biochemistry from 18 days gestation to 32 weeks of age. Pathological changes were found from 4 weeks, with biochemical and differential gene expression (DGE) analysis showing intrahepatic cholestasis from birth. Genomic data from cDNA microarrays were derived and analysed with the DGE by phenotypic anchoring. DGE was observed in all processes responsible for cell protection and epigenetic regulation. The DGE analysis has led me to hypothesise that the porphyria leads to a chronic reactive oxygen species (ROS) attack, causing DNA damage and eventually leading to hepatocarcinoma. This was indicated by changes in the GSH, cytochrome P450, circadian rhythm and methylation pathways. DGE in these processes included downregulation of DNA methyltransferases (Dnmt1, Dnmt6a and Dnmt6b), and upregulation of cytochrome oxidase (Cox and Por) and GSH metabolism transcription factors (Gclc and Gclm). The findings made here contribute further to the understanding of EPP progression and the relationship between phenotype and DGE in EPP.
APA, Harvard, Vancouver, ISO, and other styles
31

Baverel, Paul. "Development and Evaluation of Nonparametric Mixed Effects Models." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-144583.

Full text
Abstract:
A nonparametric population approach is now accessible to a more comprehensive network of modelers given its recent implementation into the popular NONMEM application, previously limited in scope to standard parametric approaches for the analysis of pharmacokinetic and pharmacodynamic data. The aim of this thesis was to assess the relative merits and downsides of nonparametric models in a nonlinear mixed effects framework in comparison with a set of parametric models developed in NONMEM, based on real datasets and when applied to simple experimental settings, and to develop new diagnostic tools adapted to nonparametric models. Nonparametric models as implemented in NONMEM VI showed better overall simulation properties and predictive performance than standard parametric models, with significantly less bias and imprecision in outcomes of numerical predictive check (NPC) from 25 real data designs. This evaluation was carried on with a simulation study comparing the relative predictive performance of nonparametric and parametric models across three different validation procedures assessed by NPC. The usefulness of a nonparametric estimation step in diagnosing distributional assumptions of parameters was then demonstrated through the development and application of two bootstrapping techniques aiming to estimate the imprecision of nonparametric parameter distributions. Finally, a novel covariate modeling approach intended for nonparametric models was developed, with good statistical properties for the identification of predictive covariates. In conclusion, by relaxing the classical normality assumption in the distribution of model parameters, and given the set of diagnostic tools developed, the nonparametric approach in NONMEM constitutes an attractive alternative to the routinely used parametric approach and an improvement for efficient data analysis.
APA, Harvard, Vancouver, ISO, and other styles
32

Villagomez, Garcia Ivan, and der Meulen Steffan Van. "The evaluation of business models by venture capitalists." Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-19384.

Full text
Abstract:
The purpose of this study is to identify the role a business model plays for Venture Capitalists (VCs) when analysing a new venture proposal for funding. The primary data for this research was collected through six qualitative interviews conducted during a two-month period. Furthermore, the gathered data was evaluated against the current literature, which describes the term "business model" as well as specific criteria for it. The findings from this research demonstrate that the perception of the role of a business model is strongly similar among the VCs who were interviewed. They all argued that a business model plays a secondary role in the evaluation process and see it as part of the business plan. At the same time, this research could pinpoint the fact that no specific instrument including explicit evaluation criteria is currently being implemented by the VCs in question in order to evaluate a business model. Notwithstanding, this study cannot be generalized, since the pool of participants included only six Investment Managers working in Venture Capital funds in Sweden and Mexico. Even so, despite the geographical differences, the evaluation process proved quite similar among them. Evidence from this study has demonstrated that the current ambiguity of the meaning of the term "business model" is the most frequently perceived challenge to its evaluation. This encouraged our interest in shedding more light on the topic.
APA, Harvard, Vancouver, ISO, and other styles
33

Hussain, Arif. "Evaluation of different models for flow through paper." Thesis, Karlstads universitet, Fakulteten för teknik- och naturvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-7484.

Full text
Abstract:
To understand the energy use during the vacuum dewatering and through-air drying processes, air flow through the highly complex structure of paper has been investigated. Experiments were performed for a wide range of pressure drops and basis weights. In addition, the pulp samples were refined to three different beating degrees. The calculated Reynolds numbers, based on the fiber diameter, vary over a wide range between 0.0002 and 80. The majority of data at low Reynolds numbers (below approximately 0.2) agree rather well with Darcy's law, so that the air flow is proportional to the pressure drop and inversely proportional to the grammage. However, the data at high Reynolds numbers, obtained from air flow experiments using vacuum dewatering equipment in which large amounts of air are sucked through the web, have greater novelty value. Different mathematical models for flow through porous media are investigated to see how well they can describe the experimental findings at high Reynolds numbers. It was found that for high Reynolds number flow, the flow rate is a unique function of the quotient of pressure drop and grammage for a specific degree of beating. It was also found that increased beating leads to a reduced air flow through the sample. However, no clear conclusion could be drawn regarding the importance of compressibility and inertial forces when modeling the process.
To better describe the energy use in vacuum dewatering and through-air drying, the air flow through the complex paper material was measured for a wide range of pressure drops and basis weights. The influence of beating on the air flow through the paper structure was also investigated. The Reynolds number was determined based on the fibre diameter and, for the reported experiments, varies over a wide range between 0.0002 and 80. Most of the measured flow rates at low Reynolds numbers (up to approximately 0.2) agree reasonably well with Darcy's law, so that the air flow is proportional to the pressure drop and inversely proportional to the basis weight. At higher Reynolds numbers, air flow measurements were performed in equipment designed to study vacuum dewatering. During vacuum dewatering, large amounts of air flow through the sheet, and these data are of greater novelty value. Different mathematical models for flow through porous media are evaluated with respect to how well they describe the measured data at high Reynolds numbers. It turns out that even at high Reynolds numbers the flow rate is a unique function of the ratio of pressure drop to basis weight for a specific degree of beating. Another result is that increased beating leads to a reduced flow through the sheet. No definitive conclusions could be drawn regarding the contributions of friction and inertial forces to the total pressure drop across the sheet.
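For reference, the regimes discussed in the abstract can be written out in standard porous-media notation; the symbols (superficial velocity u, permeability k, air viscosity mu and density rho, sheet thickness L, grammage w, fibre diameter d_f, inertial coefficient beta) are generic textbook choices, and the Forchheimer form is only one of the candidate high-Reynolds-number models, not necessarily the one favoured in the thesis.

```latex
% Low-Reynolds-number regime (Darcy's law); with sheet thickness L roughly
% proportional to grammage w at a given density, the superficial velocity u
% becomes proportional to the ratio of pressure drop to grammage:
\[
  u = \frac{k}{\mu}\,\frac{\Delta p}{L},
  \qquad L \propto w \;\Rightarrow\; u \propto \frac{\Delta p}{w}.
\]

% Reynolds number based on the fibre diameter, used to delimit the regimes:
\[
  \mathrm{Re} = \frac{\rho\, u\, d_f}{\mu}.
\]

% One common candidate model for the high-Re data is a Forchheimer-type
% extension, which adds an inertial (quadratic) term to the viscous one:
\[
  \frac{\Delta p}{L} = \frac{\mu}{k}\, u + \beta\, \rho\, u^{2}.
\]
```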
APA, Harvard, Vancouver, ISO, and other styles
34

Reinecke, Philipp [Verfasser]. "Efficient system evaluation using stochastic models / Philipp Reinecke." Berlin : Freie Universität Berlin, 2013. http://d-nb.info/1033306738/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Gupta, Gaurav. "Models and protocols for evaluation of fingerprint sensors." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4361.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2005.
Title from document title page. Document formatted into pages; contains vi, 78 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 71-75).
APA, Harvard, Vancouver, ISO, and other styles
36

Sallhammar, Karin. "Stochastic Models for Combined Security and Dependability Evaluation." Doctoral thesis, Norwegian University of Science and Technology, Department of Telematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1927.

Full text
Abstract:

Security is a topic of ever increasing interest. Today it is widely accepted that, due to the unavoidable presence of vulnerabilities, design faults and administrative errors, an ICT system will never be totally secure. Connecting a system to a network will necessarily introduce a risk of inappropriate access resulting in disclosure, corruption and/or loss of information. Therefore, the security of a system should ideally be interpreted in a probabilistic manner. More specifically, there is an urgent need for modelling methods that provide operational measures of security. Dependability, on the other hand, is the ability of a computer system to deliver service that can justifiably be trusted. In a dependability context one distinguishes between accidental faults, which are modelled as random processes, and intentional faults, i.e., attacks, which in most cases are not considered at all. A major drawback of this approach is that attacks may in many cases be the dominating failure source for today’s networked systems. The classical way of dependability evaluation can therefore be very deceptive: highly dependable systems may in reality fail much more frequently than expected, due to exploitation by attackers. To be considered trustworthy, a system must be both dependable and secure. However, these two aspects have so far tended to be treated separately. A unified modelling framework for security and dependability evaluation would be advantageous from both points of view. The security community can benefit from the mature dependability modelling techniques, which can provide the operational measures that are so desirable today. On the other hand, by adding hostile actions to the set of possible fault sources, the dependability community will be able to make more realistic models than the ones that are currently in use. This thesis proposes a stochastic modelling approach, which can be used to predict a system’s security and dependability behaviour. As will be seen, the basic model has a number of possible applications. For example, it can be used as a tool for trade-off analysis of security countermeasures, or it can be used as a basis for real-time assessment of the system trustworthiness.
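One way to make the operational, probabilistic measures discussed above concrete is a small continuous-time Markov chain whose transitions mix accidental failures with attacker-driven ones; the three states, the generator matrix and every rate below are invented purely for illustration and do not come from the thesis.

```python
import numpy as np

# States: 0 = good, 1 = degraded (e.g. partially compromised), 2 = failed.
# Rates (per hour) are purely illustrative.
lam_fail, lam_attack, mu_repair, lam_escalate, mu_restore = 1e-3, 5e-3, 0.1, 2e-2, 0.05

Q = np.array([
    [-(lam_fail + lam_attack), lam_attack,                  lam_fail],
    [mu_repair,               -(mu_repair + lam_escalate),  lam_escalate],
    [mu_restore,               0.0,                         -mu_restore],
])

# The steady-state distribution pi solves pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.append(np.zeros(len(Q)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi)
print("long-run unavailability (time in failed state):", pi[2])
```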


Paper IV © Academy Publisher
APA, Harvard, Vancouver, ISO, and other styles
37

Devadasu, Venkat Ratnam. "Evaluation of antioxidants encapsulated nanoparticles in animal models." Thesis, University of Strathclyde, 2011. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=16829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lu, Maozu. "The encompassing principle and evaluation of econometric models." Thesis, University of Southampton, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ortega, Omayra Y. "Evaluation of rotavirus models with coinfection and vaccination." Diss., University of Iowa, 2008. http://ir.uiowa.edu/etd/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

BERTRAN, ISELA MACIA. "EVALUATION OF SOFTWARE QUALITY BASED ON UML MODELS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13748@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
One of the goals of software engineering is to build software of high quality at the lowest possible cost and in the shortest possible time. In this context, many techniques for controlling the quality of software design have been defined. In addition, metrics-based mechanisms for detecting problems have also been defined. Most of these techniques and mechanisms focus on analysing the source code. However, to reduce needless rework, it is important to use quality-analysis techniques capable of detecting design problems already in the system models. This dissertation proposes: (i) a set of detection strategies to identify, in UML models, specific design problems that recur in the literature: Long Parameter List, God Class, Data Class, Shotgun Surgery, Misplaced Class and God Package; and (ii) the use of the QMOOD quality model to evaluate software design from its class diagrams. To automate the application of these mechanisms, a tool was implemented: QCDTool. The mechanisms developed were evaluated in two experimental studies. The first study assessed the accuracy, precision and recall of the proposed detection strategies, showing the benefits and drawbacks of applying them to models. The second study assessed the usefulness of applying the QMOOD quality model to UML diagrams, showing that it was possible to identify, in class diagrams, variations in design properties and, consequently, in the quality attributes of the analysed systems.
One of the goals of software engineering is the development of high quality software at a small cost and in a short period of time. In this context, several techniques have been defined for controlling the quality of software designs. Furthermore, many metrics-based mechanisms have been defined for detecting software design flaws. Most of these mechanisms and techniques focus on analyzing the source code. However, in order to reduce unnecessary rework, it is important to use quality analysis techniques that allow the detection of design flaws earlier in the development cycle. We believe that these techniques should analyze design flaws starting from software models. This dissertation proposes: (i) a set of strategies to detect, in UML models, specific and recurrent design problems: Long Parameter List, God Class, Data Class, Shotgun Surgery, Misplaced Class and God Package; and (ii) the use of the QMOOD quality model to analyze class diagrams. To automate the application of these mechanisms we implemented a tool: the QCDTool. The detection strategies and the QMOOD model were evaluated in the context of two experimental studies. The first study analyzed the accuracy, precision and recall of the proposed detection strategies. The second study analyzed the utility of using the QMOOD quality model on class diagrams. The results of the first study have shown the benefits and drawbacks of applying some of the proposed detection strategies to class diagrams. The second study shows that it was possible to identify, based on class diagrams, variations of the design properties and, consequently, of the quality attributes in the analyzed systems.
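A detection strategy of the kind proposed above is, in essence, a logical combination of metric thresholds evaluated per class. The sketch below uses the widely cited God Class rule (high functional complexity, many accesses to foreign data, low cohesion); the metric names, threshold values and input format are illustrative assumptions, not QCDTool's actual implementation.

```python
# Metrics per class, e.g. extracted from a UML class diagram or a source model:
# WMC  - weighted methods per class, ATFD - accesses to foreign data,
# TCC  - tight class cohesion (0..1).
classes = {
    "OrderManager": {"WMC": 52, "ATFD": 9, "TCC": 0.21},
    "Money":        {"WMC": 6,  "ATFD": 0, "TCC": 0.80},
}

def is_god_class(m, wmc_very_high=47, atfd_few=3, tcc_one_third=1 / 3):
    """Marinescu-style detection strategy: a class that centralises intelligence."""
    return m["WMC"] >= wmc_very_high and m["ATFD"] > atfd_few and m["TCC"] < tcc_one_third

suspects = [name for name, m in classes.items() if is_god_class(m)]
print(suspects)  # ['OrderManager']
```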
APA, Harvard, Vancouver, ISO, and other styles
41

OLIVEIRA, KLEINNER SILVA FARIAS DE. "EMPIRICAL EVALUATION OF EFFORT ON COMPOSING DESIGN MODELS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28757@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
Model composition plays a fundamental role in many software engineering activities, for example, evolution and reconciliation of conflicting models developed in parallel by different development teams. However, developers have difficulty performing cost-benefit analyses and understanding the real composition effort. They are therefore left without any practical knowledge of how much is invested, beyond the often diverging estimates of evangelists. If composition effort is high, then potential benefits such as productivity gains may be compromised. This inability to evaluate composition effort stems from three problems: (i) current evaluation approaches are inadequate for measuring the concepts found in composition, such as effort and conflict; (ii) researchers do not know which factors can influence composition effort in practice, examples of such factors being the modelling language and the composition techniques responsible for manipulating the models; and (iii) knowledge of how such factors affect composition effort is lacking. This thesis therefore presents an approach for evaluating model composition effort derived from a set of experimental studies. The main contributions are: (i) a quality model to support the evaluation of model composition effort; (ii) practical knowledge about composition effort and the impact of the factors that affect it; and (iii) guidelines on how to evaluate composition effort, minimise error-proneness, and reduce the negative effects of these factors in model composition practice.
Model composition plays a central role in many software engineering activities such as evolving models to add new features and reconciling conflicting design models developed in parallel by different development teams. As model composition is usually an error-prone and effort-consuming task, its potential benefits, such as gains in productivity, can be compromised. However, there is currently no empirical knowledge about the effort required to compose design models. Only the feedback of model composition evangelists is available, and it often diverges. Consequently, developers are unable to conduct any cost-effectiveness analysis or to identify, predict, or reduce composition effort. The inability to evaluate composition effort is due to three key problems. First, the current evaluation frameworks do not consider fundamental concepts in model composition such as conflicts and inconsistencies. Second, researchers and developers do not know what factors can influence the composition effort in practice. Third, practical knowledge about how such influential factors may affect the developers' effort is severely lacking. In this context, the contributions of this thesis are threefold: (i) a quality model for supporting the evaluation of model composition effort, (ii) practical knowledge, derived from a family of quantitative and qualitative empirical studies, about model composition effort and its influential factors, and (iii) insight into how to evaluate model composition effort and tame the side effects of such influential factors.
APA, Harvard, Vancouver, ISO, and other styles
42

Erninger, Anders, and Moulham Alsakati. "Storage Capacity Evaluation of Autoassociative Neural Network Models." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297684.

Full text
Abstract:
Autoassociative memory models have been an attractive area for researchers lately. Their potential for modelling some aspects of the human memory puts them in the spotlight for deeper research. In this project, we study the effects of making an autoassociative memory network capture some functional aspects of the human memory, by applying modifications to a Hopfield network with the Hebbian learning rule. The modifications used in this project are implementing Oja's rule in the learning rule as well as creating a sparse network instead of an all-to-all connected one. We then evaluate the storage capacity for different autoassociative memory networks based on the Hopfield network model and the Hebbian learning rule. We found that Oja's rule significantly improved the network's ability to recall newly learned patterns and slightly improved its ability to retain old memories. Sparsifying the network decreased the overall storage capacity depending on the number of connections removed.
Autoassociative neural networks have recently been an attractive area for researchers. Their potential for modelling some aspects of human memory has put them in the spotlight for deeper research. In this project, we study the effects of modifying an autoassociative memory network. We evaluate the storage capacity of different autoassociative memory models based on the Hopfield network model and the Hebbian learning rule. The modifications studied in this project are the implementation of Oja's rule in the learning rule and the creation of a sparse network instead of an all-to-all connected one. We found that Oja's rule improved the network's ability to recall newly learned patterns and improved its ability to retain old memories. Sparsifying the network reduced the overall storage capacity in proportion to the number of connections removed.
Bachelor's degree project in electrical engineering 2020, KTH, Stockholm
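To make the two learning rules studied in this entry concrete, here is a minimal sketch of a Hopfield-style network trained with the plain Hebbian outer-product rule and, alternatively, with an incremental Oja-type update; the learning rate, initialisation, pattern sizes and the exact Oja formulation are illustrative assumptions rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 10                                  # neurons, stored +/-1 patterns
patterns = rng.choice([-1, 1], size=(P, N))

def hebbian(patterns):
    # Classic outer-product rule: W = (1/N) * sum_mu xi_mu xi_mu^T, zero diagonal.
    W = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def oja(patterns, lr=0.01, epochs=50):
    # Incremental Oja-type update per synapse: dW_ij = lr * y_i * (x_j - y_i * W_ij),
    # where y = W x; the decay term keeps the weights bounded.
    n = patterns.shape[1]
    W = rng.normal(0.0, 0.1 / np.sqrt(n), size=(n, n))   # small random start
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)
    for _ in range(epochs):
        for x in patterns:
            y = W @ x
            W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
            np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=20):
    # Synchronous sign updates until the state (hopefully) settles.
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

noisy = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)   # flip ~10% of the bits
for name, W in [("Hebbian", hebbian(patterns)), ("Oja", oja(patterns))]:
    overlap = recall(W, noisy) @ patterns[0] / N
    print(f"{name}: overlap with the stored pattern after recall = {overlap:.2f}")
```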
APA, Harvard, Vancouver, ISO, and other styles
43

Yen, Ming-Fang. "Frailty and mixture models in cancer screening evaluation." Thesis, University College London (University of London), 2004. http://discovery.ucl.ac.uk/1446761/.

Full text
Abstract:
The prevalence of screen-detected premalignancies is too large for it to be feasible that all can progress to carcinoma at the same average rate, unless that rate is very low indeed. There are likely to be frailties in the rates of progression. Failure to take heterogeneity into account will lead to biased estimates and could result in inappropriate screening policy. Approaches to the investigation of heterogeneity in the propensity for screen-detected disease to progress comprise the main objectives of this project. We used Markov models with constant hazard rates in sequence throughout the process of disease natural history within subjects, with heterogeneity terms introduced by means of (1) frailty models for continuous heterogeneity, (2) mover-stayer models for dichotomous heterogeneity (in both cases for progression between sequential homogeneous models), and (3) latent variables and states to estimate the parameters of progressive disease natural history in the presence of unobserved factors. Approaches had to be developed to address problems of tractability and estimation. For example, in the presence of frailty, solution of the Kolmogorov equations by routine matrix algebra is no longer possible. Heterogeneous models, both discrete and continuous, were found to be tractable, and estimation was possible for a variety of designs and data structures. Such models illuminated various issues in real screening applications. Quantifying heterogeneity in the potential progression of disease is of potential importance to the screening process. There are trade-offs between model complexity, identifiability and data availability, but there are clear examples, such as that of cervical screening, where a heterogeneous model improves model fit and gives more realistic estimates than a homogeneous one.
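For concreteness, the two heterogeneity devices mentioned above can be written down for a single constant progression rate lambda; these are the standard gamma-frailty and mover-stayer forms, shown as generic illustrations rather than the exact multi-state models fitted in the thesis.

```latex
% Gamma frailty: subject-specific rate Z*lambda with Z ~ Gamma(mean 1, variance theta),
% so the marginal probability of not yet having progressed by time t is
\[
  S(t) = \mathbb{E}\!\left[e^{-Z\lambda t}\right] = (1 + \theta\lambda t)^{-1/\theta}.
\]

% Mover-stayer: a proportion pi of lesions never progress ("stayers"),
% while the remainder progress at rate lambda ("movers"):
\[
  S(t) = \pi + (1-\pi)\, e^{-\lambda t}.
\]
```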
APA, Harvard, Vancouver, ISO, and other styles
44

Xu, Ning. "On the Development and Evaluation of Predictive Models." Thesis, New York University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10260298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kahsu, Lidia. "Evaluation of a method for identifying timing models." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-15093.

Full text
Abstract:
In today's world, embedded systems with very large and highly configurable software, consisting of hundreds of tasks with large amounts of code and mostly with real-time constraints, have replaced traditional systems. In real-time systems, the worst-case execution time (WCET) of a program, i.e. the longest execution time of a specified task, is a crucial quantity. The WCET is determined by WCET analysis techniques, and the values produced should be tight and safe to ensure the proper timing behaviour of a real-time system. Static WCET analysis is one technique for computing upper bounds on the execution time of programs without actually executing them, relying instead on mathematical models of the software and hardware involved. Such models can be used to generate timing estimates at the source-code level when the hardware is not yet fully accessible or the code is not yet ready to compile. In this thesis, the methods used to build timing models developed by the WCET group at MDH have been assessed by evaluating the accuracy of the resulting timing models for a number of combinations of hardware architectures. Furthermore, timing model identification is extended to various hardware platforms, such as advanced architectures with caches and pipelines, and to floating-point instructions, by also selecting benchmarks that use floating-point operations.
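Timing model identification of the kind evaluated here is, at its core, a regression problem: given execution counts of program constructs and measured end-to-end times for a set of benchmarks, estimate a per-construct cost vector that predicts the time of new programs. The sketch below shows that idea with ordinary least squares; the construct set, counts and times are made up, and this is not the MDH toolchain itself.

```python
import numpy as np

# Columns: hypothetical atomic constructs whose costs we want to identify.
constructs = ["int_add", "int_mul", "load", "store", "branch"]

# Rows: benchmarks; entries: how many times each construct executes in that benchmark.
counts = np.array([
    [120, 10, 80, 40, 30],
    [300, 55, 150, 90, 60],
    [50, 5, 200, 100, 20],
    [400, 80, 60, 30, 90],
    [220, 25, 170, 110, 45],
    [90, 60, 40, 20, 70],
])

# Measured execution times (cycles) of the same benchmarks on the target hardware.
times = np.array([900, 2400, 1500, 2100, 2000, 1100], dtype=float)

# Least-squares estimate of per-construct costs: counts @ costs ~= times.
costs, residuals, rank, _ = np.linalg.lstsq(counts, times, rcond=None)

for name, c in zip(constructs, costs):
    print(f"{name:8s} ~ {c:6.2f} cycles")

# The identified cost vector can then predict the time of a new program from its
# construct counts alone, e.g. for source-level timing estimation.
new_program = np.array([150, 20, 120, 60, 40])
print("predicted cycles:", float(new_program @ costs))
```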
APA, Harvard, Vancouver, ISO, and other styles
46

Mousavi, Biouki Seyed Mohammad Mahdi. "Design and performance evaluation of failure prediction models." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25925.

Full text
Abstract:
Prediction of corporate bankruptcy (or distress) is one of the major activities in auditing firms’ risks and uncertainties. The design of reliable models to predict distress is crucial for many decision-making processes. Although a variety of models have been designed to predict distress, the relative performance evaluation of competing prediction models remains an exercise that is unidimensional in nature. To be more specific, although some studies use several performance criteria and their measures to assess the relative performance of distress prediction models, the assessment exercise of competing prediction models is restricted to their ranking by a single measure of a single criterion at a time, which leads to reporting conflicting results. The first essay of this research overcomes this methodological issue by proposing an orientation-free super-efficiency Data Envelopment Analysis (DEA) model as a multi-criteria assessment framework. Furthermore, the study performs an exhaustive comparative analysis of the most popular bankruptcy modelling frameworks for UK data. Also, it addresses two important research questions; namely, do some modelling frameworks perform better than others by design? and to what extent the choice and/or the design of explanatory variables and their nature affect the performance of modelling frameworks? Further, using different static and dynamic statistical frameworks, this chapter proposes new Failure Prediction Models (FPMs). However, within a super-efficiency DEA framework, the reference benchmark changes from one prediction model evaluation to another one, which in some contexts might be viewed as “unfair” benchmarking. The second essay overcomes this issue by proposing a Slacks-Based Measure Context-Dependent DEA (SBM-CDEA) framework to evaluate the competing Distress Prediction Models (DPMs). Moreover, it performs an exhaustive comparative analysis of the most popular corporate distress prediction frameworks under both a single criterion and multiple criteria using data of UK firms listed on London Stock Exchange (LSE). Further, this chapter proposes new DPMs using different static and dynamic statistical frameworks. Another shortcoming of the existing studies on performance evaluation lies in the use of static frameworks to compare the performance of DPMs. The third essay overcomes this methodological issue by suggesting a dynamic multi-criteria performance assessment framework, namely, Malmquist SBM-DEA, which by design, can monitor the performance of competing prediction models over time. Further, this study proposes new static and dynamic distress prediction models. Also, the study addresses several research questions as follows; what is the effect of information on the performance of DPMs? How the out-of-sample performance of dynamic DPMs compares to the out-of-sample performance of static ones? What is the effect of the length of training sample on the performance of static and dynamic models? Which models perform better in forecasting distress during the years with Higher Distress Rate (HDR)? On feature selection, studies have used different types of information including accounting, market, macroeconomic variables and the management efficiency scores as predictors. The recently applied techniques to take into account the management efficiency of firms are two-stage models. 
The two-stage DPMs incorporate multiple inputs and outputs to estimate the efficiency measure of a corporation relative to the most efficient ones, in the first stage, and use the efficiency score as a predictor in the second stage. The survey of the literature reveals that most of the existing studies have failed to compare two-stage DPMs comprehensively. Moreover, the choice of inputs and outputs for DEA models that estimate the efficiency measures of a company has been restricted to accounting variables and features of the company. The fourth essay adds to the current literature on two-stage DPMs in several respects. First, the study proposes to consider the decomposition of the Slack-Based Measure (SBM) of efficiency into Pure Technical Efficiency (PTE), Scale Efficiency (SE), and Mix Efficiency (ME), to analyse how each of these measures individually contributes to developing distress prediction models. Second, in addition to the conventional approach of using accounting variables as inputs and outputs of DEA models to estimate the measure of management efficiency, this study uses market information variables to calculate the measure of the market efficiency of companies. Third, this research provides a comprehensive analysis of two-stage DPMs through applying different DEA models at the first stage (e.g., input-oriented vs. output-oriented, radial vs. non-radial, static vs. dynamic) to compute the measures of management efficiency and market efficiency of companies, and also using dynamic and static classifier frameworks at the second stage to design new distress prediction models.
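As background to the DEA variants named above, the basic input-oriented CCR envelopment model already shows how an efficiency score is obtained: for each decision-making unit (DMU), find the smallest uniform contraction of its inputs that a non-negative combination of all units can still match while producing at least its outputs. The data and this plain CCR form are illustrative; the essays use super-efficiency, slacks-based, context-dependent and Malmquist extensions rather than this textbook version.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = decision-making units (e.g. prediction models or firms).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                            # outputs

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR score of unit o: minimise theta such that a non-negative
    combination of all units uses at most theta * inputs_o and yields at least outputs_o."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input rows:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output rows: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_input_efficiency(X, Y, o):.3f}")
```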
APA, Harvard, Vancouver, ISO, and other styles
47

Tredger, Edward. "On the evaluation of uncertainties in climate models." Thesis, London School of Economics and Political Science (University of London), 2009. http://etheses.lse.ac.uk/3002/.

Full text
Abstract:
The prediction of the Earth's climate system is of immediate importance to many decision-makers. Anthropogenic climate change is a key area of public policy and will likely have widespread impacts across the world over the 21st Century. Understanding potential climate changes, and their magnitudes, is important for effective decision making. The principal tools used to provide such climate predictions are physical models, some of the largest and most complex models ever built. Evaluation of state-of-the-art climate models is vital to understanding our ability to make statements about future climate. This Thesis presents a framework for the analysis of climate models in light of their inherent uncertainties and principles of statistical good practice. The assessment of uncertainties in model predictions to date is incomplete and warrants more attention than it has previously received. This Thesis aims to motivate a more thorough investigation of climate models as fit for use in decision support. The behaviour of climate models is explored using data from the largest ever climate modelling experiment, the climateprediction.net project. The availability of a large set of simulations allows novel methods of analysis for the exploration of the uncertainties present in climate simulations. It is shown that climate models are capable of producing very different behaviour and that the associated uncertainties can be large. Whilst no results are found that cast doubt on the hypothesis that greenhouse gases are a significant driver of climate change, the range of behaviour shown in the climateprediction.net data set has implications for our ability to predict future climate and for the interpretation of climate model output. It is argued that uncertainties should be explored and communicated to users of climate predictions in such a way that decision-makers are aware of the relative robustness of climate model output.
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Zhengrong. "Model-based Tests for Standards Evaluation and Biological Assessments." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29108.

Full text
Abstract:
Implementation of the Clean Water Act requires agencies to monitor aquatic sites on a regular basis and evaluate the quality of these sites. Sites are evaluated individually even though there may be numerous sites within a watershed. In some cases, sampling frequency is inadequate and the evaluation of site quality may have low reliability. This dissertation evaluates testing procedures for determination of site quality based on model-based procedures that allow for other sites to contribute information to the data from the test site. Test procedures are described for situations that involve multiple measurements from sites within a region and single measurements when stressor information is available or when covariates are used to account for individual site differences. Tests based on analysis of variance methods are described for fixed effects and random effects models. The proposed model-based tests compare limits (tolerance limits or prediction limits) for the data with the known standard. When the sample size for the test site is small, using model-based tests improves the detection of impaired sites. The effects of sample size, heterogeneity of variance, and similarity between sites are discussed. Reference-based standards and corresponding evaluation of site quality are also considered. Regression-based tests provide methods for incorporating information from other sites when there is information on stressors or covariates. Extension of some of the methods to multivariate biological observations and stressors is also discussed. Redundancy analysis is used as a graphical method for describing the relationship between biological metrics and stressors. A clustering method for finding stressor-response relationships is presented and illustrated using data from the Mid-Atlantic Highlands. Multivariate elliptical and univariate regions for assessment of site quality are discussed.
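As a worked illustration of the limit-versus-standard comparison described above, consider a single site with n observations, sample mean and standard deviation, and a numeric standard C for which high values indicate impairment; this is the generic normal-theory prediction limit, and in the model-based versions studied in the dissertation the mean and variance would instead come from a fitted fixed- or random-effects model that borrows information from other sites.

```latex
% Upper 100(1-alpha)% prediction limit for one future observation at the site:
\[
  U = \bar{x} + t_{1-\alpha,\,n-1}\; s \sqrt{1 + \tfrac{1}{n}},
\]
% and the site is judged to meet the standard when U < C.
```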
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Berta, Paolo. "Statistical evaluation of quality in healthcare." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17173.

Full text
Abstract:
Governance of the healthcare systems is one of the most important challenges for Western countries. Within this, an accurate assessment of quality is key to policy makers and public managers, in order to guarantee equity, effectiveness and efficiency. In this thesis, we investigate aspects and methods related to healthcare evaluation by focussing on the healthcare system in Lombardy (Italy), where public and private providers compete with each other, patients are free to choose where to be hospitalized, and a pay-for-performance program was recently implemented. The general aim of this thesis is to highlight the role of statistics within a quality evaluation framework, in the form of advancing the statistical methods used to measure quality, of evaluating the effectiveness of implemented policies, and of testing the effect that mechanisms of competition and cooperation can have on the quality of a healthcare system. We first advance a new methodological approach for measuring hospital quality, providing a new tool for managers involved in performance evaluations. Multilevel models are typically used in healthcare in order to account for the hierarchical structure of the data. These models, however, do not account for unobserved heterogeneity. We therefore propose an extension of cluster-weighted models to the multilevel framework and focus in particular on the case of a binary dependent variable, which is common in healthcare. The resulting multilevel logistic cluster-weighted model is shown to perform well in a healthcare evaluation context. Secondly, we evaluate the effectiveness of a pay-for-performance program. Differently from the existing literature, in this thesis we evaluate this program on the basis of five health outcomes and across a wide range of medical conditions. The availability of data pre- and post-policy in Lombardy allows us to use a difference-in-differences approach. The statistical model includes multiple dependent outcomes, which allow quantifying the joint effect of the program, and random effects, which account for the heterogeneity of the data at the ward and hospital level. The results show that the policy has an overall positive effect on the hospitals' performance. Thirdly, we study the effect of pro-competition reforms on hospital quality. In Lombardy, competition between hospitals has been mostly driven by the adoption of a quasi-market system. Our results show that no association exists between hospital quality and competition. We speculate that this may be the result of asymmetric information, i.e. the lack of transparent information provided to citizens about the quality of hospitals. This is bound to reduce the impact of pro-competition reforms on quality and can in part explain the conflicting results found in the literature on this subject. Our results should motivate a public disclosure of quality evaluations. Regardless of the specifics of a system, hospitals are altruistic economic agents and they cooperate in order to improve their quality. In this work, we analyse the effect of cooperation on quality, taking the network of patients' transfers between hospitals as a proxy of their level of cooperation. Using the latest network models, we find that cooperation does lead to an increase in quality and should therefore be encouraged by policy makers.
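The difference-in-differences logic behind the pay-for-performance evaluation reduces, in its simplest two-period form, to a regression with a treatment-by-period interaction; the sketch below is that canonical version on simulated data and omits the thesis's random effects and multiple joint outcomes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # hospital enrolled in the programme
    "post": rng.integers(0, 2, n),      # observation after the programme started
})
# Simulated outcome: baseline 10, time trend +1, treated-group offset +0.5,
# and a true programme effect of -2 on the treated group after the start.
df["outcome"] = (10 + 1.0 * df["post"] + 0.5 * df["treated"]
                 - 2.0 * df["treated"] * df["post"] + rng.normal(0, 1, n))

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])
```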
APA, Harvard, Vancouver, ISO, and other styles
50

Ogle, Gwendolyn J. "Towards A Formative Evaluation Tool." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/27309.

Full text
Abstract:
Evaluation is an integral part of instructional design. Formative evaluation, specifically, is a phase identified in many instructional design models and is recognized as an important step for program improvement and acceptance. Although evaluation has many models and approaches, very few deal specifically with formative evaluation. Further, no one set of guidelines has been found that provides a comprehensive set of procedures for planning and implementing a formative evaluation. Encapsulating such guidelines into a “tool” that automates the process was the author’s initial idea. The author’s intent in Chapter 2 was to find a model or checklist as a stepping-off point for future formative evaluation tool development. In lieu of finding such a model, one was created (Chapter 3), drawing from several formative evaluation models and the author’s own experience. Chapter 3 also discusses the purpose behind developing a formative evaluation tool: to create an accessible, efficient, intuitive, and expedient way for instructional designers and developers to formatively evaluate their instruction or instructional materials. Chapter 4 focuses on the methodology selected to evaluate the tool, presented as a prototype. Chapter 5 presents the results of the evaluation; comments received from the expert reviewers are presented and ideas for tool improvement are generated. Finally, the Appendices include the formative evaluation tool prototype as well as the documentation that accompanied the tool during its evaluation. The initial idea behind this developmental dissertation was the creation of a formative evaluation tool. The focus of the dissertation itself, however, was on the justification for such a tool and the literature behind the making of the model and, consequently, the tool. The result of this developmental dissertation was the prototype of an evaluation tool that, with improvements and modifications, is deemed promising by the experts who reviewed it. Although designed with formative evaluation in mind, it was generally agreed that this tool could be utilized for both formative and summative evaluation. The expert review was successful not because the tool was without fault, but because the review truly achieved its purpose: to identify areas of strength and weakness and to suggest improvements.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
