To see the other types of publications on this topic, follow the link: Model for assessment of financial processes.

Dissertations / Theses on the topic 'Model for assessment of financial processes'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Model for assessment of financial processes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Shahi, Sepideh. "Business sensible design: Exploratory research on the importance of considering cost and profit for undergraduate industrial design students." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1368026398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhao, Bo. "Overview of Financial Risk Assessment." Kent State University Honors College / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ksuhonors1399203159.

Full text
3

Guo, Xu. "Fractional differential equations for modelling financial processes with jumps." HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/192.

Full text
Abstract:
The standard Black-Scholes model assumes geometric Brownian motion, so its log-returns are independent and Gaussian. However, much of the recent literature on the statistical properties of log-returns shows that this hypothesis is not always consistent with market data. One ongoing research topic is to find a better financial pricing model to replace the Black-Scholes model. In the present work, we concentrate on two typical 1-D option pricing models under general exponential Lévy processes, namely the finite moment log-stable (FMLS) model and the Carr-Geman-Madan-Yor-eta (CGMYe) model, and we also propose a multivariate CGMYe model. Both the frameworks and the numerical estimations and simulations are studied in this thesis. In future work, we shall continue to study the fractional partial differential equations (FPDEs) of the financial models, and seek efficient numerical algorithms for American pricing problems. Keywords: fractional partial differential equation; option pricing models; exponential Lévy process; approximate solution.
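As a toy illustration of why jump-augmented exponential Lévy models fit return data better than the Gaussian Black-Scholes setup, consider a minimal Merton-style jump-diffusion sketch (a simple exponential Lévy process; all parameter values below are hypothetical, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(42)
n, dt = 100_000, 1 / 252            # number of daily log-returns
sigma = 0.2                         # diffusive volatility (hypothetical)
lam, mu_j, sig_j = 5.0, -0.05, 0.1  # jump intensity, jump mean, jump std (hypothetical)

# Black-Scholes assumption: i.i.d. Gaussian log-returns
bs = rng.normal(0.0, sigma * np.sqrt(dt), n)

# Merton jump-diffusion, a simple exponential Levy process:
# the Gaussian part plus a compound-Poisson sum of normal jumps
counts = rng.poisson(lam * dt, n)
jumps = np.array([rng.normal(mu_j, sig_j, k).sum() for k in counts])
merton = bs + jumps

def excess_kurtosis(x):
    z = x - x.mean()
    return (z ** 4).mean() / (z ** 2).mean() ** 2 - 3.0

# the jump component fattens the tails relative to the Gaussian benchmark
print(excess_kurtosis(bs), excess_kurtosis(merton))
```

The compound-Poisson jump component produces the fat tails that motivate models such as FMLS and CGMYe discussed in the abstract.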
4

Koekemoer, Silma Marguerite. "Wage negotiations : a financial assessment model / by Silma Marguerite Koekemoer." Thesis, North-West University, 2008. http://hdl.handle.net/10394/3736.

Full text
Abstract:
The purpose of this study is to propose a financial assessment model for use in preparing for salary and wage negotiations. Media reports covering recent salary and wage negotiations, as well as industrial action, were studied to identify financial factors that could have an impact on the negotiation process. The financial factors were grouped into external, internal and personal financial factors. Of note is the fact that the majority of financial factors were found to be of an internal nature, that is, concerning the internal organizational environment. This group could be further divided into pure financial factors and financial factors relating to the relationship between management and the employees. The principles and calculations used in value-based management were considered, particularly the balance between direct and indirect cost in relation to the return that is generated: return creates value for the owners and shareholders, but requires input cost, which includes labour cost and should be closely managed to optimise the balance between the two. A study of negotiation preparation methodologies and practical preparations for negotiations identified financial assessment and preparation as a relatively neglected area when preparing for salary and wage negotiations. Although the reasons for this were not specifically researched, it can be deduced that time plays a role, as does the lack of ready access to models and tools to assist negotiators in this area. The findings of the theoretical research were confirmed by the empirical study, which consisted of structured interviews with persons normally involved in salary and wage negotiations. Of note is the fact that several of the interviewees indicated that they are not particularly financially literate and therefore struggle to assess the finances attached to salary and wage negotiations.
A financial assessment model is proposed which incorporates the identified financial factors and the principles of value-based management. The model has been designed to be simple to use and applicable to any industry or organization. It still needs to be extensively tested and developed into a software product that is interactive, simple to use and automates the calculations contained in the model.

Thesis (M.B.A.)--North-West University, Potchefstroom Campus, 2009.
5

Kopciuk, Karen. "Modelling Issues in Three-state Progressive Processes." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1114.

Full text
Abstract:
This dissertation focuses on several issues pertaining to three-state progressive stochastic processes. Casting survival data within a three-state framework is an effective way to incorporate intermediate events into an analysis. These events can yield valuable insights into treatment interventions and the natural history of a process, especially when the right censoring is heavy. Exploiting the uni-directional nature of these processes allows for more effective modelling of the types of incomplete data commonly encountered in practice, as well as time-dependent explanatory variables and different time scales. In Chapter 2, we extend the model developed by Frydman (1995) by incorporating explanatory variables and by permitting interval censoring for the time to the terminal event. The resulting model is quite general and combines features of the models proposed by Frydman (1995) and Kim et al. (1993). The decomposition theorem of Gu (1996) is used to show that all of the estimating equations arising from Frydman's log likelihood function are self-consistent. An AIDS data set analyzed by these authors is used to illustrate our regression approach. Estimating the standard errors of our regression model parameters, by adopting a piecewise constant approach for the baseline intensity parameters, is the focus of Chapter 3. We also develop data-driven algorithms which select changepoints for the intervals of support, based on the Akaike and Schwarz Information Criteria. A sensitivity study is conducted to evaluate these algorithms. The AIDS example is considered here once more; standard errors are estimated for several piecewise constant regression models selected by the model criteria. Our results indicate that for both the example and the sensitivity study, the resulting estimated standard errors of certain model parameters can be quite large. 
Chapter 4 evaluates the goodness-of-link function for the transition intensity between states 2 and 3 in the regression model we introduced in chapter 2. By embedding this hazard function in a one-parameter family of hazard functions, we can assess its dependence on the specific parametric form adopted. In a simulation study, the goodness-of-link parameter is estimated and its impact on the regression parameters is assessed. The logistic specification of the hazard function from state 2 to state 3 is appropriate for the discrete, parametric-based data sets considered, as well as for the AIDS data. We also investigate the uniqueness and consistency of the maximum likelihood estimates based on our regression model for these AIDS data. In Chapter 5 we consider the possible efficiency gains realized in estimating the survivor function when an intermediate auxiliary variable is incorporated into a time-to-event analysis. Both Markov and hybrid time scale frameworks are adopted in the resulting progressive three-state model. We consider three cases for the amount of information available about the auxiliary variable: the observation is completely unknown, known exactly, or known to be within an interval of time. In the Markov framework, our results suggest that observing subjects at just two time points provides as much information about the survivor function as knowing the exact time of the intermediate event. There was generally a greater loss of efficiency in the hybrid time setting. The final chapter identifies some directions for future research.
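For readers unfamiliar with progressive three-state processes, the simplest time-homogeneous Markov case can be simulated and checked against the closed-form probability of reaching the terminal state (a generic sketch with arbitrarily chosen constant intensities, not the thesis's piecewise-constant regression model):

```python
import numpy as np

rng = np.random.default_rng(1)
l12, l23 = 0.5, 0.3   # hypothetical constant intensities for 1 -> 2 and 2 -> 3
n = 200_000

t12 = rng.exponential(1 / l12, n)  # sojourn time in state 1
t23 = rng.exponential(1 / l23, n)  # sojourn time in state 2

# empirical probability of having reached the terminal state 3 by time t
t = 2.0
p_emp = np.mean(t12 + t23 <= t)

# closed form for distinct rates: P(T12 + T23 <= t) for a sum of exponentials
p_th = 1 - (l23 * np.exp(-l12 * t) - l12 * np.exp(-l23 * t)) / (l23 - l12)
print(p_emp, p_th)
```

The two values agree closely, illustrating the uni-directional (progressive) structure that the thesis exploits under censoring and covariates.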
6

Siu, Kin-bong Bonny. "Expected shortfall and value-at-risk under a model with market risk and credit risk." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37727473.

Full text
7

Walljee, Raabia. "The Levy-LIBOR model with default risk." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96957.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2015.

In recent years, the use of Lévy processes as a modelling tool has come to be viewed more favourably than the classical Brownian motion setup. The reason is that these processes provide more flexibility and capture more of the 'real world' dynamics of the model. Hence the use of Lévy processes for financial modelling is a motivating factor behind this research presentation. As a starting point, a framework for the LIBOR market model with dynamics driven by a Lévy process instead of the classical Brownian motion setup is presented. When modelling LIBOR rates, the use of a realistic driving process is important, since these rates are among the most widely used interest rates in daily financial trading. Since the financial crisis there has been an increasing demand for efficient modelling and management of risk within the market. This has further motivated the use of Lévy-based models for modelling credit-risky financial instruments. The motivation stems from the basic properties of stationary and independent increments of Lévy processes. With these properties, the model is better able to account for unexpected behaviour within the market, usually referred to as "jumps". Taking both of these factors into account, there is ample motivation for constructing a model driven by Lévy processes that can model credit risk and credit-risky instruments. The model for LIBOR rates driven by these processes was first introduced by Eberlein and Özkan (2005) and is known as the Lévy-LIBOR model. To account for the credit risk in the market, the Lévy-LIBOR model with default risk was constructed. This was initially done by Kluge (2005) and then formally introduced in the paper by Eberlein et al. (2006). This thesis aims to present the theoretical construction of the model as done in the above-mentioned references.
The construction includes the consideration of recovery rates associated with the default event, as well as a pricing formula for some popular credit derivatives.
8

Onuoha, Luke. "Systematic financial resources allocation processes : a model for critical resources mobilization and deployment for Nigerian universities." Thesis, Aston University, 2015. http://publications.aston.ac.uk/25324/.

Full text
Abstract:
Purpose – The purpose of this research is to study the perceived impact of certain factors on the resource allocation processes of Nigerian universities and to suggest a framework that will help practitioners and academics to understand and improve such processes. Design/methodology/approach – The study adopted an interpretive qualitative approach aimed at an 'in-depth' understanding of the resource allocation experiences of key university personnel and their perceptions of the contextual factors affecting such processes. The analysis of individual narratives from each university established the conditions and factors impacting the resource allocation processes within each institution. Findings – The resource allocation process issues in the Nigerian universities may be categorised into people (core and peripheral units' challenge, and politics and power); process (resource allocation processes); and resources (critical financial shortage and resource dependence response). The study also provides insight that resourcing efficiency in Nigerian universities appears strongly constrained by rivalry among the resource managers. The efficient resources allocation process (ERAP) model is proposed to resolve the identified resourcing deficiencies. Research limitations/implications – The research does not aim to provide generalizable observations but rather an 'in-depth' account of perceived factors and their impact on the resource allocation processes in Nigerian universities. The study is limited to the internal resource allocation issues within the universities and excludes external funding factors. The resource managers' responses to the identified factors may affect their internal resourcing efficiency. Further research using larger empirical samples is required to obtain more widespread results and implications for all universities.
Originality/value – This study contributes a fresh literature framework to resource allocation processes, focusing on 'people', 'process' and 'resources'. A middle-range theory triangulation is also developed to support a better understanding of resourcing process management. The study will be of interest to university managers and policy makers.
9

Anselmi, Pasquale. "The Gain-Loss Model: A formal model for assessing learning processes." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421632.

Full text
Abstract:
The thesis presents the Gain-Loss Model, a formal model for assessing learning processes. The theoretical framework is knowledge space theory, a novel approach to the assessment of knowledge proposed by Doignon and Falmagne in 1985. The Gain-Loss Model assesses the knowledge of students at the different steps of the learning process, and the effectiveness of educational interventions in promoting specific learning. The core element is a skill multimap associating each problem with a collection of subsets of skills that are necessary and sufficient to solve it. The model is characterized by parameters which provide information relevant at different levels of didactic practice. The model has been investigated at different levels: its functioning has been analyzed under different conditions, and theoretical developments have been proposed for improving its informative power in practical applications. The investigations have been conducted through simulation studies and empirical applications. The thesis presents the work completed on the model: on one hand, the theoretical development of the model itself, as well as some extensions of it, are described; on the other, the results of the simulation studies and the empirical applications are presented and discussed.
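The skill-multimap idea at the core of the Gain-Loss Model can be illustrated with a toy example (the problems and skills below are invented for illustration): a problem is solvable exactly when the learner masters all skills in at least one of the subsets assigned to it.

```python
# hypothetical skill multimap: each problem maps to alternative subsets of
# skills, any one of which is sufficient to solve it
multimap = {
    "p1": [{"add"}, {"count"}],
    "p2": [{"add", "multiply"}],
    "p3": [{"multiply", "fractions"}],
}

def solvable(problem, skills):
    """True iff the learner masters every skill in at least one
    of the competency subsets assigned to the problem."""
    return any(subset <= skills for subset in multimap[problem])

# the set of solvable problems delineated by a learner's skill set
learner = {"add", "multiply"}
state = sorted(p for p in multimap if solvable(p, learner))
print(state)  # ['p1', 'p2']
```

This delineation from skills to solvable problems is what lets the model connect observed responses to the skills an intervention is meant to teach.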
10

Manzardo, Alessandro. "NEW MODEL TO ACHIEVE THE WATER MANAGEMENT AS A COMPETITIVE TOOL FOR INDUSTRIAL PROCESSES." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3425272.

Full text
Abstract:
The issue of freshwater use and its related impacts is central to international debate. The reason is that freshwater, even though renewable, is a scarce resource with limited availability in a growing number of regions all over the world. The consequent increasing competition for freshwater resources is recognized to affect companies by exposing them to several environmental and market risks. In this context, businesses have clearly shown interest in freshwater management tools, so that in recent years the scientific community has been working on the development of suitable models and methods. Even though several experiences can be identified in the literature, the most significant research is taking place within the framework of Life Cycle Assessment, an internationally accepted methodology for assessing the potential environmental impacts of products, processes and organizations. When focusing on freshwater-related issues, this approach is also known as Water Footprint assessment. Current methods, specifically developed to address this issue, present limits in terms of transparency, completeness and comprehensiveness. These limitations prevent companies from understanding their water-related environmental hot-spots and therefore from setting effective environmental and market performance improvement strategies. The present research focuses on the development of a new model to make freshwater management a competitive tool for industrial processes. To this end, the specific objective of the research was to develop a set of indicators that overcomes the identified limits and to test its applicability in real case studies.
To define the set of indicators, the methodology of the research took into consideration the Life Cycle Assessment framework, adopting the criteria agreed within the UNEP-SETAC (United Nations Environment Programme – Society of Environmental Toxicology and Chemistry) Water Use Life Cycle Initiative; to test and discuss its applicability and effectiveness, a multiple-case-study methodology was adopted. The case studies were selected considering their significance in terms of freshwater scarcity and their capability to represent life cycle processes in different locations, and therefore to address the issue of regionalization. The four products studied in this research were: a water collection system, an organic oat beverage, an organic strawberry jam and a tomato sauce. The development of the set of indicators is addressed in the first part of the research. To guarantee transparency and effective life cycle impact assessment analysis, the entire environmental impact chain was modelled so as to address consumptive and degradative freshwater use separately. To guarantee completeness and comprehensiveness, and therefore to avoid potential environmental burden shifting, a so-called water footprint profile, covering accepted freshwater-related impact methods, was created. The applicability and effectiveness of the proposed set of indicators is presented in the second part of this work. The four case studies were conducted according to the Life Cycle Assessment stages. The results highlighted the importance of regionalization and comprehensiveness, and showed the importance of considering degradative and consumptive freshwater use separately. It was in fact possible to define environmental impact reduction strategies in each of the case studies presented.
The research activities were carried out at the Department of Industrial Engineering (Dipartimento di Ingegneria Industriale-DII) at the University of Padova (Italy) and at the Golisano Institute for Sustainability of the Rochester Institute of Technology (New York State, USA). The results of the research activities are summarized in five chapters. Chapter 1 introduces the issue of freshwater scarcity and presents the evolution of models addressing freshwater use and related impacts, from virtual water assessment to the most recent developments within the Life Cycle Assessment framework; the limits of current models and methods are presented, and the objective and structure of the research are described. Chapter 2 reports on the materials and methods used in the present research, from the description of the general framework of Life Cycle Assessment studies to the specific criteria used in defining the indicators; the set of developed indicators is then presented, specifying procedures for their application and describing the solutions adopted to conform to internationally accepted requirements (such as ISO 14046). Chapter 3 presents the results of applying the identified set of indicators in four different case studies; to identify potential strategies for companies and to test the effectiveness of the proposed set of indicators, a sensitivity analysis on the results is performed. Chapter 4 discusses the results with reference to the published literature, the UNEP-SETAC Water Use Life Cycle Initiative criteria, the ISO 14046 principles and the objectives of the research. Chapter 5 reports the conclusions and perspectives for future research.
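A minimal sketch of the separation between consumptive and degradative freshwater use with regional weighting, which the proposed set of indicators enforces (the inventory figures and characterization factors below are invented, not taken from the case studies):

```python
# hypothetical inventory: (process, litres consumed, litres degraded, region)
inventory = [
    ("irrigation", 1200.0,   0.0, "IT"),
    ("processing",  300.0, 150.0, "IT"),
    ("packaging",    50.0,  20.0, "US"),
]

# hypothetical regional characterization factors (dimensionless scarcity weights)
cf = {"IT": 20.0, "US": 5.0}

# consumptive and degradative use are kept separate and regionally weighted,
# so hot-spots in water-scarce regions are not averaged away
consumptive = sum(v * cf[r] for _, v, _, r in inventory)
degradative = sum(d * cf[r] for _, _, d, r in inventory)
print(consumptive, degradative)  # 30250.0 3100.0
```

Keeping the two totals separate is what allows burden shifting between consumption and degradation to be detected.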
11

Pesee, Chatchai. "Stochastic modelling of financial processes with memory and semi-heavy tails." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16057/2/Chatchai%20Pesee%20Thesis.pdf.

Full text
Abstract:
This PhD thesis aims to study financial processes which have semi-heavy-tailed marginal distributions and may exhibit memory. The traditional Black-Scholes model is expanded to incorporate memory via an integral operator, resulting in a class of market models which still preserve the completeness and arbitrage-free conditions needed for replication of contingent claims. This approach is used to estimate the implied volatility of the resulting model. The first part of the thesis investigates the semi-heavy-tailed behaviour of financial processes. We treat these processes as continuous-time random walks characterised by a transition probability density governed by a fractional Riesz-Bessel equation. This equation extends the Feller fractional heat equation which generates α-stable processes. These latter processes have heavy tails, while those generated by the fractional Riesz-Bessel equation have semi-heavy tails, which are more suitable for modelling financial data. We propose a quasi-likelihood method to estimate the parameters of the fractional Riesz-Bessel equation based on the empirical characteristic function. The second part considers a dynamic model of complete financial markets in which the prices of European calls and puts are given by the Black-Scholes formula. The model has memory and can distinguish between historical volatility and implied volatility. A new method is then provided to estimate the implied volatility from the model. The third part of the thesis considers the problem of classifying financial markets using high-frequency data. The classification is based on the measure representation of high-frequency data, which is then modelled as a recurrent iterated function system. The new methodology developed is applied to some stock prices, stock indices, foreign exchange rates and other financial time series of some major markets. 
In particular, the models and techniques are used to analyse the SET index, the SET50 index and the MAI index of the Stock Exchange of Thailand.
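The quasi-likelihood idea based on the empirical characteristic function can be illustrated with a toy sketch: estimate a distribution's parameters by matching its theoretical characteristic function to the empirical one computed from data. The example below is illustrative only, not the thesis's estimator; it substitutes a Gaussian characteristic function for the Riesz-Bessel transition density, and the synthetic data, evaluation grid, and crude grid search are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=20000)  # synthetic "returns"

t = np.linspace(0.05, 1.0, 40)                  # grid where the CF is matched
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)  # empirical characteristic function

def gauss_cf(t, mu, sigma):
    # characteristic function of N(mu, sigma^2), a stand-in for the
    # Riesz-Bessel transition density used in the thesis
    return np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

# crude grid search minimising the total CF mismatch
mus = np.linspace(0.0, 2.0, 41)
sigmas = np.linspace(1.0, 3.0, 41)
best = min(
    ((mu, s) for mu in mus for s in sigmas),
    key=lambda p: np.abs(ecf - gauss_cf(t, *p)).sum(),
)
print(best)  # close to the true parameters (1.0, 2.0)
```

In practice a proper quasi-likelihood estimator weights the discrepancies and uses numerical optimisation rather than a grid, but the matching principle is the same.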
APA, Harvard, Vancouver, ISO, and other styles
13

Hank, Tobias Benedikt. "A Biophysically Based Coupled Model Approach For the Assessment of Canopy Processes Under Climate Change Conditions." Diss., lmu, 2008. http://nbn-resolving.de/urn:nbn:de:bvb:19-87254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Parker, Bobby I. Mr. "Assessment of the Sustained Financial Impact of Risk Engineering Service on Insurance Claims Costs." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/100.

Full text
Abstract:
This research paper creates a comprehensive statistical model relating the financial impact of risk engineering activity to insurance claims costs. Specifically, the model shows important statistical relationships among six variables: type of risk engineering activity, risk engineering dollar cost, duration of risk engineering service, type of customer by industry classification, dollar premium amounts, and dollar claims costs. We accomplish this by using a large data sample of approximately 15,000 customer-years of insurance coverage and risk engineering activity. The data sample is from an international casualty/property insurance company and covers four years of operations, 2006-2009. The statistical model chosen is the linear mixed model, as implemented in SAS 9.2 software. This method provides essential capabilities, including the flexibility to work with data having missing values, and the ability to reveal time-dependent statistical associations.
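The study's SAS mixed model is not reproduced here, but its core idea, separating an industry-level baseline from a common effect of risk-engineering spend on claims costs, can be sketched with a within-group (demeaned) regression on synthetic data. All variable names and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per = 6, 300                   # industry classes, customer-years each
group = np.repeat(np.arange(n_groups), n_per)
re_cost = rng.uniform(0, 10, n_groups * n_per)   # risk-engineering spend
intercepts = rng.normal(50, 5, n_groups)         # industry-level baselines
claims = intercepts[group] - 1.5 * re_cost + rng.normal(0, 2, group.size)

# within-group demeaning removes the industry intercepts, leaving an
# estimate of the common slope of claims costs on spend
def demean(v):
    means = np.array([v[group == g].mean() for g in range(n_groups)])
    return v - means[group]

x, y = demean(re_cost), demean(claims)
slope = (x * y).sum() / (x * x).sum()
print(round(slope, 2))  # close to the true slope of -1.5
```

A true mixed model would additionally treat the industry intercepts as random draws and handle missing values, which is where SAS PROC MIXED (or an equivalent library) earns its keep.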
APA, Harvard, Vancouver, ISO, and other styles
15

Siu, Kin-bong Bonny, and 蕭健邦. "Expected shortfall and value-at-risk under a model with market risk and credit risk." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37727473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Moffitt, Kevin Christopher. "Toward Enhancing Automated Credibility Assessment: A Model for Question Type Classification and Tools for Linguistic Analysis." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145456.

Full text
Abstract:
The three objectives of this dissertation were to develop a question type model for predicting linguistic features of responses to interview questions, create a tool for linguistic analysis of documents, and use lexical bundle analysis to identify linguistic differences between fraudulent and non-fraudulent financial reports. First, the Moffitt Question Type Model (MQTM) was developed to aid in predicting linguistic features of responses to questions. It focuses on three context-independent features of questions: tense (past vs. present vs. future), perspective (introspective vs. extrospective), and abstractness (concrete vs. conjectural). The MQTM was tested on responses to real-world pre-polygraph examination questions in which guilty (n = 27) and innocent (n = 20) interviewees were interviewed. The responses were grouped according to question type and the linguistic cues from each group's transcripts were compared using independent samples t-tests with the following results: future tense questions elicited more future tense words than either past or present tense questions and present tense questions elicited more present tense words than past tense questions; introspective questions elicited more cognitive process words and affective words than extrospective questions; and conjectural questions elicited more auxiliary verbs, tentativeness words, and cognitive process words than concrete questions. Second, a tool for linguistic analysis of text documents, Structured Programming for Linguistic Cue Extraction (SPLICE), was developed to help researchers and software developers compute linguistic values for dictionary-based cues and cues that require natural language processing techniques. SPLICE implements a GUI interface for researchers and an API for developers. Finally, an analysis of 560 lexical bundles detected linguistic differences between 101 fraudulent and 101 non-fraudulent 10-K filings.
Phrases such as "the fair value of," and "goodwill and other intangible assets" were used at a much higher rate in fraudulent 10-Ks. A principal component analysis reduced the number of variables to 88 orthogonal components which were used in a discriminant analysis that classified the documents with 71% accuracy. Findings in this dissertation suggest the MQTM could be used to predict features of interviewee responses in most contexts and that lexical bundle analysis is a viable tool for discriminating between fraudulent and non-fraudulent text.
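The final step above, reducing a document-by-phrase feature matrix with PCA and then classifying in component space, can be sketched in plain numpy. The synthetic lexical-bundle rates, the 10-component cut-off, and the nearest-centroid rule (a simple stand-in for the dissertation's discriminant analysis) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic lexical-bundle rates: 200 documents x 50 phrase features,
# with class 1 ("fraudulent") shifted on a handful of features
X = rng.normal(size=(200, 50))
y = np.repeat([0, 1], 100)
X[y == 1, :5] += 1.5

# PCA via SVD of the centered matrix
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                        # keep 10 orthogonal components

# nearest-centroid classification in component space
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

A proper linear discriminant analysis would also whiten by the pooled within-class covariance, but the pipeline shape (orthogonal components feeding a linear classifier) matches the one described.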
APA, Harvard, Vancouver, ISO, and other styles
17

Alyaseri, Isam. "QUALITATIVE AND QUANTITATIVE PROCEDURE FOR UNCERTAINTY ANALYSIS IN LIFE CYCLE ASSESSMENT OF WASTEWATER SOLIDS TREATMENT PROCESSES." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/795.

Full text
Abstract:
In order to perform the environmental analysis and find the best management in the wastewater treatment processes using the life cycle assessment (LCA) method, uncertainty in LCA has to be evaluated. A qualitative and quantitative procedure was constructed to deal with uncertainty for the wastewater treatment LCA studies during the inventory and analysis stages. The qualitative steps in the procedure include setting rules for the inclusion of inputs and outputs in the life cycle inventory (LCI), setting rules for the proper collection of data, identifying and conducting data collection analysis for the significant contributors in the model, evaluating data quality indicators, selecting the proper life cycle impact assessment (LCIA) method, evaluating the uncertainty in the model through different cultural perspectives, and comparing with other LCIA methods. The quantitative steps in the procedure include assigning the best guess value and the proper distribution for each input or output in the model, calculating the uncertainty for those inputs or outputs based on data characteristics and the data quality indicators, and finally using probabilistic analysis (Monte Carlo simulation) to estimate uncertainty in the outcomes. Environmental burdens from the solids handling unit at Bissell Point Wastewater Treatment Plant (BPWWTP) in Saint Louis, Missouri were analyzed. Plant-specific data plus literature data were used to build an input-output model. Environmental performance of an existing treatment scenario (dewatering-multiple hearth incineration-ash to landfill) was analyzed. To improve the environmental performance, two alternative scenarios (fluid bed incineration and anaerobic digestion) were proposed, constructed, and evaluated. System boundaries were set to include the construction, operation and dismantling phases.
The impact assessment method chosen was Eco-indicator 99 and the impact categories were: carcinogenicity, respiratory organics and inorganics, climate change, radiation, ozone depletion, ecotoxicity, acidification-eutrophication, and minerals and fossil fuels depletion. Analysis of the existing scenario shows that most of the impacts came from the operation phase on the categories related to fossil fuels depletion, respiratory inorganics, and carcinogens due to energy consumed and emissions from incineration. The proposed alternatives showed better performance than the existing treatment. Fluid bed incineration had better performance than anaerobic digestion. Uncertainty analysis showed there is a 57.6% probability of less environmental impact when using fluid bed incineration rather than anaerobic digestion. Based on single-score rankings in the Eco-indicator 99 method, the environmental impact order is: multiple hearth incineration > anaerobic digestion > fluid bed incineration. This order was the same for the three model perspectives in the Eco-indicator 99 method and when using other LCIA methods (Eco-point 97 and CML 2000). The study showed that the incorporation of qualitative/quantitative uncertainty analysis into LCA gave more information than the deterministic LCA and can strengthen the LCA study. The procedure tested in this study showed that Monte Carlo simulation can be used in quantifying uncertainty in wastewater treatment studies. The procedure can be used to analyze the performance of other treatment options. Although the analysis in different perspectives and different LCIA methods did not impact the order of the scenarios, it showed a possibility of variation in the final outcomes of some categories. The study showed the importance of providing decision makers with the best and worst possible outcomes in any LCA study and informing them about the perspectives and assumptions used in the assessment.
Monte Carlo simulation is able to perform uncertainty analysis in the comparative LCA only between two products or scenarios, based on the (A-B) approach, due to the overlap between the probability distributions of the outcomes. It is recommended to modify it to include more than two scenarios.
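The (A-B) approach can be illustrated directly: draw both scenarios' impact scores from their uncertainty distributions and count how often the difference A - B falls below zero. The lognormal parameters below are invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# hypothetical uncertainty distributions for the single-score impacts of
# two treatment scenarios (illustrative parameters, not the study's results)
fluid_bed = rng.lognormal(mean=2.00, sigma=0.20, size=n)
digestion = rng.lognormal(mean=2.05, sigma=0.20, size=n)

# (A-B) approach: probability that scenario A scores lower (better) than B
p_better = (fluid_bed - digestion < 0).mean()
print(round(p_better, 3))
```

Because the two sampled distributions overlap, only the paired difference is informative, which is exactly why the abstract notes the approach is limited to pairwise comparisons.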
APA, Harvard, Vancouver, ISO, and other styles
18

Ormon, Stephen Wayne. "Development of a hierarchical, model-based design decision-support tool for assessing uncertainty of cost estimates." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-04092002-084914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Splawinski, Sophie. "An assessment of freezing rain processes in the Saint-Lawrence River Valley: synoptic-dynamic analysis and operational model verification." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121459.

Full text
Abstract:
Freezing rain (FZRA), a hazardous meteorological phenomenon, poses a significant threat to the general public and can severely damage societal infrastructure. The phenomenon is well known throughout the St-Lawrence River Valley (SLRV), which has one of the highest frequencies of FZRA in the world owing to its orography and spatial orientation. Our focus is to provide meteorologists with the means to better predict both the onset and duration of FZRA at Montreal (CYUL), Quebec City (CYQB), and Massena (KMSS) in a two-stage process: by introducing a new 2-dimensional elliptic regression statistical forecast model and by assessing synoptic and mesoscale conditions associated with past events. Analysis of a 27-year period, from 1979 through 2005, was conducted with a total of 99, 102, and 70 FZRA events at CYQB, CYUL, and KMSS, respectively. Our statistical analysis provides meteorologists with the POZR (probability of freezing rain): the ability to input model-forecasted temperatures at two pressure levels and determine the probability of the onset of FZRA based on a 30-year climatology of northeasterly related precipitation. Synoptic-dynamic analysis of past events underscores the need for a high-resolution forecast model to adequately resolve mesoscale processes crucial to FZRA maintenance. Tests performed using a verification dataset (2006-2011 ZR event data) show the accuracy and feasibility of the model, which could be implemented by forecasting offices. Furthermore, synoptic-dynamic assessment of events within the verification dataset and forecast model comparison provide insight into missed forecasts, great forecasts and false alarms.
Utilizing these methods could provide meteorologists with the opportunity to produce more highly accurate forecasts of both the onset and duration of freezing rain events.
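As a rough illustration of a POZR-style forecast (not the thesis's elliptic regression model), a logistic model can map forecast temperatures at two levels to a probability of freezing rain. The synthetic data, the two temperature predictors, and the coefficients below are all assumptions; real FZRA depends on a warm layer aloft over a sub-freezing surface, which the toy data mimic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
# synthetic forecast temperatures (deg C) near the surface and in a warm
# layer aloft (illustrative values, not SLRV climatology)
t_sfc = rng.uniform(-15, 5, n)
t_warm = rng.uniform(-10, 10, n)
# freezing rain roughly requires a sub-freezing surface under a warm layer
true_logit = -6.0 - 1.2 * t_sfc + 0.8 * t_warm
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# fit a logistic model P(FZRA | t_sfc, t_warm) by plain gradient descent
X = np.column_stack([np.ones(n), t_sfc, t_warm])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - y) / n

# probability of freezing rain for a -4 C surface under a +3 C warm layer
pozr = 1 / (1 + np.exp(-(w @ [1.0, -4.0, 3.0])))
print(round(pozr, 2))
```

The fitted slopes come out negative for the surface temperature and positive for the warm layer, reproducing the physical intuition encoded in the synthetic labels.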
APA, Harvard, Vancouver, ISO, and other styles
20

Francis, Merwin. "A model for assessing the anticipated relative financial impact of implementing the tools of lean manufacturing on a manufacturing concern." Thesis, Nelson Mandela Metropolitan University, 2011. http://hdl.handle.net/10948/1326.

Full text
Abstract:
Lean manufacturing has seen its creator, Toyota, rise from insignificance in the middle of the previous century to the biggest-selling car manufacturer in the world today. Another Japanese car manufacturer, Honda, which has also been practising the principles of lean avidly during the last few decades, has also made huge strides towards becoming a dominant force in the car market. These Japanese companies' adoption of lean has seen many of their mass-producing United States (US) and European counterparts struggle for survival. Maynard (2003:10) predicted that by the end of the decade, at least one of the 'Big Three' auto makers in the US – Chrysler, Ford, and General Motors (GM) – would be forced to undertake significant restructuring to continue in operation. At the time of this writing all indications are that this prediction will come true. GM is in the process of major shareholding restructuring in an attempt to keep the company afloat, having run up insurmountable debts in the face of the current global economic downturn. Adopting the lean methodology has become a matter of necessity. The continued use of mass production methods alone is no longer viable; companies need to also employ lean methods intelligently in order to remain competitive. This study is regarded as a crucial endeavour to assist operations managers of manufacturing concerns in developing lean implementation strategies which will maximise the benefits to the organization.
APA, Harvard, Vancouver, ISO, and other styles
21

Holsomback, James Richard. "Assessment and Analysis of Per Pupil Expenditures: a Study Testing a Micro-Financial Model in Equity and Student Outcome Determination." Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc279253/.

Full text
Abstract:
The purpose of this study was to examine district level financial data to assess equity across districts, to compare equity benchmarks established in the literature using selected functions from the state's financial database, and to determine the predictive value of those functions to the Texas Assessment of Academic Skills (TAAS) tests of 1997.
APA, Harvard, Vancouver, ISO, and other styles
22

Driscoll, Jessica Margit. "Impacts of Climate Change in Snowmelt-Dominated Alpine Catchments: Development and Assessment of Comparative Methods to Quantify the Role of Dynamic Storage and Subsurface Hydrologic Processes." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/560860.

Full text
Abstract:
Snowmelt-dominated systems are a significant source of water supply for the Western United States. Changes in timing and duration of snowmelt are predicted to continue under climate change; however, the impact this change will have on water resources is not well understood. The ability to compare hydrologic processes across space and time is critical to accurately assess the physical and chemical response of headwater systems to climate change. This dissertation builds upon previous work by using long-term data from two snowmelt-dominated catchments to investigate the response of hydrologic processes at different temporal and spatial scales. First, results from an hourly spatially-distributed energy balance snowmelt model were spatially and temporally aggregated to provide daily, catchment-wide snowmelt estimates, which, along with measured discharge and hydrochemical data, were used to assess and compare hydrologic processes which occur on an annual scale in two headwater catchments for an eleven-year study period. Second, the magnitude and timing of snowmelt, discharge fluxes and hydrochemical data were used to assess and compare inter-annual catchment response in two headwater catchments for an eleven-year study period. Third, a pseudoinverse method was developed to compare mineral weathering fluxes in a series of nested sub-catchments over an eleven-year study period. Advances from this work include the use of an independently-created energy balance snowmelt model for spatially-distributed hydrologic input for catchment-scale water balance, application of a quantifiable measure of catchment-scale hydrologic flux hysteresis and the development of a method to quantify and compare mineral weathering reactions between source waters across space and time. These methods were utilized to quantify and assess the role of dynamic storage in mitigating climate change response.
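The abstract does not specify its "quantifiable measure of catchment-scale hydrologic flux hysteresis"; one common style of index compares rising-limb and falling-limb concentration at the same normalized discharge. The sketch below, on a synthetic storm event with invented curves, is an illustration of that general idea, not the dissertation's method.

```python
import numpy as np

# synthetic storm event: discharge rises then recedes; concentration
# peaks earlier than discharge, producing a clockwise hysteresis loop
t = np.linspace(0, 1, 201)
q = np.sin(np.pi * t)                        # discharge, normalized 0-1
c = np.sin(np.pi * np.clip(t * 1.4, 0, 1))   # concentration leads discharge

peak = np.argmax(q)
rise_q, rise_c = q[: peak + 1], c[: peak + 1]
fall_q, fall_c = q[peak:][::-1], c[peak:][::-1]   # reversed so q increases

# hysteresis index: rising-limb minus falling-limb concentration at mid-flow
q_mid = 0.5
hi = np.interp(q_mid, rise_q, rise_c) - np.interp(q_mid, fall_q, fall_c)
print(round(hi, 2))  # positive => clockwise loop (concentration peaks first)
```

Evaluating the index at several discharge levels, and averaging, gives a single comparable number per event, which is what makes such measures useful for comparison across catchments and years.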
APA, Harvard, Vancouver, ISO, and other styles
23

Priestley, Richard. "Approximate factor structures, macroeconomic and financial factors, unique and stable return generating processes and market anomalies : an empirical investigation of the robustness of the arbitrage pricing theory." Thesis, Brunel University, 1994. http://bura.brunel.ac.uk/handle/2438/5448.

Full text
Abstract:
This thesis presents an empirical investigation into the Arbitrage Pricing Theory (APT). At the outset of the thesis it is recognised that tests of the APT are conditional on a number of preconditions and assumptions. The first line of investigation examines the effect of the assumed nature of the form of the return generating process of stocks. It is found that stocks follow an approximate factor structure and tests of the APT are sensitive to the specified form of the return generating process. We provide an efficient estimation methodology for the case when stocks follow an approximate factor structure. The second issue we raise is that of the appropriate factors, the role of the market portfolio and the performance of the APT against the Capital Asset Pricing Model (CAPM). The conclusions that we draw are that the APT is robust to a number of specified alternatives and furthermore, the APT outperforms the CAPM in comparative tests. In addition, within the APT specification there is a role for the market portfolio. Through a comparison of the results in chapters 2 and 3 it is evident that the APT is not robust to the specification of unexpected components. We evaluate the validity of extant techniques in this respect and find that they are unlikely to be representative of agents' actual unexpected components. Consequently we put forth an alternative methodology based upon estimating expectations from a learning scheme. This technique is valid in respect to our prior assumptions. Having addressed these preconditions and assumptions that arise in tests of the APT, a thorough investigation into the empirical content of the APT is then undertaken. Concentrating on the issues that the return generating process must be unique and that the estimated risk premia should be stable over time, the results indicate that the APT does have empirical content. Finally, armed with the empirically valid APT we proceed to analyse the issue of seasonalities in stock returns.
The results confirm previous findings that there are seasonal patterns in the UK stock market, however, unlike previous findings we show that these seasonal patterns are part of the risk return structure and can be explained by the yearly business cycle. Furthermore, the APT retains empirical content when these seasonal patterns are removed from the data. The overall finding of this thesis is that the APT does have empirical content and provides a good description of the return generating process of UK stocks.
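A standard way to test the APT empirically, in the spirit of the chapters above, is a two-pass regression: time-series estimation of each stock's factor betas, followed by a cross-sectional regression of mean returns on those betas to recover the factor risk premia. The sketch below uses synthetic factors and returns (invented premia and dimensions), not the thesis's UK data or its estimation methodology for approximate factor structures.

```python
import numpy as np

rng = np.random.default_rng(5)
T, N, K = 600, 40, 2                        # months, stocks, factors

lam = np.array([0.6, 0.3])                  # true monthly risk premia (%)
f = lam + rng.normal(0, 2.0, size=(T, K))   # factor realisations
beta = rng.uniform(0.5, 1.5, size=(N, K))   # factor loadings
r = f @ beta.T + rng.normal(0, 1.0, size=(T, N))  # returns obey an exact APT

# pass 1: time-series regression of each stock's returns on the factors
F = np.column_stack([np.ones(T), f])
b_hat = np.linalg.lstsq(F, r, rcond=None)[0][1:].T    # estimated betas (N x K)

# pass 2: cross-sectional regression of mean returns on estimated betas
B = np.column_stack([np.ones(N), b_hat])
lam_hat = np.linalg.lstsq(B, r.mean(axis=0), rcond=None)[0][1:]
print(np.round(lam_hat, 2))  # close to the true premia [0.6, 0.3]
```

With an approximate factor structure the idiosyncratic terms are cross-correlated, which is precisely why the thesis needs a more careful estimator than this plain two-pass procedure.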
APA, Harvard, Vancouver, ISO, and other styles
24

Van, Damme Martin. "Assessment of global atmospheric ammonia using IASI infrared satellite observations." Doctoral thesis, Universite Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209085.

Full text
Abstract:
The natural nitrogen cycle has been and is significantly perturbed by anthropogenic emissions of reactive nitrogen (Nr) compounds into the atmosphere, resulting from our production of energy and food. In the last century global ammonia (NH3) emissions have doubled and nowadays represent more than half of the total Nr emissions. NH3 is also the principal base in the atmosphere and rapidly forms aerosols by reaction with acids. It is therefore a species of high relevance for the Earth's environment, climate and human health (Chapter 1). As a short-lived species, NH3 is highly variable in time and space, and while ground-based measurements are possible, they are sparse and their spatial coverage is largely heterogeneous. Consequently, global spatial and temporal patterns of NH3 emissions are poorly understood and account for the largest uncertainties in the nitrogen cycle. The aim of this work is to assess distributions and spatiotemporal variability of NH3 using satellite measurements to improve our understanding of its contribution to the global nitrogen cycle and its related effects.

Recently, satellite instruments have demonstrated their abilities to measure NH3 and to supplement the sparse surface measuring network by providing global total columns daily. The Infrared Atmospheric Sounding Interferometer (IASI), on board the MetOp platforms, is measuring NH3 at a high spatiotemporal resolution. IASI circles the Earth in a polar Sun-synchronous orbit, covering the globe twice a day with a circular pixel size of 12 km diameter at nadir and with overpass times at 9:30 and 21:30 (local solar time when crossing the equator). An improved retrieval scheme based on the calculation of a Hyperspectral Range Index (HRI) is detailed in Chapter 2 and compared with previous retrieval methods. This approach fully exploits the hyperspectral nature of IASI by using a broader spectral range (800-1200 cm-1) where NH3 is optically active. It allows retrieving total columns from IASI spectra globally and twice a day without large computational resources and with an improved detection limit. More specifically, the retrieval procedure involves two steps: the calculation of a dimensionless spectral index (HRI) and the conversion of this index into NH3 total columns using look-up tables (LUTs) built from forward radiative transfer simulations under various atmospheric conditions. The retrieval also includes an error characterization of the retrieved column, which is of utmost importance for further analysis and comparisons. Global distributions using five years of data (1 November 2007 to 31 October 2012) from IASI/MetOp-A are presented and analyzed separately for the morning and evening overpasses. The advantage of the HRI-based retrieval scheme over other methods, in particular to identify smaller emission sources and transport patterns over the oceans, is shown. The benefit of the high spatial sampling and resolution of IASI is highlighted with the regional distribution over China, and the first four-year time series are briefly discussed.

We evaluate four years (1 January 2008 to 31 December 2011) of IASI-NH3 columns from the morning observations and of LOTOS-EUROS model simulations over Europe and Western Russia. We describe the methodology applied to account for the variable retrieval sensitivity of IASI measurements in Chapter 3. The four-year mean distributions highlight three main agricultural hotspots in Europe: the Po Valley, the continental part of Northwestern Europe, and the Ebro Valley. A general good agreement between IASI and LOTOS-EUROS is shown, not only over source regions but also over remote areas and over seas when transport is observed. The yearly analyses reveal that, on average, the measured NH3 columns are higher than the modeled ones. Large discrepancies are observed over industrial areas in Eastern Europe and Russia, pointing to underestimated if not missing emissions in the underlying inventories. For the three hotspot areas, we show that the seasonality between IASI and LOTOS-EUROS matches when the sensitivity of the satellite measurements is taken into account. The best agreement is found in the Netherlands, both in magnitude and timing, most likely because the fixed emission timing pattern was determined from experimental data sets from this country. Moreover, comparisons of the daily time series indicate that although the dynamic of the model is in reasonable agreement with the measurements, the model may suffer from a possible misrepresentation of emission timing and magnitude. Overall, the distinct temporal patterns observed for the three sites underline the need for improved timing of emissions. Finally, the study of the Russian fires event of 2010 shows that the modeled NH3 plumes are not dispersed enough, which is confirmed by a comparison using in situ measurements.

Chapter 4 describes the comparisons of IASI-NH3 measurements with several independent ground-based and airborne data sets. Even though the in situ data are sparse, we show that the yearly distributions are broadly consistent. For the monthly analyses we use ground-based measurements in Europe, China and Africa. Overall, IASI-derived concentrations are in fair agreement but are also characterized by less variability. Statistically significant correlations are found for several sites, but low slopes and high intercepts are calculated in all cases. At least three reasons can explain this: (1) the lack of representativity of the point surface measurement for the large IASI pixel, (2) the use of a single profile shape in the retrieval scheme over land, which therefore does not account for a varying boundary layer height, (3) the impact of the averaging procedure applied to satellite measurements to obtain a consistent quantity to compare with the in situ monthly data. The use of hourly surface measurements and of airborne data sets allows assessing individual IASI observations. Much higher correlation coefficients are found, in particular when comparing IASI-derived volume mixing ratios with vertically resolved measurements performed from the NOAA WP-3D airplane during the CalNex campaign in 2010. The results demonstrate the need, for validation of the satellite columns, of measurements performed at various altitudes and covering a large part of the satellite footprint.

The six years of IASI observations available at the end of this thesis are used to analyze regional time series for the first time (Chapter 5). More precisely, we use the IASI measurements over that period (1 January 2008 to 31 December 2013) to identify seasonal patterns and inter-annual variability at subcontinental scale. This is achieved by looking at global composite seasonal means and monthly time series over 12 regions around the world (Europe, Eastern Russia and Northern Asia, Australia, Mexico, South America, 2 sub-regions for Northern America and South Asia, 3 sub-regions for Africa), considering separately but simultaneously measurements from IASI morning and evening overpasses. The seasonal cycle is inferred for the majority of these regions. The relations between the NH3 atmospheric abundance and emission processes are emphasized at smaller regional scale by extracting at high spatial resolution the global climatology of the month of maximum columns. In some regions, the predominance of a single source appears clearly (e.g.
agriculture in Europe and North America, fires in central South Africa and South America), while in others a composite of source processes on small scale is demonstrated (e.g. Northern Central Africa and Southwestern Asia).

Chapter 6 presents the achievements of this thesis, as well as ongoing activities and future perspectives.
Des mesures effectuées à la surface en Europe, en Chine et en Afrique sont utilisées pour les comparaisons mensuelles. Ces dernières révèlent une bonne concordance générale, bien que les mesures satellites montrent une plus faible amplitude de variations de concentrations. Des corrélations statistiquement significatives ont été calculées pour de nombreux sites, mais les régressions linéaires sont caractérisées par des pentes faibles et des ordonnées à l'origine élevées dans tous les cas. Au minimum, trois raisons contribuent à expliquer cela: (1) le manque de représentativité des mesures ponctuelles pour l'étendue des pixels IASI, (2) l'utilisation d'une seule forme de profil vertical pour la restitution des concentrations, qui ne prend dès lors pas en compte la hauteur de la couche limite, (3) l'impact de la procédure utilisée pour moyenner les observations satellites afin d'obtenir des quantités comparables aux mesures sols mensuelles. La prise en compte de mesures en surface effectuées à plus haute résolution temporelle ainsi que de mesures faites depuis un avion permet d'évaluer les observations IASI individuelles. Les coefficients de corrélation calculés sont bien plus élevés, en particulier pour la comparaison avec les mesures effectuées depuis l'avion NOAA WP-3D pendant la campagne CalNex en 2010. Ces résultats démontrent la nécessité de ce type d'observations, effectuées à différentes altitudes et couvrant une plus grande surface du pixel, pour valider les colonnes IASI-NH3.<p><p>Les six ans de données IASI disponibles à la fin de cette thèse sont utilisées pour tracer les premières séries temporelles sub-continentales (Chapitre 5). Plus spécifiquement, nous explorons les mesures IASI durant cette période (du 1 janvier 2008 jusqu'au 31 décembre 2013) pour identifier des structures saisonnières ainsi que la variabilité inter-annuelle à l'échelle sous-continentale. 
Pour arriver à cela, des moyennes saisonnières composites ont été produites ainsi que des séries temporelles mensuelles au-dessus de 12 régions du globe (Europe, est de la Russie et nord de l'Asie, Australie, Mexique, Amérique du Sud, 2 sous-régions en Amérique du nord et en Asie du sud et 3 sous-régions en Afrique), considérant séparément mais simultanément les mesures matinales et nocturnes de IASI. Le cycle saisonnier est raisonnablement bien décrit pour la plupart des régions. La relation entre la quantité de NH3 atmosphérique et ses sources d'émission est mise en exergue à l'échelle plus régionale par l'extraction à haute résolution spatiale d'une climatologie des mois de colonnes maximales. Dans certaines régions, la prédominance d'un processus source apparait clairement (par exemple l'agriculture en Europe et en Amérique du nord, les feux en Afrique du Sud et en Amérique du Sud), alors que, pour d'autres, la diversité des sources d'émissions est démontrée (par exemple pour le nord de l'Afrique centrale et l'Asie du sud-ouest).<p><p>Le Chapitre 6 reprend brièvement les principaux aboutissements de cette thèse et présente les différentes recherches en cours et les perspectives associées.<p><br>Doctorat en Sciences agronomiques et ingénierie biologique<br>info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
25

Biro, Christopher J. "An Assessment of the Short-Term Response of the Cuyahoga River to the Removal of the LeFever Dam, Cuyahoga Falls, Ohio." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1447429263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Tanner, Janet Jeffery. "Financial Analysis and Fiscal Viability of Secondary Schools in Mukono District, Uganda." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1289.

Full text
Abstract:
Within the worldwide business community, many analysis tools and techniques have evolved to assist in the evaluation and encouragement of financial health and fiscal viability. However, in the educational community, such analysis is uncommon. It has long been argued that educational institutions bear little resemblance to, and should not be treated like, businesses. This research identifies an educational environment where educational institutions are, indeed, businesses, and may greatly benefit from the use of business analyses. The worldwide effort of Education for All (EFA) has focused on primary education, particularly in less developed countries (LDCs). In Sub-Saharan Africa, Uganda increased its primary school enrollments from 2.7 million in 1996 to 7.6 million in 2003. This rapid primary school expansion substantially increased the demand for secondary education. Limited government funding for secondary schools created an educational bottleneck. In response to this demand, laws were passed to allow the establishment of private secondary schools, operated and taxed as businesses. Revenue reports, filed by individual private schools with the Uganda Revenue Authority, formed the database for the financial analysis portion of this research. These reports, required of all profitable businesses in Uganda, are similar to audited corporate financial statements. Survey data and national examination (UNEB) scores were also utilized. This research explored standard business financial analysis tools, including financial statement ratio analysis, and evaluated the applicability of each to this LDC educational environment. A model for financial assessment was developed and industry averages were calculated for private secondary schools in the Mukono District of Uganda. Industry averages can be used by individual schools as benchmarks in assessing their own financial health. Substantial deviations from the norms signal areas of potential concern. 
Schools may take appropriate corrective action, leading to sustainable fiscal viability. An example of such analysis is provided. Finally, school financial health, defined by eight financial measures, was compared with quality of education, defined by UNEB scores. Worldwide, much attention is given to education and its role in development. This research, with its model for financial assessment of private LDC schools, offers a new and pragmatic perspective.
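The benchmarking approach described above — computing standard financial-statement ratios for an individual school and flagging substantial deviations from industry averages — can be sketched in a few lines. All figures, the choice of ratios, and the 25% tolerance below are hypothetical illustrations, not data or thresholds from the study:

```python
# Sketch of ratio benchmarking against industry averages.
# Figures and tolerance are invented for illustration.

def ratios(fin):
    """Compute a few standard financial-statement ratios."""
    return {
        "current_ratio": fin["current_assets"] / fin["current_liabilities"],
        "debt_ratio": fin["total_liabilities"] / fin["total_assets"],
        "net_margin": fin["net_income"] / fin["revenue"],
    }

def flag_deviations(school, industry_avg, tolerance=0.25):
    """Flag ratios deviating more than `tolerance` (relative) from the benchmark."""
    flags = {}
    for name, value in school.items():
        benchmark = industry_avg[name]
        flags[name] = abs(value - benchmark) / abs(benchmark) > tolerance
    return flags

school_fin = {"current_assets": 30_000, "current_liabilities": 25_000,
              "total_liabilities": 80_000, "total_assets": 100_000,
              "net_income": 5_500, "revenue": 120_000}
industry_avg = {"current_ratio": 1.8, "debt_ratio": 0.6, "net_margin": 0.05}

school_ratios = ratios(school_fin)
flags = flag_deviations(school_ratios, industry_avg)
```

Flagged ratios (here the current ratio and debt ratio) signal the "areas of potential concern" the abstract mentions, while unflagged ones sit within the normal range for the group.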
APA, Harvard, Vancouver, ISO, and other styles
27

Tavares, Ivo Alberto Valente. "Uncertainty quantification with a Gaussian Process Prior : an example from macroeconomics." Doctoral thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/21444.

Full text
Abstract:
Doctorate in Mathematics Applied to Economics and Management
This thesis may be broadly divided into four parts. In the first part, we review the state of the art on misspecification in macroeconomics and the contribution that a relatively new area of research called Uncertainty Quantification has so far made to the subject. These reviews are essential to contextualize the contribution of this thesis to research on correcting non-linear misspecifications and on accounting for several other sources of uncertainty when modelling from an economic perspective. In the next three parts, we give an example, using the same simple DSGE model from macroeconomic theory, of how researchers may quantify uncertainty in a state-space model using a discrepancy term with a Gaussian Process prior. In the second part of the thesis, we used a full Gaussian Process (GP) prior on the discrepancy term. Our experiments showed that, despite the heavy computational constraints of our full GP method, we still obtained a very interesting forecasting performance with such a restricted sample size, compared with similar uncorrected DSGE models or with DSGE models corrected using state-of-the-art time-series methods, such as imposing a VAR on the observation error of the state-space model. In the third part of our work, we improved the computational performance of our previous method using what the literature calls a Hilbert Reduced Rank GP. This method has close links to functional analysis, the spectral theorem for normal operators, and partial differential equations. It indeed improved the computational processing time, albeit only slightly, accompanied by a similarly slight decrease in forecasting performance. The fourth part of our work delved into how our method accounts for model uncertainty just before, and during, the great financial crisis of 2007-2009. 
Our technique allowed us to capture the crisis, albeit with reduced applicability, possibly due to computational constraints. This latter part was also used to deepen our understanding of the GP-based model uncertainty quantification technique. Identifiability issues were studied as well. One of our overall conclusions was that more research is needed before this uncertainty quantification technique can become part of the toolbox of central bankers and researchers for forecasting economic fluctuations, especially regarding the computational performance of either method.
info:eu-repo/semantics/publishedVersion
APA, Harvard, Vancouver, ISO, and other styles
28

Georgiou, Leonidas. "DCE-MRI assessment of hepatic uptake and efflux of the contrast agent, gadoxetate, to monitor transporter-mediated processes and drug-drug interactions : in vitro and in vivo studies." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/dcemri-assessment-of-hepatic-uptake-and-efflux-of-the-contrast-agent-gadoxetate-to-monitor-transportermediated-processes-and-drugdrug-interactions-in-vitro-and-in-vivo-studies(d4b3bc62-8636-470b-90ae-38e25e7ee7be).html.

Full text
Abstract:
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) techniques offer the opportunity to understand the physiological processes involved in the distribution of the contrast agent in vivo. This work utilises a liver-specific contrast agent (gadoxetate) and demonstrates the potential use of these techniques to study transporter-mediated processes in vivo. In vitro experiments investigated gadoxetate’s interaction with uptake and efflux transporters at the cellular level, a prerequisite to understanding the contribution of transporter proteins to in vivo pharmacokinetics. MRI was used to measure the accumulation of gadoxetate in fresh rat hepatocytes. Furthermore, LC-MS/MS methodology was optimised in conjunction with two in vitro systems to determine the role of transporters in gadoxetate uptake and efflux. HEK-OATP1B1 transfected cells were used to optimise the LC-MS/MS technique, and Caco-2 cell monolayers were used to examine whether gadoxetate is a substrate of the efflux transporters Pgp and BCRP. Subsequent studies demonstrated the use of DCE-MRI techniques to study transporter-mediated processes. Two pharmacokinetic models were proposed to quantify the uptake and efflux of gadoxetate in vivo. The suitability of the models in describing the liver concentration profiles of gadoxetate was assessed in pre-clinical and clinical reproducibility studies. Further pre-clinical experiments demonstrated the ability of the proposed DCE-MRI techniques to monitor changes in the uptake and efflux rate estimates of gadoxetate into hepatocytes, through co-administration of the transporter inhibitor, rifampicin, at two doses. The work presented demonstrates the potential use of DCE-MRI techniques as a diagnostic probe to assess transporter-mediated processes and drug-drug interactions (DDIs) in vivo.
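The uptake/efflux pharmacokinetics described above can be caricatured as a two-compartment model: plasma concentration decays while the liver compartment fills at an uptake rate k_in and empties at an efflux rate k_out. All rate constants below are illustrative assumptions, not the thesis's fitted values:

```python
import numpy as np

# Toy two-compartment liver uptake/efflux model, integrated by forward Euler:
#   dC_liver/dt = k_in * C_plasma(t) - k_out * C_liver(t),
# with C_plasma decaying exponentially. Rates are invented for illustration.

def liver_curve(k_in, k_out, k_plasma=0.5, c0=1.0, t_end=20.0, dt=0.001):
    n = int(t_end / dt)
    t = np.arange(n) * dt
    c_plasma = c0 * np.exp(-k_plasma * t)
    c_liver = np.zeros(n)
    for i in range(1, n):
        dc = k_in * c_plasma[i - 1] - k_out * c_liver[i - 1]
        c_liver[i] = c_liver[i - 1] + dc * dt
    return t, c_liver

# Mimic a transporter inhibitor (as in the rifampicin experiments) by
# lowering the uptake rate: peak liver enhancement should drop.
_, baseline = liver_curve(k_in=0.4, k_out=0.05)
_, inhibited = liver_curve(k_in=0.1, k_out=0.05)
```

Fitting such rate constants to measured liver concentration profiles, and watching them shift under co-administered inhibitors, is the quantitative idea behind the DCE-MRI assessment of drug-drug interactions.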
APA, Harvard, Vancouver, ISO, and other styles
29

Urbánková, Michaela. "Hodnocení finanční situace podniku a návrhy na její zlepšení." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2013. http://www.nusl.cz/ntk/nusl-223937.

Full text
Abstract:
The thesis focuses on evaluating the financial situation of a business company using various methods and indicators of financial analysis. The practical part is based on the analysis of the company's financial statements by means of absolute indicators, financial ratios and selected models built on systems of indicators. In conclusion, on the basis of the results of this assessment of the company's financial health, measures are proposed to improve the operation of the company in the coming years.
APA, Harvard, Vancouver, ISO, and other styles
30

Bastide, Dorinel-Marian. "Handling derivatives risks with XVAs in a one-period network model." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM027.

Full text
Abstract:
La réglementation requiert des établissements bancaires d'être en mesure de conduire des analyses de scénarios de tests de résistance (stress tests) réguliers de leurs expositions, en particulier face aux chambres de compensation (CCPs) auxquels ils sont largement exposés, en appliquant des chocs de marchés pour capturer le risque de marché et des chocs économiques pouvant conduire à l'état de faillite, dit aussi de défaut, divers acteurs financiers afin de refléter les risques de crédit et de contrepartie. Un des rôles principaux des CCPs est d'assurer par leur interposition entre acteurs financiers la réduction du risque de contrepartie associé aux pertes potentiels des engagements contractuels non respectés dus à la faillite d'une ou plusieurs des parties engagées. Elles facilitent également les divers flux financiers des activités de trading même en cas de défaut d'un ou plusieurs de leurs membres en re-basculant certaines des positions de ces membres et en allouant toute perte qui pourrait se matérialiser suite à ces défauts aux membres survivants . Pour développer une vision juste des risques et disposer d'outils performants de pilotage du capital, il apparaît essentiel d'être en mesure d'appréhender de manière exhaustive les pertes et besoins de liquidités occasionnés par ces divers chocs dans ces réseaux financiers ainsi que d'avoir une compréhension précise des mécanismes sous-jacents. Ce projet de thèse aborde différentes questions de modélisation permettant de refléter ces besoins, qui sont au cœur de la gestion des risques d'une banque dans les environnements actuels de trading centralisé. 
Nous commençons d'abord par définir un dispositif de modèle statique à une période reflétant les positions hétérogènes et possibilité de défauts joints de multiples acteurs financiers, qu'ils soient membres de CCPs ou autres participants financiers, pour identifier les différents coûts, dits de XVA, générés par les activités de clearing et bilatérales avec des formules explicites pour ces coûts. Divers cas d'usage de ce dispositif sont illustrés avec des exemples d'exercices de stress test sur des réseaux financiers depuis le point de vue d'un membre ou de novation de portefeuille de membres en défaut sur des CCPs avec les autres membres survivants. Des modèles de distributions à queues épaisses pour générer les pertes sur les portefeuilles et les défauts sont privilégiés avec l'application de techniques de Monte-Carlo en très grande dimension accompagnée des quantifications d'incertitudes numériques. Nous développons aussi l'aspect novation de portefeuille de membres en défauts et les transferts de coûts XVA associés. Ces novations peuvent s'exécuter soit sur les places de marchés (exchanges), soit par les CCP elles-mêmes qui désignent les repreneurs optimaux ou qui mettent aux enchères les positions des membres défaillants avec des expressions d'équilibres économiques. Les défauts de membres sur plusieurs CCPs en commun amènent par ailleurs à la mise en équation et la résolution de problèmes d'optimisation multidimensionnelle du transfert des risques abordées dans ces travaux<br>Finance regulators require banking institutions to be able to conduct regular scenario analyses to assess their resistance to various shocks (stress tests) of their exposures, in particular towards clearing houses (CCPs) to which they are largely exposed, by applying market shocks to capture market risk and economic shocks leading some financial players to bankruptcy, known as default state, to reflect both credit and counterparty risks. 
By interposing themselves between financial actors, CCPs have as one of their main purposes limiting the counterparty risk arising from contractual payment failures caused by one or several defaults among the engaged parties. They also facilitate the various financial flows of trading activities even in the event of default of one or more of their members, by re-arranging certain positions and allocating any loss that may materialize following these defaults to the surviving members. To develop a relevant view of risks and ensure effective capital-steering tools, it is essential for banks to be able to comprehensively understand the losses and liquidity needs caused by these various shocks within these financial networks, as well as the underlying mechanisms. This thesis project aims at tackling the modelling issues raised by these needs, which are at the heart of risk management practices for banks in cleared trading environments. We begin by defining a one-period static model reflecting the heterogeneous market positions and the possible joint defaults of multiple financial players, being members of CCPs and other financial participants, to identify the different costs, known as XVAs, generated by both clearing and bilateral activities, with explicit formulas for these costs. Various use cases of this modelling framework are illustrated with examples of stress-test exercises on financial networks from a member's point of view, or the novation of the portfolios of defaulted CCP members to the other surviving members. Fat-tailed distributions are favoured to generate portfolio losses and defaults, with the application of very large-dimension Monte-Carlo methods along with quantification of the numerical uncertainties. We also expand on the novation of the portfolios of defaulted members and the associated transfers of XVA costs. 
These novations can be carried out either on the marketplaces (exchanges) or by the CCPs themselves, which designate the optimal buyers or auction off the defaulted positions, with dedicated economic equilibrium problems. Defaults of members common to several CCPs also lead to the formulation and resolution of multidimensional risk-transfer optimization problems, which are addressed in this thesis.
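A heavily simplified, one-period Monte-Carlo sketch in the spirit of this setup: member losses are drawn from a fat-tailed (Student-t) distribution, a member defaults when its loss exceeds its posted collateral, and the uncovered loss is mutualised pro rata among survivors. Member sizes, collateral levels and the tail index are illustrative assumptions, not the thesis's calibrated XVA formulas:

```python
import numpy as np

# One-period default-loss simulation for a stylised clearing network.
# All parameters are invented for illustration.

rng = np.random.default_rng(42)
n_members, n_scenarios, dof = 8, 100_000, 3.0

exposure = rng.uniform(0.5, 2.0, n_members)      # position sizes per member
collateral = 2.5 * exposure                      # margin posted per member
losses = exposure * rng.standard_t(dof, size=(n_scenarios, n_members))

defaulted = losses > collateral                  # loss exceeds posted margin
uncovered = np.where(defaulted, losses - collateral, 0.0).sum(axis=1)

survivor_exposure = np.where(~defaulted, exposure, 0.0).sum(axis=1)
valid = survivor_exposure > 0                    # ignore the all-default corner
expected_allocation = np.mean(uncovered[valid] / survivor_exposure[valid])

default_prob = defaulted.any(axis=1).mean()      # P(at least one default)
```

The heavy t-tails make joint shortfalls far more frequent than a Gaussian model would suggest, which is why the thesis favours fat-tailed distributions and large-dimension Monte-Carlo with uncertainty quantification.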
APA, Harvard, Vancouver, ISO, and other styles
31

Gardner, Masako Amai. "Annual Exceedance Probability Analysis." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd944.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hofmann, Eduard. "Hodnocení finanční situace podniku a návrhy na její zlepšení." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2013. http://www.nusl.cz/ntk/nusl-224238.

Full text
Abstract:
The Master’s thesis deals with the evaluation and assessment of the financial situation and financial health of the company BLANESTA, s.r.o. over the years 2008 to 2011 by means of selected methods of financial analysis. The aim of this work is to propose changes leading to more efficient use of business resources, chosen on the basis of the problems identified in the evaluation of the company's current financial state.
APA, Harvard, Vancouver, ISO, and other styles
33

Другова, Олена Сергіївна. "Оцінка конкурентного потенціалу підприємств машинобудування". Thesis, НТУ "ХПІ", 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/17474.

Full text
Abstract:
Дисертація на здобуття наукового ступеня кандидата економічних наук за спеціальністю 08.00.04 – економіка та управління підприємствами (за видами економічної діяльності). – Національний технічний університет "Харківський політехнічний інститут", 2015. Дисертаційна робота присвячена актуальним проблемам теоретичних, методичних та практичних аспектів оцінки конкурентного потенціалу. В дисертації досліджено теоретичне підґрунтя, розкрито сутність, обґрунтовано характер взаємозв'язку та уточнено поняття конкурентного потенціалу, конкурентоспроможності, конкурентої позиції. У роботі визначено понятійний апарат теорії потенціалу, уточнено сутність конкурентного потенціалу підприємства, який, на відміну від існуючих дефініцій, подано як можливості ресурсів, здатностей і компетенцій підприємства формувати його конкурентні переваги порівняно із іншими господарюючими суб'єктами на обраному ринковому сегменті. Розроблено систему оціночних показників рівня конкурентного потенціалу підприємства, яка відрізняється складом його елементів (фінансового, виробничого, трудового, збутового, управлінського) і показниками, що їх описують та дозволяє визначити і оцінити стан ресурсних і функціональних можливостей підприємства у конкурентному середовищі; Обґрунтовано науково–методичний підхід до порівняльної оцінки конкурентного потенціалу підприємства, який, на відміну від існуючих, ґрунтується на використанні методів багатомірного аналізу і дозволяє ранжувати складові конкурентного потенціалу підприємства за їх рівнем у конкурентній групі. 
Запропоновано структурно – логічні моделі прийняття управлінських рішень щодо формування, розвитку або підтримки досягнутого рівня конкурентного потенціалу, що ґрунтуються на його складових і дозволяють сформувати релевантний виявленим проблемам портфель заходів.<br>Thesis for granting the degree of a candidate of economic sciences in speciality 08.00.04 - economy and management of the enterprise (according to the type of economic activity) – National Technical University "Kharkiv Politechnical Institute", 2015. The dissertation work is dedicated to actual problems of theoretical, methodical and practical aspects of competitive potential estimation. In this thesis the theoretical basis of the following notions is revealed, the nature of their interrelation is justified and they are profoundly elaborated: competitive potential, competitiveness, competitive position. Definition of the notion "competitive potential" is validated in this work, which allowed to consider it as capabilities of resources, endowments and competencies of the enterprise to form competitive advantage in relation to other economic entities on the specific market segment. Approach to evaluate the level of the enterprise competitive potential has been established, which is substantiated using the combination of its elements (financial, production, labor, distributional, managerial) and indicators describing them. This particular approach allows outlining and defining the state of resources and functional capabilities of the enterprise in the competitive environment. Methodical approach is grounded on comparative estimation of the competitive potential of the enterprise, which, unlike existing, is based on use of methods of the multivariate analysis and allows to rank the enterprises according to their level in a competitive group. Logico-structural models for taking management decisions are offered for shaping, development and enhancement of the competitive potential. 
They enable elaboration of a portfolio of necessary actions within the relevant period.
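The comparative estimation described above — scoring each enterprise on the five potential components and ranking it within its competitive group — can be reduced to a minimal sketch using min-max normalisation and an unweighted composite score. The enterprise names and figures are invented for illustration, and the thesis's actual multivariate methods are richer than this stand-in:

```python
# Minimal composite-score ranking over the five potential components
# (financial, production, labor, distribution, management).

def minmax_normalise(column):
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in column]

def rank_enterprises(scores):
    """scores: {name: [component values]} -> (names best-first, composites)."""
    names = list(scores)
    columns = list(zip(*scores.values()))          # one column per component
    normalised = list(zip(*[minmax_normalise(c) for c in columns]))
    composite = {n: sum(row) / len(row) for n, row in zip(names, normalised)}
    return sorted(names, key=lambda n: composite[n], reverse=True), composite

group = {
    "Plant A": [0.8, 0.6, 0.7, 0.5, 0.9],
    "Plant B": [0.4, 0.9, 0.5, 0.6, 0.4],
    "Plant C": [0.3, 0.2, 0.4, 0.3, 0.5],
}
ranking, composite = rank_enterprises(group)
```

Normalising within the competitive group is what makes the ranking relative, mirroring the idea of positioning each enterprise's competitive potential against its peers rather than against an absolute scale.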
APA, Harvard, Vancouver, ISO, and other styles
34

Другова, Олена Сергіївна. "Оцінка конкурентного потенціалу підприємств машинобудування". Thesis, НТУ "ХПІ", 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/17470.

Full text
Abstract:
Дисертація на здобуття наукового ступеня кандидата економічних наук за спеціальністю 08.00.04 – економіка та управління підприємствами (за видами економічної діяльності). – Національний технічний університет "Харківський політехнічний інститут", 2015. Дисертаційна робота присвячена актуальним проблемам теоретичних, методичних та практичних аспектів оцінки конкурентного потенціалу. В дисертації досліджено теоретичне підґрунтя, розкрито сутність, обґрунтовано характер взаємозв'язку та уточнено поняття конкурентного потенціалу, конкурентоспроможності, конкурентої позиції. У роботі визначено понятійний апарат теорії потенціалу, уточнено сутність конкурентного потенціалу підприємства, який, на відміну від існуючих дефініцій, подано як можливості ресурсів, здатностей і компетенцій підприємства формувати його конкурентні переваги порівняно із іншими господарюючими суб'єктами на обраному ринковому сегменті. Розроблено систему оціночних показників рівня конкурентного потенціалу підприємства, яка відрізняється складом його елементів (фінансового, виробничого, трудового, збутового, управлінського) і показниками, що їх описують та дозволяє визначити і оцінити стан ресурсних і функціональних можливостей підприємства у конкурентному середовищі; Обґрунтовано науково–методичний підхід до порівняльної оцінки конкурентного потенціалу підприємства, який, на відміну від існуючих, ґрунтується на використанні методів багатомірного аналізу і дозволяє ранжувати складові конкурентного потенціалу підприємства за їх рівнем у конкурентній групі. 
Запропоновано структурно – логічні моделі прийняття управлінських рішень щодо формування, розвитку або підтримки досягнутого рівня конкурентного потенціалу, що ґрунтуються на його складових і дозволяють сформувати релевантний виявленим проблемам портфель заходів.<br>Thesis for granting the degree of a candidate of economic sciences in speciality 08.00.04 – economy and management of the enterprise (according to the type of economic activity) – National Technical University "Kharkiv Politechnical Institute", 2015. The dissertation work is dedicated to actual problems of theoretical, methodical and practical aspects of competitive potential estimation. In this thesis the theoretical basis of the following notions is revealed, the nature of their interrelation is justified and they are profoundly elaborated: competitive potential, competitiveness, competitive position. Definition of the notion "competitive potential" is validated in this work, which allowed to consider it as capabilities of resources, endowments and competencies of the enterprise to form competitive advantage in relation to other economic entities on the specific market segment. Approach to evaluate the level of the enterprise competitive potential has been established, which is substantiated using the combination of its elements (financial, production, labor, distributional, managerial) and indicators describing them. This particular approach allows outlining and defining the state of resources and functional capabilities of the enterprise in the competitive environment. Methodical approach is grounded on comparative estimation of the competitive potential of the enterprise, which, unlike existing, is based on use of methods of the multivariate analysis and allows to rank the enterprises according to their level in a competitive group. Logico-structural models for taking management decisions are offered for shaping, development and enhancement of the competitive potential. 
They enable elaboration of a portfolio of necessary actions within the relevant period.
APA, Harvard, Vancouver, ISO, and other styles
35

VILLA, SIMONE. "Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.

Full text
Abstract:
The analysis of the huge amount of financial data made available by electronic markets calls for new models and techniques to effectively extract knowledge to be exploited in an informed decision-making process. The aim of this thesis is to introduce probabilistic graphical models that can be used to reason and to perform actions in such a context. In the first part of the thesis, we present a framework which exploits Bayesian networks to perform portfolio analysis and optimization in a holistic way. It leverages the compact and efficient representation of high-dimensional probability distributions offered by Bayesian networks and their ability to perform evidential reasoning in order to optimize the portfolio according to different economic scenarios. In many cases, we would like to reason about market change over time, i.e. to express queries as probability distributions over time. Continuous time Bayesian networks can be used to address this issue. In the second part of the thesis, we show how this model can be used to tackle real financial problems, and we describe two notable extensions. The first concerns classification: we introduce an algorithm for learning these classifiers from Big Data and describe their straightforward application to the foreign exchange prediction problem in the high-frequency domain. 
The second is related to non-stationary domains, where we explicitly model the presence of statistical dependencies in multivariate time series while allowing them to change over time. In the third part of the thesis, we describe the use of continuous time Bayesian networks within the Markov decision process framework, which provides a model for sequential decision-making under uncertainty. We introduce a method to control continuous time dynamic systems, based on this framework, that relies on additive and context-specific features to scale up to large state spaces. Finally, we show the performance of our method in a simplified but meaningful trading domain.
APA, Harvard, Vancouver, ISO, and other styles
36

Sari, Vanessa. "Monitoramento e modelagem da produção de sedimentos em uma bacia hidrográfica no noroeste do Rio Grande do Sul." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/172321.

Full text
Abstract:
The understanding of hydrosedimentological dynamics in a watershed can be obtained by monitoring the hydrosedimentological variables and by modeling these processes. In this context, this research analyzed the efficiency of the Soil and Water Assessment Tool (SWAT) in predicting the hydrosedimentological processes in the Taboão basin (Pejuçara, RS), considering the outputs (flow and sediment yield) at monthly and daily time steps. For that, hourly rainfall data from 2008 to 2016 were monitored at four pluviographs installed in the basin (PVGs 34, 40, 43 and 51), and climate data were obtained from the Cruz Alta meteorological station. The flow information for the years 2011 to 2016 was obtained by converting the monitored water-level data into flow using a rating curve. The suspended sediment concentration (SSC), from 2013 to 2015, was estimated using artificial neural network (ANN) models, with turbidity and water-level data monitored at the basin outlet as inputs. Gaps in the hourly rainfall records were filled by models combining ANNs with the simple mean or the inverse-distance weighted mean, using rainfall data from the neighboring stations as input. Gaps in the water-level data were filled by ANN models, which used as input water levels monitored in sub-basins adjacent to or embedded in the Taboão basin (Donato, Turcato, Alemão and Andorinhas basins) and mean precipitation data from the four pluviographs used in this research. The temporal lags between the water levels of the different basins were determined, and the use of the mean precipitation with linear and/or exponential temporal filters was tested. Gaps in the turbidity records were filled by ANN models using water-level information monitored every 10 minutes at the basin outlet.
The SWAT model calibration for the hydrological processes was performed using daily and monthly flow data for the years 2013, 2014 and 2016, and the verification step was performed for the years 2011 and 2015, considering the Green & Ampt method for infiltration estimation and a two-year warm-up period (2008-2009). The calibration of the model for sediment yield was performed for the years 2013 and 2015, and the verification was carried out for the year 2014. The calibration and sensitivity analysis of the parameters were performed with SWAT-CUP, using the SUFI-2 algorithm. The Nash-Sutcliffe coefficient (NS) of the ANNs used to fill precipitation gaps varied between 0.35, classified as "Unsatisfactory", and 0.86, evaluated as "Very Good", according to the criteria proposed by Moriasi et al. (2007). Of the 13 ANNs developed to fill water-level gaps, only one was classified as "Satisfactory" during training; the others were classified as "Very Good". In the verification step, seven ANNs were considered "Very Good" and five "Good". Of the five ANNs developed to fill the turbidity gaps, four showed "Good" performance during training and one "Very Good"; in verification, two ANNs performed "Very Good", one "Good" and two "Satisfactory". The performance statistics of these ANN models demonstrate that such networks are an interesting alternative for obtaining continuous series of these data, allowing the later use of the records for hydrosedimentological modeling.
The SWAT calibration for monthly flow showed "Very Good" performance (NS=0.78), and for daily flow "Good" performance (NS=0.72). In verification, the model maintained "Good" performance (NS=0.68) for daily flow, decreasing to "Satisfactory" performance (NS=0.64) for the monthly-scale simulation. For the estimation of sediment yield at the monthly scale, the model performance was considered "Good" both in calibration (NS=0.66) and in verification (NS=0.70). At the daily scale, the performance was "Satisfactory" in calibration (NS=0.64) and "Unsatisfactory" in verification (NS=0.38). These results indicate that the SWAT model is a promising tool for hydrosedimentological forecasting in the Taboão basin, especially for simulating the hydrological processes. However, there are limitations for estimating sediment yield, especially at the daily scale. These limitations stem from erosive processes in the basin (gully erosion) that are not simulated by the routines present in the SWAT model, and from the dominance of subsurface flow with occurrence of piping; adjustments to the model routines are therefore indicated to better represent these processes.
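The Nash-Sutcliffe coefficient used above to grade both the ANN gap-filling and the SWAT runs can be computed directly from observed and simulated series; a minimal sketch, with the performance classes labelled as in the abstract (thresholds following Moriasi et al., 2007; the sample series is illustrative, not thesis data):

```python
def nash_sutcliffe(observed, simulated):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

def moriasi_rating(ns):
    # Performance classes cited in the abstract (Moriasi et al., 2007).
    if ns > 0.75:
        return "Very Good"
    if ns > 0.65:
        return "Good"
    if ns > 0.50:
        return "Satisfactory"
    return "Unsatisfactory"

obs = [10.0, 12.0, 9.0, 14.0, 11.0]   # illustrative observed flows
sim = [9.5, 12.5, 9.2, 13.0, 11.4]    # illustrative simulated flows
print(moriasi_rating(nash_sutcliffe(obs, sim)))
```

NS equals 1 for a perfect fit and can be arbitrarily negative; the 0.50/0.65/0.75 cut-offs reproduce the ratings quoted in the abstract (e.g. NS=0.72 "Good", NS=0.38 "Unsatisfactory").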
APA, Harvard, Vancouver, ISO, and other styles
37

Радова, Н. В., Н. В. Радова та N. Radova. "Фінансове забезпечення процесів злиття та поглинання банківських установ". Diss., Одеський національний економічний університет, 2014. http://dspace.oneu.edu.ua/jspui/handle/123456789/3707.

Full text
Abstract:
In the dissertation, theoretical principles are developed and methodological approaches to the financial support of mergers and acquisitions of banking institutions are improved. The interpretation of the concept "financial support of mergers and acquisitions of banking institutions" is specified. The regularities and contradictions in the implementation of bank mergers and acquisitions, aimed at increasing their competitiveness in Ukraine, are identified. A structural and functional model of financial support for bank mergers and acquisitions is developed, applying a set of appropriate methods, tools, levers and types of financing.
The role of government regulation in improving the effectiveness of monitoring the activities of banks participating in the mergers and acquisitions market is clarified. The scientific and methodical approach to enhancing the management of financial support for bank mergers and acquisitions is improved. The main stages of financing bank mergers and acquisitions are identified. The thesis further develops scientific and methodological principles on the expediency of bank consolidation for achieving profitability; an approach to financing that accounts for the specificity of business processes at every stage of a merger or acquisition; and a scientific approach to raising the level of corporate governance in the new bank and defining the development strategy of newly established banks with foreign capital participation.
APA, Harvard, Vancouver, ISO, and other styles
38

Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.

Full text
Abstract:
The design of hydropower projects requires a comprehensive planning process in order to achieve the objective to maximise exploitation of the existing hydropower potential as well as future revenues of the plant. For this purpose, and to satisfy approval requirements for a complex hydropower development, it is imperative at planning stage that the conceptual development contemplates a wide range of influencing design factors and ensures appropriate consideration of all related aspects. Since the majority of technical and economical parameters that are required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent to commonly used deterministic analysis is the lack of objectivity in the selection of input parameters. Moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from an information deficit during the planning phase, and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects.
The assessment includes an exemplary verification of the extent to which Random Set Theory can be utilised for the determination of input parameters relevant to the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
APA, Harvard, Vancouver, ISO, and other styles
39

Хлопотов, Д. С., та D. S. Khlopotov. "Факторы повышения конкурентоспособности международного банка на российском рынке : магистерская диссертация". Master's thesis, б. и, 2020. http://hdl.handle.net/10995/95063.

Full text
Abstract:
The relevance of the study stems from the fact that banking is a highly competitive environment: in light of the ongoing globalization and consolidation of the financial sector, the number of private banks is decreasing, driven by mergers, acquisitions, and the inability of smaller players to withstand large market agents while meeting the requirements set by regulators. Financial institutions are forced to expand their arsenal of competitive advantages, actively use retained earnings to form distinctive competitive factors, and monitor existing offers on the market. As a result, credit institutions need to take a critical view of the successes and failures of foreign colleagues, increasing the efficiency of the banking business and forming their own unique competitive advantages.
The aim of the thesis is to assess the factors for increasing the competitiveness of an international bank in the context of the current domestic financial sector. The object of the research is the Russian branch of AO Raiffeisenbank; the subject is the process of forming its competitive advantages and the system of economic, organizational and financial mechanisms for improving the bank's performance. The results of the thesis are a multifactor model of net profit formation, recommendations for optimizing existing processes, and an economic assessment of the effects of implementing the proposed tools.
APA, Harvard, Vancouver, ISO, and other styles
40

Rydberg, William. "Att förstå arbetssättet med identifiering av hållbarhetsmål : En studie i kvalitetsteknik utförd vid LKAB." Thesis, Uppsala universitet, Institutionen för samhällsbyggnad och industriell teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445570.

Full text
Abstract:
The purpose of this study is to submit proposals for the company Luossavaara-Kiirunavaara (LKAB) and its methodology for identifying sustainability goals at group level. Two research questions were formulated to answer the study's purpose: how LKAB currently identifies sustainability goals at group level, and what an optimized methodology for identifying sustainability goals could look like. The data collection consists of interviews with various employees who have been involved in identifying sustainability goals. The results show that LKAB's work on identifying sustainability goals has been based on a project-based approach; this is the study's first conclusion. The analysis shows that the identification of group-level goals could instead be carried out with a process-based approach.
The analysis and discussion further show that an optimized way of working could be achieved by dividing the main process into two main processes and developing a process-based methodology for each; this is the study's second conclusion.
APA, Harvard, Vancouver, ISO, and other styles
41

Magerle, Tobias. "Enhancement of an implemented rating model and impact on existing risk management processes: a hands-on approach from financial supplier risk management within the automotive industry." Master's thesis, 2014. http://hdl.handle.net/10362/14600.

Full text
Abstract:
The purpose of this paper is to conduct a methodical drawback analysis of a financial supplier risk management approach which is currently implemented in the automotive industry. Based on identified methodical flaws, the risk assessment model is further developed by introducing a malus system which incorporates hidden risks into the model and by revising the derivation of the most central risk measure in the current model. Both methodical changes lead to significant enhancements in terms of risk assessment accuracy, supplier identification and workload efficiency.
NSBE - UNL
APA, Harvard, Vancouver, ISO, and other styles
42

Shen, Chun-Cheng, and 沈俊誠. "Integrating Risk Assessment and Credit Rating Model for Financial Institutions." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/60898054775382102741.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Industrial Engineering and Management (ROC academic year 92).
Risk assessment and credit rating are two important indicators for financial institutions when evaluating the repayment capability of loan applicants. However, the rising ratio of bad loans drives financial institutions to review and reconstruct their credit risk assessment models so that they can make correct and efficient lending decisions under economic depression. In general, most models are built upon five dimensions: the applicant's personality, payment capacity, capital, business condition, and collateral. Many studies have proposed credit rating models for evaluating listed companies, but these models become invalid when employed to evaluate the credit of small and medium-sized enterprises (SMEs). Therefore, an integrated risk assessment and credit rating model is developed for small-business loan applicants. The proposed procedure has five stages: (1) selecting variables and collecting data; (2) finding appropriate variable weights using the Analytic Hierarchy Process; (3) constructing a loan risk assessment model to discriminate between good and bad borrowers, using multivariate discriminant analysis, logistic regression and the kernel method separately; (4) constructing a multi-level credit rating model for good and bad loans; (5) constructing a survival prediction model for bad borrowers and a default probability table, which give financial institutions information for deciding on proper payment periods. The Response Surface Method (RSM) is also used in this study to find the optimal parameter settings of the kernel method. Finally, a real case is provided to demonstrate the effectiveness of the proposed procedure.
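Stage (2) of the procedure weights the credit variables with the Analytic Hierarchy Process (AHP); a minimal sketch of the common row-geometric-mean approximation of AHP priorities (the three criteria and the pairwise judgments below are hypothetical, not taken from the thesis):

```python
from math import prod

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix (row geometric mean).

    pairwise[i][j] expresses how much more important criterion i is than j,
    on Saaty's 1-9 scale, with pairwise[j][i] = 1 / pairwise[i][j].
    """
    n = len(pairwise)
    geo = [prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(geo)
    return [g / total for g in geo]                     # normalize to sum 1

# Hypothetical judgments: payment capacity vs capital vs collateral.
matrix = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 0.5, 1.0],
]
weights = ahp_weights(matrix)
print([round(w, 3) for w in weights])
```

The weights sum to one and preserve the judged ordering; in a full AHP application one would also check the consistency ratio of the matrix before using the weights.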
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Wen-Tsang, and 劉文倉. "A Risk Assessment Model for IT Outsourcing in Corporate Financial Services." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/49898997104824805432.

Full text
Abstract:
Master's thesis, Shih Hsin University, Graduate Institute of Information Management (ROC year 102). In recent years, more and more enterprises have concentrated on their core businesses and outsourced their information technology in order to cut costs and increase efficiency; the risks of IT outsourcing have therefore become increasingly important. Past research on IT outsourcing has mostly focused on success factors and on vendor selection decisions, while the causes, effects, and countermeasures of IT outsourcing risk have seldom been studied. This study analyzes the potential risk factors of information systems outsourcing in bank financial business by reviewing literature from Taiwan and abroad, together with experience from practical work. The results show that changes in requirements, a lack of effective project management, a lack of customer participation, and high complexity are positively related to IT outsourcing risk.
APA, Harvard, Vancouver, ISO, and other styles
44

Thang, Nguyen Toan, and 阮全勝. "Financial Assessment in State Administrative Units, Research Model at Project Management Unit." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/8z4wt5.

Full text
Abstract:
Master's thesis, National Yunlin University of Science and Technology, Department of Business Administration (ROC year 107). Ensuring financial autonomy is an essential requirement for state non-business units in Vietnam. According to government policy (Decree No. 55/2012/ND-CP of June 28, 2012, on the establishment, reorganization, and dissolution of public non-business units), state non-business units are established and dissolved according to the requirements of their professional activities and must operate efficiently, with financial efficiency being the key measure. The Project Management Unit under the Ministry of Culture, Sports and Tourism is a financially autonomous unit established in 2016 under Decree No. 59/2015/ND-CP of June 18, 2015, on the management of construction investment projects, and Decree No. 16/2015/ND-CP of February 14, 2015, which regulates the autonomy mechanism of public non-business units. The motivation of this study is to ensure stable operation and sustainable development: to carry out its task of managing projects of a particular nature in the fields of culture, sports, and tourism, the Project Management Unit needs effective financial management. Using financial analysis, the study proposes financial management solutions that secure the autonomy the Unit needs to implement and manage construction investment projects under the Ministry of Culture, Sports and Tourism as well as other commissioned projects. The study's outcome measures (NPV, IRR) can also be applied by agencies and departments under the Ministry in allocating investment capital effectively, with the relevant government regulations serving as boundary conditions.
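The NPV and IRR measures cited as study outcomes can be sketched for any project cash-flow series; the cash flows below are illustrative, not drawn from the thesis:

```python
def npv(rate, cashflows):
    """Net present value of cash flows, one per period (t = 0, 1, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection (assumes a single sign change,
    i.e. NPV decreasing in the rate, as with an investment-then-income project)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: 100 invested now, 40 returned in each of 4 years.
flows = [-100.0, 40.0, 40.0, 40.0, 40.0]
print(round(npv(0.10, flows), 2))  # NPV at a 10% discount rate
print(round(irr(flows), 4))        # IRR: the rate at which NPV is zero
```

A project is conventionally acceptable when NPV is positive at the chosen discount rate, equivalently when the IRR exceeds that rate.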
APA, Harvard, Vancouver, ISO, and other styles
45

Sharma, Abhinav 1985. "Assessment of polymer injectivity during chemical enhanced oil recovery processes." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-12-2586.

Full text
Abstract:
Polymers play a key role in several EOR processes, such as polymer flooding, surfactant-polymer flooding, and alkaline-surfactant-polymer flooding, because mobility control is critical to achieving high oil recovery. Numerical simulators are used to predict the performance of these processes and, in particular, the injection rate of the chemical solutions containing polymer, since project economics is very sensitive to injection rates. Injection rates are governed by the injection viscosity, so it is very important to model polymer viscosity accurately. For the predictions to be accurate, not only must the viscosity model be accurate, but the equivalent shear rate in each gridblock must also be computed accurately, because the non-Newtonian viscosity models depend on this shear rate. As gridblock size increases, this calculation becomes less numerically accurate, especially close to wells. This research presents improvements to the polymer viscosity model. Using the improved shear-thinning model, the laboratory polymer rheology data were matched more closely. For the first time, polymer viscosity was modeled over the complete velocity range using the Unified Viscosity Model for published laboratory data; new models were developed for the relaxation time, time constant, and high-shear viscosity during that match. These models were then used to match laboratory data for a currently available HPAM polymer and to predict its viscosity at various concentrations over the full range of flow velocities. This research also demonstrates the need for an injectivity correction when large grid sizes are used: computational constraints force large gridblocks in simulations of large reservoirs, which introduces errors in near-wellbore shear-rate calculations, underestimates polymer solution viscosity, and thereby yields incorrect injectivity.
In some cases, depending on the size of the well gridblock, the difference between a fine-scale and a coarse simulation can be as much as 100%. This study focuses on minimizing those errors; although the methodology needs further work, it can improve the accuracy of reservoir simulation studies of chemical enhanced oil recovery processes involving polymers.
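The shear-rate-dependent viscosity behaviour described above can be illustrated with the generic Carreau model, a standard shear-thinning form (not the dissertation's Unified Viscosity Model); the parameter values here are made up:

```python
def carreau_viscosity(shear_rate, mu0=0.05, mu_inf=0.001, lam=1.0, n=0.5):
    """Carreau model: a Newtonian plateau mu0 at low shear rates and
    power-law thinning toward mu_inf at high shear rates.
    mu(g) = mu_inf + (mu0 - mu_inf) * (1 + (lam*g)^2)^((n-1)/2)
    All parameter values here are illustrative, not fitted to any data."""
    return mu_inf + (mu0 - mu_inf) * (1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

# Viscosity falls as the equivalent shear rate in a gridblock rises,
# which is why underestimated near-well shear rates inflate viscosity.
for rate in (0.01, 1.0, 100.0):
    print(rate, carreau_viscosity(rate))
```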
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Mao-I., and 張貿易. "Study of Financial Products Investment Risk Assessment-Using VaR Model-based Historical Simulation Method." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/82065021383799933144.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Accounting (ROC year 91). Many enterprises at home and abroad have faced bankruptcy or reorganization because they failed to exercise sufficient risk control at a time when trading in financial products is increasing drastically. The emergence of Value at Risk (VaR) has brought risk control in financial product markets to a new milestone, yet domestic law limits the "disclosure" of VaR to what is required by Publication No. 27 of the Financial Accounting Standards. From the enterprise's perspective, this study helps the domestic financial industry select an appropriate VaR model, estimate firm-specific VaR with it, analyze the significance of the results, and judge whether the model is adequate; it presents an end-to-end application of VaR for the financial industry and supports internal control and the assessment of investment risks. It also helps market participants understand and apply VaR information, making the quantified risk useful for decision making. Finally, the study supplements the requirements of Publication No. 27 and can serve as a reference for domestic regulators in preparing related standards. Following an empirical analysis of Taiwan's listed banks and OTC securities firms and a literature review, the historical simulation method is chosen as the model for estimating VaR, and its feasibility for assessing the VaR of the domestic financial industry is tested with back-testing, the uncovered loss ratio, the uncovered loss distance, and the LRuc test proposed by Kupiec (1995).
Results indicate that the historical simulation method gives reasonable estimation efficiency and accuracy for most banking and securities stocks under all four tests: (1) In back-testing, the number of uncovered losses for both types of stocks is comparatively high in FY 2000 at both the 99% and 95% confidence levels, but drops significantly in FY 2001, showing that the FY 2001 VaR was affected by the extreme events of FY 2000 and underscoring the importance of the choice of moving window. (2) By the uncovered loss ratio, the historical simulation method maintains good estimation efficiency for both banking and securities stocks, with the best efficiency observed at the 99% confidence level. (3) By the uncovered loss distance, comparing FY 2001 against FY 2000 shows better estimation efficiency in FY 2001. (4) At both the 99% and 95% confidence levels the LRuc test confirms the method's accuracy for both industries, although the comparatively low test values suggest the historical simulation method tends to be significantly conservative in estimating VaR. Key words: VaR, historical simulation method, back testing
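A compact sketch of the two techniques the thesis combines, historical-simulation VaR and Kupiec's (1995) unconditional-coverage LR test, using simulated returns in place of the banking and securities stock data:

```python
import math
import random

def hist_var(returns, confidence=0.99):
    """Historical-simulation VaR: the empirical quantile of the loss series."""
    losses = sorted(-r for r in returns)
    idx = min(math.ceil(confidence * len(losses)) - 1, len(losses) - 1)
    return losses[idx]

def kupiec_lruc(n_obs, n_exceptions, confidence=0.99):
    """Kupiec's unconditional-coverage LR statistic; under a correct model
    it is asymptotically chi-squared with 1 df (reject above 3.84 at 5%)."""
    p = 1 - confidence
    x, n = n_exceptions, n_obs
    if x == 0:
        return -2 * n * math.log(1 - p)
    phat = x / n
    log_null = (n - x) * math.log(1 - p) + x * math.log(p)
    log_alt = (n - x) * math.log(1 - phat) + x * math.log(phat)
    return -2 * (log_null - log_alt)

random.seed(0)
returns = [random.gauss(0, 0.01) for _ in range(500)]  # stand-in return data
var99 = hist_var(returns, 0.99)
exceptions = sum(1 for r in returns if -r > var99)  # back-testing exception count
print(var99, exceptions, kupiec_lruc(len(returns), exceptions, 0.99))
```

Back-testing counts how often realised losses exceed the reported VaR; Kupiec's test then asks whether that exception frequency is statistically consistent with the model's stated coverage.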
APA, Harvard, Vancouver, ISO, and other styles
47

Li, Tsung-Han, and 李宗翰. "Applying Self-Organizing Map to Construct a Dynamic Assessment Model of Corporate Financial Structure." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/a7q8z6.

Full text
Abstract:
Master's thesis, National Taichung Institute of Technology, Graduate School of Computer Science and Information Engineering (ROC year 99). After the Asian financial crisis, many unhealthy companies fell into financial distress and triggered stock market crashes. Early stock evaluation theory and corporate-health examination models are too simplistic: deriving the relationship between risk and reward from a single linear relationship either oversimplifies or imposes numerous constraints, while the real financial market is dynamic and highly complex, so those constraints cannot be satisfied. This study therefore uses the Self-Organizing Map (SOM) to build an unsupervised, visualized model for analyzing how the financial health of different enterprises changes over time. Exploiting SOM's unsupervised, visualized clustering, corporate financial health can be clustered dynamically without the constraints of traditional models: the gradual change of an enterprise's condition is projected onto a two-dimensional surface, and its position is analyzed together with the trajectory of its movement to forecast the most probable direction of change. An enterprise's condition is thereby classified as sound or unsound to support investment decisions. The final experiment shows that SOM effectively distinguishes excellent from poor corporate financial health and, based on position and trajectory, classifies an enterprise as on a growing or declining trend, giving investors a basis for investment decisions.
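A minimal SOM training loop in the spirit of the model described, with hypothetical two-dimensional "financial ratio" vectors standing in for real company data:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=100, lr0=0.5, sigma0=2.0, seed=1):
    """Train a small self-organizing map; returns the (h, w, dim) weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in rng.permutation(data):
            # Best-matching unit: the node whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its grid neighbours toward x.
            gdist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(gdist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

# Hypothetical "financial ratio" data in two clusters (healthy vs. distressed).
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.2, 0.05, (30, 2)), rng.normal(0.8, 0.05, (30, 2))])
som = train_som(data)
print(som.shape)
```

After training, each company maps to its best-matching node, and a sequence of yearly mappings traces the kind of trajectory across the 2-D surface that the thesis analyzes.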
APA, Harvard, Vancouver, ISO, and other styles
48

"Advances in Bayesian Modelling and Computation: Spatio-Temporal Processes, Model Assessment and Adaptive MCMC." Diss., 2009. http://hdl.handle.net/10161/1609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ji, Chunlin. "Advances in Bayesian Modelling and Computation: Spatio-Temporal Processes, Model Assessment and Adaptive MCMC." Diss., 2009. http://hdl.handle.net/10161/1609.

Full text
Abstract:
The modelling and analysis of complex stochastic systems with increasingly large data sets, state-spaces and parameters provides major stimulus to research in Bayesian nonparametric methods and Bayesian computation. This dissertation presents advances in both nonparametric modelling and statistical computation stimulated by challenging problems of analysis in complex spatio-temporal systems and core computational issues in model fitting and model assessment. The first part of the thesis, represented by chapters 2 to 4, concerns novel, nonparametric Bayesian mixture models for spatial point processes, with advances in modelling, computation and applications in biological contexts. Chapter 2 describes and develops models for spatial point processes in which the point outcomes are latent, where indirect observations related to the point outcomes are available, and in which the underlying spatial intensity functions are typically highly heterogeneous. Spatial intensities of inhomogeneous Poisson processes are represented via flexible nonparametric Bayesian mixture models. Computational approaches are presented for this new class of spatial point process mixtures and extended to the context of unobserved point process outcomes. Two examples drawn from a central, motivating context, that of immunofluorescence histology analysis in biological studies generating high-resolution imaging data, demonstrate the modelling approach and computational methodology. Chapters 3 and 4 extend this framework to define a class of flexible Bayesian nonparametric models for inhomogeneous spatio-temporal point processes, adding dynamic models for underlying intensity patterns. Dependent Dirichlet process mixture models are introduced as core components of this new time-varying spatial model.
Utilizing such nonparametric mixture models for the spatial process intensity functions allows the introduction of time variation via dynamic, state-space models for parameters characterizing the intensities. Bayesian inference and model-fitting is addressed via novel particle filtering ideas and methods. Illustrative simulation examples include studies in problems of extended target tracking and substantive data analysis in cell fluorescent microscopic imaging tracking problems.

The second part of the thesis, consisting of chapters 5 and 6, concerns advances in computational methods for some core and generic Bayesian inferential problems. Chapter 5 develops a novel approach to estimation of upper and lower bounds for marginal likelihoods in Bayesian modelling using refinements of existing variational methods. Traditional variational approaches only provide lower bound estimation; this new lower/upper bound analysis is able to provide accurate and tight bounds in many problems, so facilitates more reliable computation for Bayesian model comparison while also providing a way to assess adequacy of variational densities as approximations to exact, intractable posteriors. The advances also include demonstration of the significant improvements that may be achieved in marginal likelihood estimation by marginalizing some parameters in the model. A distinct contribution to Bayesian computation is covered in Chapter 6. This concerns a generic framework for designing adaptive MCMC algorithms, emphasizing the adaptive Metropolized independence sampler and an effective adaptation strategy using a family of mixture distribution proposals. This work is coupled with development of a novel adaptive approach to computation in nonparametric modelling with large data sets; here a sequential learning approach is defined that iteratively utilizes smaller data subsets.
Under the general framework of importance sampling based marginal likelihood computation, the proposed adaptive Monte Carlo method and sequential learning approach can facilitate improved accuracy in marginal likelihood computation. The approaches are exemplified in studies of both synthetic data examples, and in a real data analysis arising in astro-statistics.

Finally, chapter 7 summarizes the dissertation and discusses possible extensions of the specific modelling and computational innovations, as well as potential future work.
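The Metropolized independence sampler at the core of the adaptive MCMC work accepts a proposal y drawn from a density q, independent of the current state x, with probability min(1, p(y)q(x) / (p(x)q(y))). A toy caricature, using a single Gaussian proposal periodically re-fit to the chain's history rather than the mixture-proposal family developed in the dissertation:

```python
import math
import random

def target_logpdf(x):
    """Toy target: standard normal log-density (up to a constant)."""
    return -0.5 * x * x

def adaptive_imh(n_iter=5000, adapt_every=500, seed=42):
    """Metropolized independence sampler whose Gaussian proposal mean/scale
    are re-fit to the accumulated samples every `adapt_every` iterations."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 3.0  # deliberately poor initial proposal

    def q_logpdf(z):
        return -0.5 * ((z - mu) / sigma) ** 2 - math.log(sigma)

    x = rng.gauss(mu, sigma)
    samples = []
    for i in range(1, n_iter + 1):
        y = rng.gauss(mu, sigma)
        # Independence-sampler acceptance ratio: p(y) q(x) / (p(x) q(y)).
        log_a = target_logpdf(y) + q_logpdf(x) - target_logpdf(x) - q_logpdf(y)
        if rng.random() < math.exp(min(log_a, 0.0)):
            x = y
        samples.append(x)
        if i % adapt_every == 0:  # adapt the proposal to the chain so far
            mu = sum(samples) / len(samples)
            var = sum((s - mu) ** 2 for s in samples) / len(samples)
            sigma = max(math.sqrt(var), 0.1)
    return samples

samples = adaptive_imh()
print(sum(samples) / len(samples))  # should be near the target mean of 0
```

As the proposal approaches the target, the acceptance rate rises; the dissertation's mixture-family adaptation pursues the same effect for targets a single Gaussian cannot match.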
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Guan-Jhih, and 吳冠鋕. "Constructing a Credit Risk Assessment Model for Financial Institution by eXtreme Gradient Boosting Decision Tree." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/4sgzcr.

Full text
APA, Harvard, Vancouver, ISO, and other styles