To see the other types of publications on this topic, follow the link: The maximum price.

Dissertations / Theses on the topic 'The maximum price'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 32 dissertations / theses for your research on the topic 'The maximum price.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Vardar, Ceren. "On the Correlation of Maximum Loss and Maximum Gain of Stock Price Processes." Bowling Green State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1224274306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Thernström, Taina. "Maximum price paid in captive bush dogs (Speothos venaticus)." Thesis, Linköpings universitet, Institutionen för fysik, kemi och biologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-77923.

Abstract:
One way to investigate what animals in captivity might need is to conduct preference and motivational tests. These types of tests can help the animals express different priorities. The motivation can be assessed by having the animals "pay an entry cost" (e.g. push a weighted door) that increases with time to get access to a resource. The highest price that the animals are willing to pay for this resource is called "the maximum price paid". This study tests the maximum price paid for access to food in a group of bush dogs kept at Kolmården Wildlife Park. A simple choice test consisting of four different food items (meat, fish, vegetables and fruit) was first conducted to establish which resource the bush dogs preferred. The results showed that meat and fish were the preferred food items. Secondly, a push-door test was conducted to measure the maximum price paid for the preferred food item. At the most, one individual was willing to lift 11 kg (twice its weight) to get access to meat.
3

Warrier, Deepak. "A branch, price, and cut approach to solving the maximum weighted independent set problem." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5814.

Abstract:
The maximum weighted independent set problem (MWISP) is one of the most well-known and well-studied NP-hard problems in combinatorial optimization. In the first part of the dissertation, I explore efficient branch-and-price (B&P) approaches to solve MWISP exactly. B&P is a useful integer-programming tool for solving NP-hard optimization problems. Specifically, I look at vertex- and edge-disjoint decompositions of the underlying graph. MWISPs on the resulting subgraphs are less challenging, on average, to solve. I use the B&P framework to solve MWISP on the original graph G using these specially constructed subproblems to generate columns. I demonstrate that the vertex-disjoint partitioning scheme is effective for relatively sparse graphs. I also show that the edge-disjoint approach is less effective than the vertex-disjoint scheme because its associated Dantzig-Wolfe Decomposition (DWD) reformulation entails a slow rate of convergence. In the second part of the dissertation, I address convergence properties associated with DWD. I discuss prevalent methods for improving the rate of convergence of DWD, implement specific methods within the edge-disjoint B&P scheme, and show that these methods improve the rate of convergence. In the third part of the dissertation, I focus on identifying new cut-generation methods within the B&P framework; such methods have not been explored in the literature. I present two new methodologies for generating generic cutting planes within the B&P framework. These techniques are not limited to MWISP and can be used in general applications of B&P. The first methodology generates cuts by identifying faces (facets) of subproblem polytopes and lifting the associated inequalities; the second computes Lift-and-Project (L&P) cuts within B&P. I demonstrate the feasibility of both approaches and present preliminary computational tests of each.
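The abstract above centres on the maximum weighted independent set problem itself. As a purely illustrative sketch (not the author's branch-and-price method, which exists precisely because enumeration does not scale), a brute-force MWISP solver in Python makes the problem concrete; the graph and weights below are made-up examples:

```python
from itertools import combinations

def max_weighted_independent_set(vertices, edges, weights):
    """Exhaustive O(2^n) search for the maximum-weight independent set.

    Illustrative only: useful for toy graphs, hopeless beyond them."""
    edge_set = {frozenset(e) for e in edges}
    best, best_w = frozenset(), 0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            # skip subsets that contain an edge (not independent)
            if any(frozenset(p) in edge_set for p in combinations(subset, 2)):
                continue
            w = sum(weights[v] for v in subset)
            if w > best_w:
                best, best_w = frozenset(subset), w
    return best, best_w

# A 5-cycle: the best independent set picks two non-adjacent vertices.
vertices = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
weights = {0: 4, 1: 1, 2: 3, 3: 2, 4: 1}
print(max_weighted_independent_set(vertices, edges, weights))
```

Column generation then replaces this enumeration by repeatedly solving such subproblems on small pieces of the graph.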
4

Ting, Wah. "The impact of the interdisciplinary efforts on the receptivity of guarantee maximum price (GMP) project." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B36789161.

5

Ting, Wah, and 丁華. "The impact of the interdisciplinary efforts on the receptivity of guarantee maximum price (GMP) project." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B36789161.

6

Půžová, Kateřina. "Efektivnost regulace cen léčiv ze společenského pohledu v České republice." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-204971.

Abstract:
The aim of this thesis is to assess the effectiveness of drug price regulation in the Czech Republic from a social point of view. The first part lays the theoretical basis for the subsequent practical part, describing the objectives and instruments of health and drug policies and, in more detail, price regulation in the Czech Republic as a tool of drug policy. The practical part consists of two sections. The first covers three models showing how a maximum producer price can be set, and serves as the starting point for the second. The second analyses administrative proceedings in which the marketing authorisation holder (MAH) of a medicine exercised its right to comment on the supporting documents. The analysis shows that price controls are not efficient in these cases and that they burden society's interests. The thesis also offers recommendations for increasing efficiency.
7

Ruotimaa, Jenny. "Are seals willing to pay for access to artificial kelp and live fish?" Thesis, Linköping University, The Department of Physics, Chemistry and Biology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10340.

Abstract:

Environmental enrichment (EE) is used to improve the wellbeing of animals in human care. One way of testing what resources an animal prefers to have access to is to make it pay a price, in the form of time or energy spent to get access to the resource. When measuring the motivation of animals it is useful to compare the resource to be evaluated with a resource of known value; food is often the comparator. The maximum price paid approach measures the highest price an animal is willing to pay for access to a resource. In this study the motivation of a grey seal (Halichoerus grypus) for getting access to artificial kelp and live fish was measured, with food as the comparator. A large net cage with a weighted entrance and a non-weighted exit gate was used as the test arena. The seal had to enter it by opening the entrance gate, which was loaded with increasing weights every day, in 10 steps up to 65 kg. The seal was not willing to pay any price for the live fish. The maximum price paid for the food was 60 kg, and for the artificial kelp 10 kg, i.e. 17% of the maximum price paid for food. The results suggest that neither live fish nor artificial kelp was an attractive EE for this seal. However, the study also shows that spring (the reproductive period) is not a good time to test motivation in grey seals.

8

Holmgren, Mary. "A method to evaluate environmental enrichments for Asian elephants (Elephas maximus) in zoos." Thesis, Linköping University, The Department of Physics, Chemistry and Biology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11902.

Abstract:

Environmental enrichment (EE) is used to improve the life of captive animals by giving them more opportunities to express species-specific behaviours. Zoo elephants are among the species in greatest need of EE because their environments are often barren. Before making an EE permanent, however, it is wise to test first whether it works as intended, to save time and money. Maximum price paid is one measure that can be used to assess whether an animal has any interest in a resource at all. Food is often used as a comparator against EEs in these kinds of studies. The aim was to investigate whether the maximum price paid concept could be used to measure the value of EEs for the two female Asian elephants at Kolmården, and to find an operant test suitable for the experimental trials. Three series of food trials were done with each elephant, in which they had to lift weights by pulling a rope with their mouth to get access to 5 kg of hay. The elephants paid maximum prices of 372 kg and 227 kg, respectively. However, the maximum price the elephants paid for access to the hay was not stable across the three series of trials. Hence it is recommended that the comparator trials be repeated close in time to the EEs to be tested. The readiness with which these elephants performed the task makes it worthwhile to pursue this approach further as one of the means to improve the well-being of zoo elephants.

9

Sachdeva, Sandeep. "Development of a branch and price approach involving vertex cloning to solve the maximum weighted independent set problem." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/3251.

Abstract:
We propose a novel branch-and-price (B&P) approach to solve the maximum weighted independent set problem (MWISP). Our approach uses clones of vertices to create edge-disjoint partitions from vertex-disjoint partitions. We solve the MWISP on sub-problems based on these edge-disjoint partitions using a B&P framework, which coordinates sub-problem solutions by involving an equivalence relationship between a vertex and each of its clones. We present test results for standard instances and randomly generated graphs for comparison. We show analytically and computationally that our approach gives tight bounds and it solves both dense and sparse graphs quite quickly.
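The vertex-cloning idea in this abstract can be illustrated with a small sketch. The following Python is a hypothetical reading of the construction, not the author's implementation: each edge is assigned to the block containing its first endpoint, and endpoints falling outside that block enter the subproblem as clones, so the subproblems' edge sets partition the original edge set (the B&P coordination constraints equating a vertex with its clones are not modelled here):

```python
def edge_disjoint_subproblems(edges, blocks):
    """Assign every edge to exactly one vertex block; endpoints that fall
    outside the chosen block are added as clones of the original vertex."""
    block_of = {v: i for i, blk in enumerate(blocks) for v in blk}
    subs = [{"vertices": set(blk), "clones": set(), "edges": []}
            for blk in blocks]
    for u, v in edges:
        i = block_of[u]            # the edge follows its first endpoint
        subs[i]["edges"].append((u, v))
        if block_of[v] != i:       # v lives in another block -> clone it
            subs[i]["clones"].add(v)
    return subs

# A 4-cycle split into two blocks; each crossing edge forces one clone.
blocks = [{0, 1}, {2, 3}]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
for sub in edge_disjoint_subproblems(edges, blocks):
    print(sub)
```

Because every edge lands in exactly one subproblem, the subgraphs are edge-disjoint even though cloned vertices appear in more than one of them.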
10

Lovreta, Lidija. "Structural Credit Risk Models: Estimation and Applications." Doctoral thesis, Universitat Ramon Llull, 2010. http://hdl.handle.net/10803/9180.

Abstract:
Credit risk is associated with the potential failure of borrowers to fulfil their obligations, so the main interest of financial institutions is to accurately measure and manage credit risk on a quantitative basis. Responding to this task, this doctoral thesis, entitled "Structural Credit Risk Models: Estimation and Applications", focuses on the practical usefulness of structural credit risk models, which are characterized by an explicit link with economic fundamentals and consequently allow for a broad range of applications. Specifically, the thesis explores the information on credit risk embodied in the stock market and the market for credit derivatives (the CDS market) on the basis of structural credit risk models. The issue addressed in the first chapter is the relative informational content of the stock and CDS markets in terms of credit risk. The analysis focuses on two crucial questions: which of these markets provides more timely information regarding credit risk, and what factors influence the informational content of credit risk indicators (i.e. stock-market-implied credit spreads and CDS spreads). The data set encompasses 94 international companies (40 European, 32 US and 22 Japanese) over the period 2002-2004. The main conclusions uncover the time-varying behaviour of credit risk discovery, a stronger cross-market relationship and stock market leadership at higher levels of credit risk, as well as a positive relationship between the frequency of severe credit deterioration shocks and the probability of CDS market leadership.

The second chapter concentrates on the problem of estimating the latent parameters of structural models. It proposes a new, maximum-likelihood-based iterative algorithm which, on the basis of the log-likelihood function for the time series of equity prices, provides pseudo maximum likelihood estimates of the default barrier and of the value, volatility, and expected return on the firm's assets. The procedure allows for credit risk estimation based only on readily available stock market information and is empirically tested in terms of CDS spread estimation. It is demonstrated empirically that, contrary to the standard ML approach, the proposed method ensures that the default barrier always falls within reasonable bounds. Moreover, theoretical credit spreads based on pseudo ML estimates offer the lowest credit default swap pricing errors when compared to the other options usually considered when determining the default barrier: the standard ML estimate, endogenous value, KMV's default point, and the principal value of debt.

The final, third chapter of the thesis provides further evidence of the performance of the proposed pseudo maximum likelihood procedure and addresses the presence of a non-default component in CDS spreads. Specifically, it analyzes the effect of demand-supply imbalance, an important aspect of liquidity in a market where the number of buyers frequently outstrips the number of sellers. The data set is largely extended, covering 163 non-financial companies (92 European and 71 North American) over the period 2002-2008. In a nutshell, after controlling for fundamentals reflected through theoretical, stock-market-implied credit spreads, demand-supply imbalance factors turn out to be important in explaining short-run CDS movements, especially during structural breaks. The results illustrate that CDS spreads reflect not only the price of credit protection, but also a premium for the anticipated cost of unwinding the position of protection sellers.
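As background for the estimation problem in the second chapter, the classical two-step iteration for Merton-type models (invert asset values from observed equity prices, re-estimate asset volatility from those values, repeat) can be sketched as follows. This is a simplified relative of the pseudo-ML procedure, not the thesis's method: the default barrier is fixed at the face value of debt D, and all inputs are made-up illustrations:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity(V, D, r, T, sigma):
    """Equity value as a European call on firm assets V, strike = debt D."""
    d1 = (math.log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return V * phi(d1) - D * math.exp(-r * T) * phi(d2)

def implied_assets(E, D, r, T, sigma):
    """Invert E = merton_equity(V, ...) for V by bisection (monotone in V)."""
    lo, hi = 1e-8, 1e12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if merton_equity(mid, D, r, T, sigma) < E:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def iterate_asset_vol(equity_series, D, r, T, sigma0=0.3, tol=1e-6, dt=1 / 252):
    """Alternate between inverting asset values and re-estimating their
    volatility until the volatility estimate stabilises."""
    sigma = sigma0
    for _ in range(100):
        V = [implied_assets(E, D, r, T, sigma) for E in equity_series]
        rets = [math.log(V[i + 1] / V[i]) for i in range(len(V) - 1)]
        mean = sum(rets) / len(rets)
        var = sum((x - mean) ** 2 for x in rets) / (len(rets) - 1)
        new_sigma = math.sqrt(var / dt)
        if abs(new_sigma - sigma) < tol:
            break
        sigma = new_sigma
    return sigma, V

# Made-up daily equity values for a firm with debt face value 100.
E_series = [95.0, 100.0, 98.0, 103.0, 101.0]
sigma_hat, V_hat = iterate_asset_vol(E_series, D=100.0, r=0.02, T=1.0)
print(round(sigma_hat, 3), [round(v, 1) for v in V_hat])
```

The pseudo-ML approach in the thesis goes further by treating the default barrier itself as a parameter of the likelihood rather than pinning it to D.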
11

Al, Rababa'A Abdel Razzaq. "Uncovering hidden information and relations in time series data with wavelet analysis : three case studies in finance." Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/25961.

Abstract:
This thesis aims to provide new insights into the importance of decomposing aggregate time series data using the Maximum Overlap Discrete Wavelet Transform. In particular, the analysis throughout this thesis involves decomposing aggregate financial time series data at hand into approximation (low-frequency) and detail (high-frequency) components. Following this, information and hidden relations can be extracted for different investment horizons, as matched with the detail components. The first study examines the ability of different GARCH models to forecast stock return volatility in eight international stock markets. The results demonstrate that de-noising the returns improves the accuracy of volatility forecasts regardless of the statistical test employed. After de-noising, the asymmetric GARCH approach tends to be preferred, although that result is not universal. Furthermore, wavelet de-noising is found to be more important at the key 99% Value-at-Risk level compared to the 95% level. The second study examines the impact of fourteen macroeconomic news announcements on the stock and bond return dynamic correlation in the U.S. from the day of the announcement up to sixteen days afterwards. Results conducted over the full sample offer very little evidence that macroeconomic news announcements affect the stock-bond return dynamic correlation. However, after controlling for the financial crisis of 2007-2008 several announcements become significant both on the announcement day and afterwards. Furthermore, the study observes that news released early in the day, i.e. before 12 pm, and in the first half of the month, exhibit a slower effect on the dynamic correlation than those released later in the month or later in the day. While several announcements exhibit significance in the 2008 crisis period, only CPI and Housing Starts show significant and consistent effects on the correlation outside the 2001, 2008 and 2011 crises periods. 
The final study investigates whether recent returns and the time-scaled return can predict the subsequent trading in ten stock markets. The study finds little evidence that recent returns do predict the subsequent trading, though this predictability is observed more over the long-run horizon. The study also finds a statistical relation between trading and return over the long-time investment horizons of [8-16] and [16-32] day periods. Yet, this relation is mostly a negative one, only being positive for developing countries. It also tends to be economically stronger during bull-periods.
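To make the decomposition idea concrete: the thesis uses the Maximum Overlap Discrete Wavelet Transform, but the smallest possible illustration of splitting a series into a low-frequency approximation and a high-frequency detail is a one-level Haar transform (an ordinary, decimated DWT, unlike the shift-invariant MODWT); the return series below is invented:

```python
import math

def haar_decompose(x):
    """One-level Haar split of an even-length series into a low-frequency
    approximation and a high-frequency detail."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert haar_decompose exactly."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

returns = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02, 0.0, -0.005]
a, d = haar_decompose(returns)
print(a)   # slow-moving component, half the length of the input
print(d)   # fast-moving component
```

Iterating the split on the approximation gives the multi-level decomposition that lets each detail level be matched with an investment horizon.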
12

Frizzell, Tabitha Jane. "Maximum affordable quota prices, concepts and scenarios for the Ontario dairy industry." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ56325.pdf.

13

Liu, Xiaodong. "Econometrics on interactions-based models methods and applications /." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180283230.

14

Čeňková, Lydie. "Analýza nejvyššího a nejlepšího využití objektu bývalé restaurace v Kroměříži." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-233095.

Abstract:
This diploma thesis deals with the analysis of the highest and best use of a property, applied in practice to a specific example: the building of the former restaurant Slovan in Kroměříž, which is currently unused. Four tests (legal admissibility, physical possibility, financial feasibility and maximum profitability) are performed on the logically probable uses, and on their basis the highest and best use is determined.
15

Cheng, Wei. "Factor Analysis for Stock Performance." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-050405-180040/.

16

Babykina, Evgénia. "Modélisation statistique d'événements récurrents. Exploration empirique des estimateurs, prise en compte d'une covariable temporelle et application aux défaillances des réseaux d'eau." Thesis, Bordeaux 2, 2010. http://www.theses.fr/2010BOR21750/document.

Abstract:
In the context of stochastic modeling of recurrent events, a particular model is explored. This model is based on counting process theory and is built to analyze failures in water distribution networks. In this domain, data on a large number of systems observed during a certain time period are available. Since the systems are installed at different dates, their age is used as the time scale in modeling. The model accounts for incomplete event history, aging of systems, the negative impact of previous failures on the state of systems, and covariates. The model is situated among other approaches to analyzing recurrent events used in biostatistics and in reliability. The model parameters are estimated by the maximum likelihood (ML) method. A method to integrate a time-dependent covariate into the model is developed; the covariate is assumed to be external to the failure process and piecewise constant. Heuristic methods are proposed to account for the influence of this covariate when it is not observed, and methods for data simulation and for estimation in the presence of the time-dependent covariate are proposed. A Monte Carlo study is carried out to empirically assess the ML estimator's properties (normality, bias, variance). The study focuses on the doubly asymptotic nature of the data: asymptotic in the number of systems n and in the duration of observation T. The empirically assessed asymptotic behaviour of the ML estimator agrees with the classical theoretical results in the n-asymptotic direction, whereas the T-asymptotics appears less typical. It is also revealed that the two asymptotic directions, n and T, can be combined into one unique direction: the number of observed events. This concerns the classical model parameters (the coefficients associated with fixed covariates and the parameter characterizing the aging of systems).
The presence of one unique asymptotic direction is not obvious for the time-dependent covariate coefficient or for the parameter characterizing the negative impact of previous events on the future behaviour of a system. The developed methodology is applied to the analysis of failures of water networks. The influence of climatic variations on failure intensity is assessed by a time-dependent covariate. The results show a global improvement in predictions of the future behaviour of the process when the time-dependent covariate is included in the model.
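For orientation, the simplest textbook special case of such a counting-process model is the power-law (Crow-AMSAA) non-homogeneous Poisson process, whose time-truncated MLEs have closed forms. The sketch below is not the thesis's model (which adds covariates and the effect of past failures on future behaviour); the failure ages are invented:

```python
import math

def power_law_nhpp_mle(event_times, T):
    """Closed-form MLEs for a power-law NHPP observed on [0, T]
    (time-truncated case): intensity(t) = (beta/theta) * (t/theta)**(beta - 1).
    beta > 1 means the failure intensity grows with system age."""
    n = len(event_times)
    beta = n / sum(math.log(T / t) for t in event_times)
    theta = T / n ** (1.0 / beta)
    return beta, theta

# Invented failure ages (years) of one pipe observed for 30 years.
times = [5.0, 12.0, 18.0, 22.0, 26.0, 29.0]
beta_hat, theta_hat = power_law_nhpp_mle(times, 30.0)
print(round(beta_hat, 2), round(theta_hat, 2))
```

Here the clustering of failures late in the observation window yields beta above 1, the aging signature the thesis's richer model is designed to capture alongside covariate effects.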
17

Sonono, Masimba Energy. "Applications of conic finance on the South African financial markets /| by Masimba Energy Sonono." Thesis, North-West University, 2012. http://hdl.handle.net/10394/9206.

Full text
Abstract:
Conic finance is a brand new quantitative finance theory. The thesis is on the applications of conic finance to South African financial markets. Conic finance gives a new perspective on the way people should perceive financial markets. Particularly in incomplete markets, where prices are non-unique and residual risk is rampant, conic finance plays a crucial role in providing prices that are acceptable at a stress level. The theory assumes that price depends on the direction of trade and that there are two prices, one for buying from the market called the ask price and one for selling to the market called the bid price. The bid-ask spread reflects the substantial cost of the unhedgeable risk that is present in the market. The hypothesis considered in this thesis is whether conic finance can reduce residual risk. Conic finance models bid-ask prices of cash flows by applying the theory of acceptability indices to cash flows. The theory of acceptability combines elements of arbitrage pricing theory and expected utility theory. By combining the two theories, the set of arbitrage opportunities is extended to the set of all opportunities that a wide range of market participants are prepared to accept. The preferences of the market participants are captured by utility functions. The utility functions lead to the concepts of acceptance sets and the associated coherent risk measures. The acceptance sets (market preferences) are modeled using sets of probability measures. The set accepted by all market participants is the intersection of all the sets, which is convex. The size of this set is characterized by an index of acceptability. This index of acceptability allows one to speak of cash flows acceptable at a level, known as the stress level. The relevant set of probability measures that can value the cash flows properly is found through the use of distortion functions.
In the first chapter, we introduce the theory of conic finance and build a foundation that leads to the problem and objectives of the thesis. In chapter two, we build on the foundation laid in the previous chapter and explain in depth the theory of acceptability indices and coherent risk measures. A brief discussion of coherent risk measures is given here, since the theory of acceptability indices builds on them. It is also in this chapter that some new acceptability indices are introduced. In chapter three, focus is shifted to mathematical tools for financial applications. The chapter can be seen as a prerequisite, as it bridges the gap from mathematical tools in complete markets to incomplete markets, which is the setting that conic finance theory is trying to exploit. As the chapter ends, models used for continuous-time modeling and simulations of stochastic processes are presented. In chapter four, attention is focused on the numerical methods relevant to the thesis. Details on obtaining parameters using the maximum likelihood method and calibrating the parameters to market prices are presented. Next, option pricing by Fourier transform methods is detailed. Finally, a discussion of the bid-ask formulas relevant to the thesis is given. Most of the numerical implementations were carried out in Matlab. Chapter five gives an introduction to the world of option trading strategies. Some illustrations are used to explain the option trading strategies. Explanations of the possible scenarios at the expiration date for the different option strategies are also included. Chapter six is the apex of the thesis, where results from possible real market scenarios are presented and discussed. Only numerical results are reported in the thesis; empirical experiments could not be done due to limited availability of real market data. The findings from the numerical experiments showed that the spreads from conic finance are reduced.
This results in reduced residual risk and a lower cost of entering into the trading strategies. The thesis ends with a formal discussion of the findings and some possible directions for further research in chapter seven.
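The bid-ask computation at the heart of conic finance can be sketched concretely: the bid price is a distorted expectation that overweights bad outcomes, and the ask is minus the bid of the negated cash flow. The snippet below uses the MINMAXVAR distortion of Cherny and Madan on a discrete payoff; the payoff vector and stress level are illustrative assumptions, not taken from the thesis:

```python
def minmaxvar(u, gamma):
    """MINMAXVAR distortion; gamma >= 0 is the stress level (gamma = 0
    gives the identity, i.e. the risk-neutral expectation)."""
    return 1.0 - (1.0 - u ** (1.0 / (1.0 + gamma))) ** (1.0 + gamma)

def bid(samples, gamma):
    """Distorted expectation of equally likely scenario payoffs:
    sort ascending and weight by increments of the distortion."""
    xs = sorted(samples)
    m = len(xs)
    return sum(x * (minmaxvar((i + 1) / m, gamma) - minmaxvar(i / m, gamma))
               for i, x in enumerate(xs))

def ask(samples, gamma):
    # Ask price: what the market charges to sell the cash flow
    return -bid([-x for x in samples], gamma)

payoff = [-2.0, 0.0, 1.0, 3.0]        # equally likely scenario payoffs
b, a = bid(payoff, 0.5), ask(payoff, 0.5)
mid = sum(payoff) / len(payoff)       # value at stress level 0
```

At a positive stress level the bid sits below and the ask above the risk-neutral value, producing the spread the abstract discusses.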
Thesis (MSc (Risk Analysis))--North-West University, Potchefstroom Campus, 2013.
APA, Harvard, Vancouver, ISO, and other styles
18

Attia, Joël. "Recherche du rôle des yeux et du système nerveux dans l'expression des rythmes circadiens de déplacement et de prise alimentaire chez le mollusque gastéropode hélix aspersa maxima." Saint-Etienne, 1996. http://www.theses.fr/1996STET4018.

Full text
Abstract:
This work investigates the role of the eyes and the central nervous system in the control of the circadian rhythms of locomotion and food intake in the snail Helix aspersa maxima (a pulmonate gastropod mollusc). Ablation of both ocular tentacles prevents neither the synchronization of the locomotor rhythm by light cycles (LD 12:12) nor its expression under constant conditions (constant dim red light), with a free-running period of about 25 h 30 min, similar to that of intact animals. The endogenous component, or part of it, would therefore be located outside the eyes. In most animals deprived of a single ocular tentacle, the periodogram detects several periods under constant conditions, which in a few cases clearly correspond to a splitting of the locomotor activity into several components. These results are compatible with the existence of a pacemaker system made up of several oscillators. Ablation of the cerebral ganglia abolishes the locomotor rhythm in previously tentaculectomized animals, whereas the only individual that survived the double ablation of the ocular tentacles and of a single (right) cerebral ganglion displayed a locomotor rhythm under LD 12:12 cycles that persisted under constant conditions. These results suggest the involvement of the cerebral ganglia in the control of the locomotor rhythm. We then searched for territories within the cerebral ganglia whose functioning could be related to the circadian rhythms under study. In this respect, we looked for possible endocrine control. A histological study, using image analysis techniques, revealed nycthemeral variations in the amount of neurosecretory material accumulated in certain mesocerebral neurosecretory cells.
In most animals, these variations are correlated with the peaks of locomotor and feeding activity. Microdestruction experiments could confirm the participation of the mesocerebral cells in the control of locomotor activity.
APA, Harvard, Vancouver, ISO, and other styles
19

Jagelka, Tomáš. "Preferences, Ability, and Personality : Understanding Decision-making Under Risk and Delay." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX028/document.

Full text
Abstract:
Preferences, ability, and personality predict a wide range of economic outcomes. I establish a mapping between them in a structural framework of decision-making under risk and delay, using unique experimental data with information on over 100 incentivized choice tasks for each of more than 1,200 individuals. I jointly estimate population distributions of risk and time preferences, complete with their individual-level stability, and of people's propensity to make mistakes. I am the first to do so using the Random Preference Model (RPM), which has recently been shown to have desirable theoretical properties over previously used frameworks. I show that the RPM has high internal validity. The five estimated structural parameters largely dominate a wide range of demographic and socio-economic variables when it comes to explaining observed individual choices between risky lotteries and time-separated payments. I demonstrate the economic and econometric significance of appending shocks directly to preferences and of incorporating the trembling-hand parameter, their necessary complement in this framework. Mistakes and preference instability are not only separately identified; they are also linked to different cognitive and non-cognitive skills. I propose a Rationality Index which condenses them into a single indicator predictive of welfare loss. I use a factor model to extract cognitive ability and Big Five personality traits from noisy measures. They explain up to 50% of the variation both in average preferences and in individuals' capacity to make consistent rational choices. Conscientiousness alone explains 45% and 10% of the cross-sectional variation in discount rates and risk aversion, respectively, as well as 20% of the variation in their individual-level stability. Furthermore, risk aversion is related to extraversion, and mistakes are a function of cognitive ability, task design, and effort. Preferences are stable for the median individual.
Nevertheless, a part of the population exhibits some degree of preference instability consistent with imperfect self-knowledge. These results have implications both for specifying reduced-form and structural economic models, and for explaining inequality and the inter-generational transmission of socioeconomic status.
APA, Harvard, Vancouver, ISO, and other styles
20

Čížek, Ondřej. "Makroekonometrický model měnové politiky." Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-165290.

Full text
Abstract:
First of all, general principles of contemporary macroeconometric models are described in this dissertation, together with a brief sketch of alternative approaches. Subsequently, a macroeconomic model of monetary policy is formulated in order to describe the fundamental relationships between the real and nominal economy. The model originated from a linear one by making some of the parameters endogenous. Despite this nonlinearity, the model can be expressed in state-space form with time-varying coefficients, which can be handled by a standard Kalman filter. Using the output of this algorithm, the likelihood function is calculated and maximized in order to obtain estimates of the parameters. The theory of identifiability of a parametric structure is also described. Finally, the presented theory is applied to the formulated model of the euro area. In this model, the European Central Bank was assumed to behave according to the Taylor rule. The econometric estimation, however, showed that this common assumption in macroeconomic modeling is not adequate in this case. The results from the econometric estimation and the analysis of identifiability also indicated that the interest rate policy of the European Central Bank has only a very limited effect on the real economic activity of the European Union. Both results are consequential, as monetary policy in the last two decades has been modeled as interest rate policy following the Taylor rule in most macroeconometric models.
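The estimation strategy described here (a Kalman filter over a state-space form, with the likelihood built from the prediction errors and then maximized) can be sketched on a minimal scalar local-level model. The data and the crude grid search below are illustrative stand-ins, not the euro-area model of the dissertation:

```python
import math

def kalman_loglik(y, q, r, x0=0.0, p0=1.0):
    """Log-likelihood of the scalar local-level state-space model
    x_t = x_{t-1} + w_t (var q),  y_t = x_t + v_t (var r),
    via the prediction-error decomposition of the Kalman filter."""
    x, p, ll = x0, p0, 0.0
    for obs in y:
        p = p + q                      # time update (random-walk state)
        f = p + r                      # innovation variance
        e = obs - x                    # innovation (prediction error)
        ll += -0.5 * (math.log(2 * math.pi * f) + e * e / f)
        k = p / f                      # Kalman gain
        x = x + k * e                  # measurement update
        p = (1 - k) * p
    return ll

y = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6]
# A crude grid search over the state variance stands in for the ML step.
best_q = max((q / 10 for q in range(1, 21)), key=lambda q: kalman_loglik(y, q, 0.1))
```

In a full application one would maximize over all free parameters with a numerical optimizer; the structure (filter inside, optimizer outside) is the same.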
APA, Harvard, Vancouver, ISO, and other styles
21

Lin, Jian-hua, and 林建樺. "Study on Market Price and Economic Quantity of Maximum Profit." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/06420015731677419656.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Management Technology
86
The paper presents a geometric programming (GP) approach to find the profit-maximizing selling price and economic quantity for a retailer. Demand is treated as a nonlinear function of price with constant elasticity. The paper considers this selling price/economic quantity issue in the context of linking production and marketing decisions. More importantly, the paper provides sensitivity analysis for profit. These sensitivity results provide additional important managerial implications for pricing and economic quantity decisions.
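The structure of this pricing problem is well known: constant-elasticity demand D(p) = a·p^(-e) combined with EOQ-type setup and holding costs. The sketch below solves an illustrative instance by grid search rather than geometric programming; all parameter values are assumptions, not the paper's:

```python
import math

# Illustrative parameters (not from the thesis)
a, e = 5000.0, 2.5        # demand D(p) = a * p**(-e), elasticity e > 1
c, K, h = 4.0, 50.0, 1.0  # unit cost, setup cost, holding cost per unit time

def demand(p):
    return a * p ** (-e)

def eoq(p):
    # Economic order quantity for the demand rate induced by price p
    return math.sqrt(2 * K * demand(p) / h)

def profit(p):
    d = demand(p)
    return (p - c) * d - K * d / eoq(p) - h * eoq(p) / 2

# Fine grid search over price; the GP approach obtains this analytically.
prices = [2 + i * 0.001 for i in range(18001)]   # 2.00 .. 20.00
p_star = max(prices, key=profit)
markup_only = c * e / (e - 1)   # optimal price when inventory costs are ignored
```

With inventory costs included, the optimal price exceeds the pure constant-elasticity markup c·e/(e-1), which is exactly the kind of interaction between marketing (price) and production (lot size) decisions the abstract points to.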
APA, Harvard, Vancouver, ISO, and other styles
22

Yang, Jian. "Stochastic volatility models : option price approximation, asymptotics and maximum likelihood estimation /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3223755.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006.
Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3841. Advisers: Richard B. Sowers; Neil D. Pearson. Includes bibliographical references (leaves 67-70). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Kuo-Ssu, and 陳國司. "The Pricing and Applications of Guarantee Maximum Price Clause and Target Price Clause in Cost Plus Contract." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/34557419953530571713.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Civil Engineering
91
In project contracting, there are two major categories of contract pricing: the fixed price type and the cost plus type. Generally speaking, cost control in a cost plus contract is less effective than in a fixed price contract: since the owner bears all the risk of cost variations in a cost plus contract, the contractor has no incentive to control the contract cost. Owners therefore sometimes use GMP (Guarantee Maximum Price) or TP (Target Price) clauses to ensure that contractors are motivated to control cost effectively in a cost plus contract. In practice, however, the financial impacts of GMP and TP clauses are evaluated mainly based on experience instead of quantitative methodologies. As a result, it is difficult for both owner and contractor to use the GMP and TP clauses fairly and properly. This research develops a quantitative GMP and TP pricing model based on option pricing methods and real options. By balancing the values or financial impacts of GMP and TP, strategies for using GMP and TP clauses for better risk allocation and management are proposed.
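The option-pricing view of a GMP clause can be sketched as follows: to the owner, a guaranteed maximum price G is like holding a call option on final project cost, since the contractor absorbs max(C - G, 0). The lognormal cost model, Monte Carlo valuation, and all numbers below are illustrative assumptions, not the model developed in the thesis:

```python
import math
import random

def gmp_value(expected_cost, cost_vol, gmp_cap, r, T, n_paths, rng):
    """Monte Carlo value of a GMP cap under a lognormal final-cost model:
    the owner's saving on each path is max(C - G, 0), discounted at r."""
    # Drift chosen so that E[C] = expected_cost
    drift = math.log(expected_cost) - 0.5 * cost_vol ** 2 * T
    payoff = 0.0
    for _ in range(n_paths):
        c = math.exp(drift + cost_vol * math.sqrt(T) * rng.gauss(0, 1))
        payoff += max(c - gmp_cap, 0.0)
    return math.exp(-r * T) * payoff / n_paths

rng = random.Random(1)
v_tight = gmp_value(100.0, 0.2, 100.0, 0.03, 1.0, 50_000, rng)  # cap at expected cost
v_loose = gmp_value(100.0, 0.2, 120.0, 0.03, 1.0, 50_000, rng)  # cap 20% above
```

A tighter cap transfers more cost risk to the contractor and is therefore worth more to the owner; balancing this value against a TP incentive payment is the trade-off the thesis quantifies.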
APA, Harvard, Vancouver, ISO, and other styles
24

Correa, José R., Andreas S. Schulz, and Nicolás E. Stier-Moses. "Computational Complexity, Fairness, and the Price of Anarchy of the Maximum Latency Problem." 2004. http://hdl.handle.net/1721.1/5051.

Full text
Abstract:
We study the problem of minimizing the maximum latency of flows in networks with congestion. We show that this problem is NP-hard, even when all arc latency functions are linear and there is a single source and sink. Still, one can prove that an optimal flow and an equilibrium flow share a desirable property in this situation: all flow-carrying paths have the same length; i.e., these solutions are "fair," which is in general not true for the optimal flow in networks with nonlinear latency functions. In addition, the maximum latency of the Nash equilibrium, which can be computed efficiently, is within a constant factor of that of an optimal solution. That is, the so-called price of anarchy is bounded. In contrast, we present a family of instances that shows that the price of anarchy is unbounded for instances with multiple sources and a single sink, even in networks with linear latencies. Finally, we show that an s-t-flow that is optimal with respect to the average latency objective is near optimal for the maximum latency objective, and it is close to being fair. Conversely, the average latency of a flow minimizing the maximum latency is also within a constant factor of that of a flow minimizing the average latency.
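The fairness claims above can be checked on a toy instance: two parallel s-t links with linear latencies and unit demand. The instance is illustrative, not from the paper:

```python
# Two parallel s-t links; demand 1 is split as (x, 1 - x).
def l1(x):          # latency on link 1 at flow x
    return 2 * x

def l2(x):          # latency on link 2 at flow x
    return x + 1

def max_latency(x):
    # Maximum latency over flow-carrying paths only
    lat = []
    if x > 0:
        lat.append(l1(x))
    if x < 1:
        lat.append(l2(1 - x))
    return max(lat)

def avg_latency(x):
    return x * l1(x) + (1 - x) * l2(1 - x)

grid = [i / 10000 for i in range(10001)]
# Wardrop equilibrium: used paths have equal latency -> 2x = (1-x)+1 -> x = 2/3
x_eq = 2 / 3
x_avg = min(grid, key=avg_latency)       # flow minimizing average latency
x_minmax = min(grid, key=max_latency)    # flow minimizing maximum latency
```

In this instance the equilibrium is fair (both used links have latency 4/3) and happens to minimize the maximum latency, while the average-latency-optimal flow is unfair: it leaves one path with latency 1 and the other with 1.5.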
APA, Harvard, Vancouver, ISO, and other styles
25

Sie, Zong-Sian, and 謝宗憲. "A research of the relationship between price and leadtime with maximum profit in a single production line with one product." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/98113902927000066060.

Full text
Abstract:
碩士
雲林科技大學
工業工程與管理研究所碩士班
99
Price and delivery date are two important factors that affect customers' demand behavior. When the price is low and the delivery date is near, the product is more competitive and demand is higher. This research studies how to set the price and delivery date for each type of customer to maximize system profit in a non-preemptive, single production line setting with two customer types. In addition, we study how the optimal system profit and the optimal prices and delivery dates are affected by the system parameters.
APA, Harvard, Vancouver, ISO, and other styles
26

Byrne, David P. R. "An Empirical Study of the Causes and Consequences of Mergers in the Canadian Cable Television Industry." Thesis, 2010. http://hdl.handle.net/1974/6240.

Full text
Abstract:
This dissertation consists of three essays that study mergers and consolidation in the Canadian cable television industry. The first essay provides a historical overview of regulatory and technical change in the industry, and presents the dataset that I constructed for this study. The basic pattern of interest in the data is regional consolidation, where dominant cable companies grow over time by acquiring the cable systems of small cable operators. I perform a reduced-form empirical analysis that formally studies the determinants of mergers, and the effect that acquisitions have on the cable bundles offered to consumers. The remaining essays develop and estimate structural econometric models to further study the determinants and welfare consequences of mergers in the industry. The second essay estimates an empirical analogue of the Farrell and Scotchmer (1988) coalition-formation game. I use the estimated model to measure the equilibrium impact that economies of scale and agglomeration have on firms' acquisition incentives. I also study the impact entry and merger subsidies have on consolidation and long-run market structure. The final chapter estimates a variant of the Rochet and Stole (2002) model of multi-product monopoly with endogenous quality and prices. Using the estimated model I compute the impact mergers have on welfare. I find that both consumer and producer surplus rise with acquisitions. I also show that accounting for changes both in prices and products (i.e., cable bundle quality) is important for measuring the welfare impact of mergers.
Thesis (Ph.D, Economics) -- Queen's University, 2010-12-09 14:39:15.431
APA, Harvard, Vancouver, ISO, and other styles
27

Ying, Zhi Wei. "The relation between maximal prime divisors and maximal members of Krull associated primes." 2003. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0021-2603200719135501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Wei, Ying Zhi, and 應志偉. "The relation between maximal prime divisors and maximal members of Krull associated primes." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/03067057469234277437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sidibé, Abdoul Karim. "Three essays in microeconomic theory." Thesis, 2020. http://hdl.handle.net/1866/24659.

Full text
Abstract:
This thesis is a collection of three articles on microeconomic theory. The first two articles are concerned with the issue of race-to-the-bottom when governments engage in competition for some mobile factor. The third article proposes an extension of the many-to-one matching problem by introducing different-size agents. In the first article, we show how the standard race-to-the-bottom result can be avoided by introducing a public good into a tax competition model. Our economy has two jurisdictions populated by perfectly mobile workers divided into two categories: skilled and unskilled. Governments, in pursuit of a Rawlsian objective (max-min), simultaneously announce their plans for investing in the public good before deploying a nonlinear income tax schedule. After observing the tax schedules of the governments and their promises to invest in the public good, each worker chooses a place of residence and a supply of labour. Thus, governments achieve their redistribution objectives by seeking to attract productive labour through the provision of public goods in addition to a favorable taxation policy. We show that there exist equilibria where skilled workers pay a strictly positive tax. In addition, when information on the type of workers is private, there are equilibria for certain parameter values in which unskilled workers receive a net transfer (or subsidy) from the government. In the second article, we investigate how standard Bertrand price competition with differentiated products can provide useful insight for Citizenship By Investment programs in the Caribbean. We show that when countries can be classified into two types according to the size of their demand, imposing an appropriate uniform minimum price and maximum quota brings countries to an efficient outcome that Pareto dominates the non-cooperative Nash equilibrium. Finally, in the third article, we explore an extension of the standard many-to-one matching problem by incorporating different-size agents (refugee families) on the many side of the market, to be assigned to entities (homes) with different capacities on the other side. A specific feature of this model is that it does not allow refugee families to be split between several homes. It is well known that many of the desirable properties of matching rules are unachievable in this framework. We introduce a size-monotonic priority ranking over refugee families for each home; that is, a home would always prefer a greater number of members of refugee families until its capacity constraint binds. We show that a pairwise stable matching always exists under this assumption and we propose a mechanism to find it. We show that our mechanism is strategy-proof for refugees: no refugee family could benefit from misrepresenting its preferences. Our mechanism is also refugee-optimal pairwise stable in the sense that there is no other pairwise stable mechanism that would be more profitable to all refugees.
APA, Harvard, Vancouver, ISO, and other styles
30

Šubáková, Dominika. "Caps on Loan-to-Value ratio: Can they reduce housing bubble and credit growth?" Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-347801.

Full text
Abstract:
An increasing trend of using the macroprudential instrument of caps on the loan-to-value (LTV) ratio requires a full understanding of how the instrument works in practice. As empirical research is still scant, this thesis attempts to contribute new evidence on LTV effectiveness in the context of six developed economies, namely the Netherlands, Sweden, Ireland, Hungary, Latvia and Lithuania. To achieve this objective, we analyse the impact of caps on LTV on credit growth, the mortgage credit-to-GDP ratio and price growth. LTV limits are not a harmonised measure, and their national-level implementation includes numerous specificities that can hinder cross-country comparisons. As a result, this thesis proposes the construction of an LTV Index reflecting specific aspects of the measure. Using the LTV Index, we confirmed a slowdown of credit, mortgage and price growth. JEL Classification: E44, E51, E52, E58, G21. Keywords: caps on loan-to-value ratio, maximum LTV ratio, macroprudential policy, credit-related instruments, LTV Index, house price growth, credit growth, financial stability.
APA, Harvard, Vancouver, ISO, and other styles
31

Fernandes, Mário Jorge Correia. "Three essays on modeling energy prices with time-varying volatility and jumps." Doctoral thesis, 2021. http://hdl.handle.net/10071/22640.

Full text
Abstract:
This thesis addresses the modeling of energy prices with time-varying volatility and jumps in three separate and self-contained papers: A. Modeling energy futures volatility through stochastic volatility processes with Markov chain Monte Carlo. This paper studies the volatility dynamics of futures contracts on crude oil, natural gas and electricity. To accomplish this purpose, an appropriate Bayesian model comparison exercise between seven stochastic volatility (SV) models and their counterpart GARCH models is performed, with both classes of time-varying volatility processes being estimated through a Markov chain Monte Carlo technique. A comparison exercise for hedging purposes is also considered by computing the extreme risk measures (using the Conditional Value-at-Risk) of simulated returns from the SV model with the best performance - i.e., the SV model with a t-distribution - and the standard GARCH(1,1) model for the hedging of crude oil, natural gas and electricity positions. Overall, we find that: (i) volatility plays an important role in energy futures markets; (ii) SV models generally outperform their GARCH-family counterparts; (iii) a model with t-distributed innovations generally improves the fitting performance of both classes of time-varying volatility models; (iv) the maturity of futures contracts matters; and (v) the correct specification for the stochastic behavior of futures prices impacts the extreme market risk measures of hedged and unhedged positions. B. How does electrification under energy transition impact the portfolio management of energy firms? This paper presents a novel approach for structuring dependence between electricity and natural gas prices in the context of energy transition: a copula of mean-reverting and jump-diffusion processes.
Based on historical day-ahead prices from the Nord Pool electricity market and the Henry Hub natural gas market, a stochastic model is estimated via the maximum likelihood approach, considering the dependency structure between the innovations of these two-dimensional returns. Given the role of natural gas in the global policy for energy transition, different copula functions are fitted to electricity and natural gas returns. Overall, we find that: (i) using an out-of-sample forecasting exercise, we show that it is important to consider both mean reversion and jumps; (ii) modeling the correlation between the returns of electricity and natural gas prices, while assuring that nonlinear dependencies are satisfied, leads us to the adoption of Gumbel and Student-t copulas; and (iii) without government incentive schemes for renewable electricity projects, the usual maximization of the risk-return trade-off tends to avoid a high exposure to electricity assets. C. Modeling commodity prices under alternative jump processes and fat tails dynamics. The recent fluctuations in commodity prices significantly affected the returns of Oil & Gas (O&G) companies. However, integrated O&G companies are not only exposed to the downturn of oil prices, since a high level of integration allows these firms to hold a portfolio that is not perfectly positively correlated. This paper aims to test several different stochastic processes for modeling the main strategic commodities in integrated O&G companies: Brent, natural gas, jet fuel and diesel. The competing univariate models include the log-normal and double exponential jump-diffusion models, the Variance-Gamma process and the geometric Brownian motion with nonlinear GARCH volatility. Given the effect of correlation between these assets, we also estimate multivariate models, such as the Dynamic Conditional Correlation (DCC) GARCH, DCC-GJR-GARCH and DCC-EGARCH models.
Overall, we find that: (i) the asymmetric conditional heteroskedasticity model substantially improves the performance of the univariate jump-diffusion models; and (ii) the multivariate approaches are the best models for our strategic energy commodities, in particular the DCC-GJR-GARCH model.
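The mean-reverting jump-diffusion dynamics used for electricity prices in paper B can be sketched with a simple Euler scheme on the log-price, dX = kappa*(theta - X) dt + sigma dW + J dN. All parameters below are illustrative, not estimates from the thesis:

```python
import math
import random

# Illustrative parameters: fast mean reversion around a log-level of 30,
# with occasional upward spikes (compound Poisson jumps)
kappa, theta, sigma = 5.0, math.log(30.0), 0.8
jump_rate, jump_mean, jump_sd = 4.0, 0.3, 0.2   # spikes/year, jump size in logs

def simulate_path(x0, years, steps, rng):
    """Euler discretization of the log-price jump-diffusion."""
    dt = years / steps
    x, path = x0, [x0]
    for _ in range(steps):
        jump = 0.0
        if rng.random() < jump_rate * dt:   # at most one jump per small step
            jump = rng.gauss(jump_mean, jump_sd)
        x += kappa * (theta - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1) + jump
        path.append(x)
    return path

rng = random.Random(7)
log_prices = simulate_path(math.log(30.0), years=1.0, steps=2000, rng=rng)
prices = [math.exp(x) for x in log_prices]
```

Mean reversion pulls the price back toward its long-run level after each spike, which is the qualitative behavior of day-ahead electricity prices the paper's copula model builds on.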
APA, Harvard, Vancouver, ISO, and other styles
32

Hasselberger, Hannes. "The existence of infinitely many closed geodesics on a Riemannian manifold, containing an isolated prime closed geodesic with maximal index growth." 2012. https://ul.qucosa.de/id/qucosa%3A16551.

Full text
Abstract:
There are two main approaches to the problem of finding closed geodesics on a Riemannian manifold M. The variational approach views a closed geodesic as a closed curve which happens to be a geodesic and looks for critical points of the energy functional, while the dynamical systems approach views a closed geodesic as a geodesic which happens to close up and looks for periodic orbits of the geodesic flow on the unit tangent bundle.
APA, Harvard, Vancouver, ISO, and other styles