Follow this link to see other types of publications on the topic: Expected.

Theses on the topic "Expected"

Create an accurate citation in APA, MLA, Chicago, Harvard and other styles

Consult the top 50 theses for your research on the topic "Expected."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Irvine, Michael. "Expected satiation and expected satiety : an exploration of their correlates and causes". Thesis, University of Bristol, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.535171.

Full text
2

Gee, Max. "Rationality and Expected Utility". Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733384.

Full text
Abstract

We commonly make a distinction between what we simply tend to do and what we would have done had we undergone an ideal reasoning process — or, in other words, what we would have done if we were perfectly rational. Formal decision theories, like Expected Utility Theory or Risk-Weighted Expected Utility Theory, have been used to model the considerations that govern rational behavior.

But questions arise when we try to articulate what this kind of modeling amounts to. Firstly, it is not clear how the components of the formal model correspond to real-world psychological or physical facts that ground judgments about what we ought to do. Secondly, there is a great deal of debate surrounding what an accurate model of rationality would look like. Theorists disagree about how much flexibility a rational agent has in weighing the risk of a loss against the value of potential gains, for example.

The goal of this project is to provide an interpretation of Expected Utility Theory whereby it explicates or represents the pressure that fundamentally governs how human agents ought to behave. That means both articulating how the components of the formal model correspond to real-world facts, and defending Expected Utility Theory against alternative formal models of rationality.
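The core formalism the abstract refers to can be illustrated with a toy calculation (my sketch, not taken from the thesis): an Expected Utility maximizer with a concave utility function prefers a sure payoff over a risky lottery with a slightly higher expected value.

```python
# Illustrative sketch of Expected Utility Theory (not the thesis's model).
import math

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in lottery)

u = math.sqrt  # a standard concave (risk-averse) utility function

safe  = [(1.0, 49.0)]               # 49 for sure
risky = [(0.5, 100.0), (0.5, 0.0)]  # coin flip between 100 and 0

# The risky lottery has the higher expected value (50 vs 49), yet the
# risk-averse agent prefers the sure thing: sqrt(49) = 7 > 0.5 * 10 = 5.
assert expected_utility(safe, u) > expected_utility(risky, u)
```

Risk-weighted variants, such as the Risk-Weighted Expected Utility Theory mentioned above, generalize this by letting the agent's attitude toward the probability of a loss enter separately from the utility of outcomes.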

3

Dardanoni, V. "Implications of expected utility maximisation". Thesis, University of York, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383880.

Full text
4

Malamatos, Theocharis. "Expected-case planar point location /". View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20MALAMA.

Full text
5

Birsel, Murat H. "Expected Utility and Intraalliance War". Thesis, North Texas State University, 1987. https://digital.library.unt.edu/ark:/67531/metadc504224/.

Full text
6

Edberg, Patrik and Benjamin Käck. "Non-parametric backtesting of expected shortfall". Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207009.

Full text
Abstract
Since the Basel Committee on Banking Supervision first suggested a transition to Expected Shortfall as the primary risk measure for financial institutions, the question of how to backtest it has been widely discussed. Still, there is a lack of studies that compare the different proposed backtesting methods. This thesis uses simulations and empirical data to evaluate the performance of non-parametric backtests under different circumstances. An important takeaway from the thesis is that the different backtests all make some kind of trade-off between measuring the number of Value at Risk exceedances and their magnitudes. The main finding of this thesis is a list ranking the non-parametric backtests. This list can be used to choose a backtesting method by cross-referencing it against what is possible to implement given the estimation method that the financial institution uses.
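The quantities these backtests compare can be sketched in a few lines (a minimal illustration with simulated data, not the thesis's methodology): Expected Shortfall estimated by historical simulation, alongside the Value at Risk exceedance count that many non-parametric backtests build on.

```python
# Minimal sketch: historical-simulation VaR/ES and the exceedance count.
# Simulated standard-normal losses stand in for a real loss series.
import numpy as np

def var_es(losses, alpha=0.975):
    """Historical-simulation VaR and ES at confidence level alpha."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)   # VaR: the alpha-quantile of losses
    tail = losses[losses >= var]       # losses at or beyond VaR
    return var, tail.mean()            # ES: the average loss in that tail

rng = np.random.default_rng(0)
losses = rng.standard_normal(10_000)
var, es = var_es(losses)
exceedances = int(np.sum(losses > var))  # count used by simple backtests
print(var, es, exceedances)
```

ES is always at least as large as VaR, since it averages the losses beyond the VaR threshold; backtests differ mainly in how they weight the number of exceedances against their magnitudes.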
7

Wong, Ka Chun. "Optimal expected-case planar point location /". View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?COMP%202005%20WONG.

Full text
8

Galron, Daniel A. "Expected robustness in dining philosophers algorithms". Connect to resource, 2006. http://hdl.handle.net/1811/6479.

Full text
Abstract
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains iv, 103.; also includes graphics. Includes bibliographical references (p. 103). Available online via Ohio State University's Knowledge Bank.
9

Kapadia, Nishad Ghysels Eric. "Skewness, idiosyncratic volatility and expected returns". Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,1128.

Full text
Abstract
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2007.
Title from electronic title page (viewed Mar. 27, 2008). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Kenan-Flagler Business School Finance." Discipline: Business Administration; Department/School: Business School, Kenan-Flagler.
10

Widekind, Sven von. "Evolution of non-expected utility preferences". Berlin Heidelberg Springer, 2007. http://d-nb.info/986059773/04.

Full text
11

Widekind, Sven von. "Evolution of non-expected utility preferences /". Berlin [u.a.] : Springer, 2008. http://www.gbv.de/dms/bs/toc/547648979.pdf.

Full text
12

Zhang, Xinyi. "Expected lengths of minimum spanning trees". Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 139 p, 2008. http://proquest.umi.com/pqdweb?did=1597617641&sid=6&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
13

Dancík, Vladimír. "Expected length of longest common subsequences". Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/107547/.

Full text
Abstract
A longest common subsequence of two sequences is a sequence that is a subsequence of both the given sequences and has the largest possible length. It is known that the expected length of a longest common subsequence is proportional to the length of the given sequences. The proportion, denoted by γ_k, depends on the alphabet size k, and its exact value is not known even for a binary alphabet. To obtain lower bounds for the constants γ_k, finite state machines computing a common subsequence of the inputs are built. Analysing the behaviour of the machines for random inputs, we get lower bounds for the constants γ_k. The analysis of the machines is based on the theory of Markov chains. An algorithm for automated production of lower bounds is described. To obtain upper bounds for the constants γ_k, collations (pairs of sequences with a marked common subsequence) are defined. Upper bounds for the number of collations of 'small size' can be easily transformed into upper bounds for the constants γ_k. Combinatorial analysis is used to bound the number of collations. The methods used for producing bounds on the expected length of a common subsequence of two sequences are also applied to other problems, namely a longest common subsequence of several sequences, a shortest common supersequence and maximal adaptability.
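The quantity being bounded can be made concrete with the classic dynamic program for LCS length and a quick Monte Carlo estimate of the ratio E[LCS]/n for random binary sequences (my sketch, far cruder than the thesis's machinery; the exact limit γ_2 is unknown, with estimates around 0.81).

```python
# LCS length via dynamic programming, plus a Monte Carlo estimate of
# the expected LCS-to-length ratio for random binary sequences.
import random

def lcs_length(a, b):
    """Length of a longest common subsequence, O(len(a) * len(b))."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

random.seed(0)
n, trials = 200, 20
ratios = []
for _ in range(trials):
    a = [random.randint(0, 1) for _ in range(n)]
    b = [random.randint(0, 1) for _ in range(n)]
    ratios.append(lcs_length(a, b) / n)
print(sum(ratios) / trials)  # roughly 0.8 for the binary alphabet
```

Simulation only gives point estimates for finite n; the thesis's finite-state-machine and collation arguments are what turn this into provable lower and upper bounds on γ_k.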
14

Martin, Philip. "Optimal Expected Values for Cribbage Hands". Scholarship @ Claremont, 2000. https://scholarship.claremont.edu/hmc_theses/122.

Full text
Abstract
The game of Cribbage has a complex way of counting points in the hands that are dealt to each player. Each player has a choice of what cards to keep and what cards to throw into an extra hand, called the crib, that one of the players gets to count towards his score. Ideally, you could try to keep the most points possible in your hand and your crib, or, conversely, the most points in your hand with the fewest points in your opponent’s crib. To add to the fun, a final card is randomly chosen that all three hands share. This thesis deals with finding optimal expected values for each player’s hand and the crib. Unfortunately, finding the exact optimal values is very difficult. However, I have managed to get bounds on the optimal values.
15

Isaksson, Daniel. "Robust portfolio optimization with Expected Shortfall". Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187888.

Full text
Abstract
This thesis project studies robust portfolio optimization with Expected Shortfall applied to a reference portfolio consisting of Swedish linear assets with stocks and a bond index. Specifically, the classical robust optimization definition, focusing on uncertainties in parameters, is extended to also include uncertainty in the log-return distribution. My contribution to the robust optimization community is to study portfolio optimization with Expected Shortfall with log-returns modeled by either elliptical distributions or by a normal copula with asymmetric marginal distributions. The robust optimization problem is solved with worst-case parameters from box and ellipsoidal uncertainty sets constructed from historical data and may be used when an investor has a more conservative view on the market than history suggests. With elliptically distributed log-returns, the optimization problem is equivalent to Markowitz mean-variance optimization, connected through the risk aversion coefficient. The results show that the optimal holding vector is almost independent of the elliptical distribution used to model log-returns, while Expected Shortfall depends strongly on it, with higher Expected Shortfall resulting from fatter distribution tails. To model the tails of the log-returns asymmetrically, generalized Pareto distributions are used together with a normal copula to capture multivariate dependence. In this case, the optimization problem is not equivalent to Markowitz mean-variance optimization and the advantages of using Expected Shortfall as the risk measure are realized. With the asymmetric log-return model there is a noticeable difference in the optimal holding vector compared to the elliptically distributed model. Furthermore, the Expected Shortfall increases, which follows from the better modeled distribution tails.
The general conclusion of this thesis project is that portfolio optimization with Expected Shortfall is an important problem that is advantageous over the Markowitz mean-variance optimization problem when log-returns are modeled with asymmetric distributions. The major drawback of portfolio optimization with Expected Shortfall is that it is a simulation-based optimization problem that introduces statistical uncertainty, and if the log-returns are drawn from a copula the simulation process involves more steps, which can potentially make the program slower than drawing from an elliptical distribution. Thus, portfolio optimization with Expected Shortfall is appropriate to employ when trades are made on a daily basis.
16

Engvall, Johan. "Backtesting expected shortfall: A quantitative evaluation". Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-198471.

Full text
Abstract
How to measure risk is an important question in finance, and much work has been done on how to quantitatively measure risk. An important part of this measurement is evaluating the estimates against the outcomes, a procedure known as backtesting. A common risk measure is Expected Shortfall, for which how to backtest has been debated. In this thesis we compare four different proposed backtests and see how they perform in a realistic setting. The main finding of this thesis is that it is possible to find backtests that perform well, but it is important to investigate them thoroughly, as small errors in the model can lead to large errors in the outcome of the backtest.
17

Holmsäter, Sara and Emelie Malmberg. "Applying Multivariate Expected Shortfall on High Frequency Foreign Exchange Data". Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191004.

Full text
Abstract
This thesis aims at implementing and evaluating the performance of multivariate Expected Shortfall models on high frequency foreign exchange data. The implementation is conducted with a unique portfolio consisting of five foreign exchange rates: EUR/SEK, EUR/NOK, EUR/USD, USD/SEK and USD/NOK. High frequency is in this context defined as observations with time intervals from second by second up to minute by minute. The thesis consists of three main parts. In the first part, the exchange rates are modelled individually with time series models for returns and realized volatility. In the second part, the dependence between the exchange rates is modelled with copulas. In the third part, Expected Shortfall is calculated, the risk contribution of each exchange rate is derived and the models are backtested. The results of the thesis indicate that three of the five final models can be rejected at a 5% significance level if the risk is measured by Expected Shortfall at the 5% level (ES_0.05). The two models that cannot be rejected are based on the Clayton and Student's t copulas, the only two copulas with heavy left tails. The rejected models are based on the Gaussian, Gumbel-Hougaard and Frank copulas. The fact that some of the copula models are rejected emphasizes the importance of choosing an appropriate dependence structure. The risk contribution calculations show that the risk contributions are highest from EUR/NOK and USD/NOK, and that EUR/USD has the lowest risk contribution and even decreases the portfolio risk in some cases. Regarding the underlying models, it is concluded that for the data used in this thesis, the final combined time series and copula models perform quite well, given that the purpose is to measure the risk. However, the most important parts to capture seem to be the fluctuations in the volatilities as well as the tail dependencies between the exchange rates.
Thus, the predictions of the return mean values play a less significant role, even though they still improve the results and are necessary in order to proceed with other parts of the modelling. As future research, we first and foremost recommend including the liquidity aspect in the models.
18

Imazeki, Toyokazu. "Idiosyncratic Risk and Expected Returns in REITs". Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/real_estate_diss/12.

Full text
Abstract
The Modern Portfolio Theory (MPT) argues that all unsystematic risk can be diversified away, so there should be no relationship between idiosyncratic risk and return. Ooi, Wang and Webb (2009) employ the Fama-French (1993) three-factor model (FF3) to estimate the level of nonsystematic return volatility in REITs as a proxy for idiosyncratic risk. They find a significant positive relationship between expected returns and conditionally estimated idiosyncratic risk, contrary to the MPT. In this research, I examine other potential sources of systematic risk in REITs which may explain the seeming violation of the MPT found by Ooi et al. (2009). I re-examine the proportion of idiosyncratic risk in REITs with Carhart's (1997) momentum factor, which in the finance literature is widely applied on top of the FF3 to control for the persistence of stock returns as a supplemental risk factor. Next, I conduct cross-sectional regressions and test the significance of the relationship between idiosyncratic risk and expected returns. I further analyze the role of property sector on idiosyncratic risk as well as on its relationship with expected returns. I draw three conclusions. First, momentum has a relatively minor effect on idiosyncratic risk, consistent with the finance literature. Second, the effect of momentum is not strong enough to cause a significant change in the relationship between idiosyncratic risk and expected returns. Third, a REIT portfolio diversified across property sectors neutralizes the relationship between idiosyncratic risk and expected returns, though the contribution of each property sector is not statistically significant.
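The measurement step underlying this line of work can be sketched as follows (an illustration with simulated data rather than the paper's REIT sample; the three factor series here are made-up stand-ins for the Fama-French factors): idiosyncratic risk is estimated as the volatility of the residuals from a linear factor regression.

```python
# Sketch: idiosyncratic volatility as residual volatility from a
# three-factor regression, using simulated data.
import numpy as np

rng = np.random.default_rng(1)
T = 250                                # roughly one year of daily observations
factors = rng.standard_normal((T, 3))  # stand-ins for MKT, SMB, HML
betas = np.array([1.0, 0.4, -0.2])     # assumed factor loadings
idio = 0.02 * rng.standard_normal(T)   # true idiosyncratic shocks
returns = factors @ betas + idio

# Regress returns on the factors (with an intercept) and take the
# residual standard deviation as the idiosyncratic-risk estimate.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
residuals = returns - X @ coef
print(residuals.std(ddof=X.shape[1]))  # close to the true 0.02
```

Adding a momentum factor, as in Carhart (1997), amounts to appending a fourth column to the factor matrix; the question the paper asks is how much that changes the residual volatility and its cross-sectional relationship with returns.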
19

Affleck, Ian Andrew. "Minimizing expected broadcast time in unreliable networks". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ61619.pdf.

Full text
20

Ni, Hao. "The expected signature of a stochastic process". Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:e0b9e045-4c09-4cb7-ace9-46c4984f16f6.

Full text
Abstract
The signature of a path provides a top-down description of the path in terms of its effects as a control. It is a group-like element in the tensor algebra and is an essential object in rough path theory. When the path is random, the linear independence of the signatures of different paths leads one to expect, and it has been proved in simple cases, that the expected signature captures the complete law of this random variable. It therefore becomes of great interest to be able to compute examples of expected signatures. In this thesis, we aim to compute the expected signature of various stochastic processes by a PDE approach. We consider the case of an Itô diffusion process up to a fixed time, and the case of Brownian motion up to the first exit time from a domain. We derive the PDE for the expected signature in both cases, and find that this PDE system can be solved recursively. Some specific examples are included as well, e.g. Ornstein-Uhlenbeck (OU) processes, Brownian motion, and the Lévy area coupled with Brownian motion.
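For orientation, a benchmark result from this literature (a standard formula, stated here for context rather than taken from the thesis) is Fawcett's formula for the expected signature of a d-dimensional Brownian motion over a fixed interval [0, t]:

```latex
\mathbb{E}\left[ S(B)_{0,t} \right] \;=\; \exp\!\left( \frac{t}{2} \sum_{i=1}^{d} e_i \otimes e_i \right)
```

where the e_i are the standard basis vectors of R^d and the exponential is taken in the tensor algebra. The fixed-time and first-exit-time cases treated in the thesis generalize computations of this kind via PDEs.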
21

Schreder, Max Josef. "Idiosyncratic information and expected rate of returns". Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/idiosyncratic-information-and-expected-rate-of-returns(62f1488b-f9ba-44b7-a224-a39cb7b1cabe).html.

Full text
Abstract
This thesis is situated at the interface between asset pricing and market microstructure theory. Motivated by the seminal work of David Easley and Maureen O'Hara (2004), who present a cohesive framework in which idiosyncratic information impacts firms' cost of equity, I contribute three interrelated research papers to this stream of research. My first paper, "Idiosyncratic Information and Expected Rate of Returns: A Meta-Analytic Review of the Literature", provides a quantitative review of the literature examining the association between firm-specific information and expected rate of returns. Its findings motivate my other two empirical papers. My second paper, "The Impact of Idiosyncratic Information on Expected Rate of Returns: A Structural Equation Modelling Approach", relates to work which tests the empirical validity of information-based return models and examines the question of the extent to which a firm's information environment affects its cost of equity (CoE). Using a structural equation modelling approach, which is novel, it is shown that companies with high (low) quality information environments enjoy relatively lower (higher) CoE than otherwise identical firms; however, the findings also show that the significance of this impact decreases with firm size, maturity and profitability as well as market competition. My third paper, "Implied Cost of Capital and Cross-Sectional Earnings Forecasting Models: Evidence from Newly Listed Firms", pertains to work on implied cost of capital (ICC), which is part of a greater literature on expected rate of returns, and analyses the degree to which earnings forecasting models can be used to derive valid ICC estimates for newly listed firms. Results show that combining the earnings model of Hou et al. (2012, HVZ) with the earnings persistence (EP) model of Li and Mohanram (2014) into one forecasting solution generates less forecast bias, higher earnings response coefficients and more valid ICC estimates vis-à-vis the HVZ, EP and RI (residual income) models stand-alone. This suggests that for smaller and younger firms more complex forecasting solutions are required to ensure the reliability of model-based ICC estimates. The concluding chapter synthesizes the main findings of this thesis, indicates potential avenues for future research and discusses implications for practice.
22

Sherrick, Bruce John. "Option based assessments of expected price distributions /". The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487672631597961.

Full text
23

Reina, Livia. "From Subjective Expected Utility Theory to Bounded Rationality". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2006. http://nbn-resolving.de/urn:nbn:de:swb:14-1140624885934-50567.

Full text
Abstract
As mentioned in the introduction, the objective of this work has been to reach a more realistic understanding of economic decision-making processes by adopting an interdisciplinary approach which takes into consideration economic and psychological issues at the same time. The research has focused in particular on the psychological concept of categorization, which has so far received no attention in standard economic theory, and on its implications for decision making. The three experimental studies conducted in this work provide empirical evidence that individuals do not behave according to the perfect rationality and maximization assumptions which underlie SEUT, but rather as boundedly rational satisficers who try to simplify the decision problems they face through the process of categorization. The results of the first experimental study, on bilateral integrative negotiation, show that most people categorize a continuum of outcomes into two categories (satisfying/not satisfying) and treat all the options within each category as equivalent. This process of categorization leads negotiators to make suboptimal agreements and to what I call the "Zone of Agreement Bias" (ZAB). The experimental study on committees' decision making with logrolling provides evidence of how the categorization of outcomes as satisfying/not satisfying can affect the process of coalition formation in multi-issue decisions. In the first experiment, involving 3-issue and 3-party decisions under majority rule, the categorization of outcomes leads most individuals to form suboptimal coalitions and make Pareto-dominated agreements. The second experiment, aimed at comparing the suboptimizing effect of categorization under majority and unanimity rule, shows that the unanimity rule can lead to a much higher rate of optimal agreements than the majority rule. The third experiment, involving 4-issue and 4-party decisions, provides evidence that the results of experiments 1 and 2 hold even when the level of complexity of the decision problem increases.
24

Distel, Felix and Daniel Borchmann. "Expected Numbers of Proper Premises and Concept Intents". Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-71153.

Full text
25

Yuan, Huang. "Calculation of Expected Shortfall via Filtered Historical Simulation". Thesis, Uppsala universitet, Analys och tillämpad matematik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-154741.

Full text
26

Visockas, Vilius. "Comparing Expected and Real–Time Spotify Service Topology". Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-96352.

Full text
Abstract
Spotify is a music streaming service that allows users to listen to their favourite music. Due to the rapid growth in the number of users, the amount of processing that must be provided by the company's data centers is also growing. This growth in the data centers is necessary, despite the fact that much of the music content is actually sourced from other users based on a peer-to-peer model. Spotify's backend (the infrastructure that Spotify operates to provide their music streaming service) consists of a number of different services, such as track search, storage, and others. As this infrastructure grows, some services may not behave as expected. Therefore it is important not only for Spotify's operations team, also known as the Service Reliability Engineers (SRE) team, but also for developers, to understand exactly how the various services are actually communicating. The problem is challenging because of the scale of the backend network and its rate of growth. In addition, the company aims to grow and expects to expand both the number of users and the amount of content that is available. A steadily increasing feature set and support for additional platforms add to the complexity. Another major challenge is to create tools which are useful to the operations team by providing information in a readily comprehensible way, and hopefully to integrate these tools into their daily routine. The ultimate goal is to design, develop, implement, and evaluate a tool which would help the operations team (and developers) to understand the behavior of the services that are deployed on Spotify's backend network. The most critical function is to alert the operations staff when services are not operating as expected. Because different services are deployed on different servers, the communication between these services is reflected in the network communication between these servers.
In order to understand how the services are behaving when there are potentially many thousands of servers we will look for the patterns in the topology of this communication, rather than looking at the individual servers. This thesis describes the tools that successfully extract these patterns in the topology and compares them to the expected behavior.
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Mehadhebi, Karim. "Linear expected time algorithms for nearest neighbor problems". Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22774.

Texto completo
Resumen
This thesis presents and analyses a bucketing algorithm that finds a proximity graph in linear expected time when the points are independent identically distributed with a Lipschitz density f, provided f satisfies a weak assumption. From this proximity graph one can either find the minimum spanning tree in $O(n \log^* n)$ time or solve the all nearest neighbors problem in $O(n)$ time.
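As an illustrative aside (not the thesis's exact algorithm): the bucketing idea behind linear expected-time nearest-neighbour search can be sketched in 2-D by hashing points into a grid with roughly one point per cell and searching outward in rings of cells. The function name and ring-search details below are made up for the sketch, which assumes points in the unit square.

```python
import math
from collections import defaultdict

def all_nearest_neighbors(points):
    """All-nearest-neighbors via grid bucketing.

    points: list of (x, y) in [0, 1)^2. Each point is hashed into an
    m x m grid (about one point per cell on average); for each query we
    search expanding rings of cells until the closest candidate found
    is provably nearer than any unexamined cell.
    """
    n = len(points)
    m = max(1, int(math.sqrt(n)))        # m x m grid, cell side 1/m
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x * m), int(y * m))].append(i)

    def nn(i):
        cx, cy = int(points[i][0] * m), int(points[i][1] * m)
        best, best_d = None, float("inf")
        r = 0
        while True:
            # examine all cells on the ring at Chebyshev distance r
            for gx in range(cx - r, cx + r + 1):
                for gy in range(cy - r, cy + r + 1):
                    if max(abs(gx - cx), abs(gy - cy)) != r:
                        continue
                    for j in grid.get((gx, gy), []):
                        if j == i:
                            continue
                        d = math.dist(points[i], points[j])
                        if d < best_d:
                            best, best_d = j, d
            # any point in an unexamined ring (> r) is at distance >= r/m,
            # so a candidate within r/m is a true nearest neighbour
            if best is not None and best_d <= r / m:
                return best
            r += 1

    return [nn(i) for i in range(n)]

# Toy usage: point 0 and 1 are mutual neighbours, point 2's nearest is 1
nbrs = all_nearest_neighbors([(0.1, 0.1), (0.12, 0.1), (0.9, 0.9)])
```

Under a Lipschitz density the expected number of points inspected per query is constant, which is what yields the linear expected time the abstract refers to.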
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Solcà, Tatiana. "Expected risk-adjusted return for insurance based models". Zürich : Swiss Federal Institute of Technology Zurich, Department of Mathematics, 2000. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=21.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Nouri, Suhila Lynn. "Expected maximum drawdowns under constant and stochastic volatility". Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050406-151319/.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Dargenidou, Christina. "Accounting conservatism in expected earnings : a European study". Thesis, Bangor University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432055.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Baird, Sierra Marie. "Expected Profiles and Temporal Stability of The LOOK". BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5470.

Texto completo
Resumen
The LOOK is an iOS-based iPad app designed to measure viewing time as an estimate of sexual interest. Participants used a 7-point Likert scale to rate 154 images based on sexual attractiveness. The images belonged to 14 differentiated gender and age categories from infants to elderly adults. Before rating each image, participants were asked to complete an additional task of locating and touching a small dot found in one of the four corners of the screen. This was included to make sure that participants were attending to each image, and to add another level of information to the results. The purpose of this study was to establish the expected reference-group viewing-time patterns and temporal stability of the LOOK for nonpedophilic, exclusively heterosexual, college-age males and females. 56 male and 75 female undergraduate students from BYU psychology classes participated. The expected patterns were established and are similar to previously established sexual attraction patterns, with slight differences due to the additional categories in the LOOK. The results are broken up into three different sections: dot time (the time from when the image appears to when the dot is touched), rate time (the time from when the dot is touched to when the image is rated), and total time (the combined dot and rate time). Results of the analysis indicate that dot time stability is 96.43% for males and 100% for females. Rate time stability is 64.29% for males and 73.33% for females. The total temporal stability is 98.21% for males and 100% for females.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Chen, Harr. "The expected metric principle for probabilistic information retrieval". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/38672.

Texto completo
Resumen
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (leaves 125-128).
Traditionally, information retrieval systems aim to maximize the number of relevant documents returned to a user within some window of the top. For that goal, the Probability Ranking Principle, which ranks documents in decreasing order of probability of relevance, is provably optimal. However, there are many scenarios in which that ranking does not optimize for the user's information need. One example is when the user would be satisfied with some limited number of relevant documents, rather than needing all relevant documents. We show that in such a scenario, an attempt to return many relevant documents can actually reduce the chances of finding any relevant documents. In this thesis, we introduce the Expected Metric Principle, which generalizes the Probability Ranking Principle in a way that intimately connects the evaluation metric and the retrieval model. We observe that given a probabilistic model of relevance, it is appropriate to rank so as to directly optimize these metrics in expectation.
We consider a number of metrics from the literature, such as the rank of the first relevant result, the %no metric that penalizes a system only for retrieving no relevant results near the top, and the diversity of retrieved results when queries have multiple interpretations, as well as introducing our own new metrics. While direct optimization of a metric's expected value may be computationally intractable, we explore heuristic search approaches, and show that a simple approximate greedy optimization algorithm produces rankings for TREC queries that outperform the standard approach based on the probability ranking principle.
by Harr Chen.
S.M.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Sursock, Jean-Paul 1974. "The cross section of expected stock returns revisited". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9218.

Texto completo
Resumen
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000.
Also available online at the DSpace at MIT website.
Includes bibliographical references (leaves 60-61).
We review and extend two important empirical financial studies: Fama and MacBeth [1973] and Fama and French [1992]. Fama and MacBeth [1973] sort stocks on the New York Stock Exchange into 20 portfolios based on their market [beta]. They test for, and conclude that, [beta] does in fact explain the cross-sectional variation in average stock returns for the 1926-1968 period. After we replicate the results in their study we extend their work to the most current data. The coefficients and t-statistics for five-year sub-periods exhibit roughly the same properties during the last half of the century as they did during the period originally studied. Fama and MacBeth report statistically significant results for their overall period (1935-1968) as well. When we run the same test on all the data currently available (1935-1998), we find that the t-statistics are lower, instead of higher, than they were for the 1935-1968 period. We run several variations on the Fama and MacBeth [1973] paper. For example, we vary the exchanges (NYSE, AMEX, and/or NASDAQ) and indexes (value-weighted or equally-weighted) employed. We also study the effect of using robust (least absolute deviation) regressions instead of ordinary least squares. In all cases, the results are similar to those described above. Fama and French [1992] show that, when size is controlled for, market [beta] does not explain the cross-sectional variation in returns for the 1963-1990 period. They find that two other variables, size (market equity) and book-to-market equity, combine to capture the cross-sectional variation in average stock returns during the same period. After replicating their results, we update the study to the most current data. We find that the t-statistics for size and book-to-market equity are more significant during the 1963-1998 period than they were for the 1963-1990 period. We also confirm that [beta] is statistically insignificant during the 1963-1998 period.
by Jean-Paul Sursock.
S.M.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Fennell, John. "An expected utility theory that matches human performance". Thesis, University of Bristol, 2012. http://hdl.handle.net/1983/f1a39859-1cb0-4978-8fcf-d56d0d3fca40.

Texto completo
Resumen
Maximising expected utility has long been accepted as a valid model of rational behaviour; however, it has limited descriptive accuracy simply because, in practice, people do not always behave in the prescribed way. This is considered evidence that people are not rational, that expected utility is not an appropriate characterisation of rationality, or some combination of these. This thesis proposes that a modified form of the expected utility hypothesis is both normative, suggesting how people ought to behave, and descriptive of how they actually do behave, provided that: a) most utility has no meaning unless it is in the presence of potential competitors; b) there is uncertainty in the nature of competitors; c) statements of probability are associated with uncertainty; d) utility is marginalised over uncertainty, with framing effects providing constraints; and e) utility is sensitive to risk, which, taken with reward and uncertainty, suggests a three-dimensional representation. The first part of the thesis investigates the nature of reward in four experiments and proposes that a three-dimensional reward structure (reward, risk, and uncertainty) provides a better description of utility than reward alone. It also proposes that the semantic differential, a well-researched psychological instrument, is a representation or description of the reward structure. The second part of the thesis provides a mathematical model of a value function and a probability weighting function, testing them together against extant problem cases for decision making. It is concluded that utility, perhaps more accurately described as advantage in the present case, when construed as three dimensions and the result of a competition, provides a good explanation of many of the problem cases that are documented in the decision making literature.
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Liu, Chung-Shin. "Impact of Product Market Competition on Expected Returns". Thesis, University of Oregon, 2011. http://hdl.handle.net/1794/12143.

Texto completo
Resumen
x, 94 p. : ill. (some col.)
This paper examines how competition faced by firms affects asset risk and expected returns. Contrary to Hou and Robinson's (2006) findings, I find that cross-industry variation in competition, as measured by the concentration ratio, is not a robust determinant of unconditional expected stock returns. In contrast, within-industry competition, as measured by relative price markup, is positively related to expected stock returns. Moreover, this relation is not captured by commonly used models of expected returns. When using the Markov regime-switching model advocated by Perez-Quiros and Timmermann (2000), I test and find support for Aguerrevere's (2009) recent model of competition and risk dynamics. In particular, systematic risk is greater in more competitive industries during bad times and greater in more concentrated industries during good times. In addition, real investment by firms facing greater competition leads real investment by firms facing less competition, supporting Aguerrevere's notion that less competition results in higher growth options and hence higher risk in good times.
Committee in charge: Dr. Roberto Gutierrez, Chair; Dr. Roberto Gutierrez, Advisor; Dr. Diane Del Guercio, Inside Member; Dr. John Chalmers, Inside Member; Dr. Bruce Blonigen, Outside Member
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Evans, Donald C. III. "Measuring Expected Returns in a Fluid Economic Environment". Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9733.

Texto completo
Resumen
This paper examines the components of the Capital Asset Pricing Model and the model's uses in analyzing portfolio returns. It also looks at subsequent versions of the CAPM, including a multi-variable CAPM with the inclusion of selected macro-variables as well as a non-stationary-beta CAPM, to estimate portfolio returns. A new model is proposed that combines the multi-variable component with the non-stationary-beta component to derive a new CAPM that is more effective at capturing current market conditions than the traditional CAPM with a fixed beta coefficient. The multi-variable CAPM with non-stationary beta is applied, together with the selected macro-variables, to estimate the returns of a portfolio of assets in the oil sector of the economy. It looks at returns during the period 1995-2001, when the economy exhibited a wide range of variation in market returns. This paper tests the hypothesis that adapting the traditional CAPM to include beta non-stationarity will better estimate portfolio returns in a fluid market environment. The empirical results suggest that the new model is statistically significant at measuring portfolio returns. The model is estimated with an Ordinary Least Squares (OLS) estimation process and identifies three factors that are statistically significant: quarterly changes in the Gross Domestic Product (GDP), the Unemployment Rate and the Consumer Price Index (CPI).
Master of Arts
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Lee, Hwayoung. "Portfolio liquidity risk management with expected shortfall constraints". Thesis, University of Essex, 2016. http://repository.essex.ac.uk/17762/.

Texto completo
Resumen
In this thesis we quantify the potential cost of liquidity constraints on a long equity portfolio using the liquidity risk framework of Acerbi and Scandolo (2008). The model modifies the classical mark-to-market valuation model, and incorporates the impact of portfolios' liquidity policies on the liquidity adjustment valuation (LVA). We also suggest a quantitative indicator that scores market liquidity from 0 to 1 (perfect liquidity) for a portfolio with possible liquidity constraints. The thesis consists of three major studies. In the first one, we compute LVA given the cash, minimum weight and portfolio expected shortfall (ES) liquidity policies on a long equity portfolio. Several numerical examples in the results demonstrate the importance of incorporating the liquidity policy in the liquidity risk valuation. In the second study, we quantify the execution costs and the revenue risk when implementing trading strategies over multiple periods by employing the transaction costs measure of Garleanu and Pedersen (2013). The portfolio liquidity costs estimated from the model of Garleanu and Pedersen (2013) are compared with the costs estimated from the liquidity risk measure of Finger (2011). In the third study, we estimate the liquidity-adjusted portfolio ES for a long equity portfolio with liquidity constraints. Portfolio pure market P&L scenarios are based on initial positions, and the liquidity adjustments are based on positions sold, which depend on the specified liquidity constraints. Portfolio pure market P&L scenarios and state-dependent liquidity adjustments are integrated to obtain liquidity-adjusted P&L scenarios. Then, we apply the liquidity score method (Meucci, 2012) to the liquidity-plus-market P&L distribution to quantify the market liquidity of the portfolio. The results show the importance of pricing liquidity risk with liquidity constraints. The liquidity costs can vary greatly with different liquidity policies, portfolio MtM values, market situations and time to liquidation.
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Iakovleva, Anna. "Pricing of CDO Tranches by Means of Implied Expected Loss". Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2198.

Texto completo
Resumen

In this thesis an approach to CDO tranche valuation is described. This approach allows one to check market quotes for arbitrage opportunities, to obtain expected portfolio losses from the market quotes, and to price CDO tranches with non-standard maturities and attachment/detachment points. A significant advantage of this approach is that it avoids the need to construct a correlation structure between the names in the reference basket. Standard approaches to CDO valuation, based on copula functions, are also considered.

Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Li, Xin. "Computer viruses: The threat today and the expected future". Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1998.

Texto completo
Resumen

This Master’s thesis in the area of computer security concerns “Computer viruses: The threat today and the expected future”.

Firstly, the definitions of computer viruses and the related threats are presented. Secondly, the current situation of computer viruses is discussed: the working and spreading mechanisms of computer viruses are reviewed in detail, and the simplistic attitude of the computer world towards computer virus defence is analyzed. Thirdly, today’s influencing factors for near-future computer virus epidemics are explained, and possible new types of computer viruses in the near future are predicted. Furthermore, currently available anti-virus technologies are analyzed with respect to both their advantages and disadvantages. Finally, promising new trends in computer virus defence are explored in detail.

Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Sundgren, David. "Distribution of expected utility in second-order decision analysis". Licentiate thesis, Kista : Data- och systemvetenskap, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4442.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Fredlund, Richard. "A Bayesian expected error reduction approach to Active Learning". Thesis, University of Exeter, 2011. http://hdl.handle.net/10036/3170.

Texto completo
Resumen
There has been growing recent interest in the field of active learning for binary classification. This thesis develops a Bayesian approach to active learning which aims to minimise the objective function on which the learner is evaluated, namely the expected misclassification cost. We call this the expected cost reduction approach to active learning. In this form of active learning, queries are selected by performing a 'lookahead' to evaluate the associated expected misclassification cost. Firstly, we introduce the concept of a query density to explicitly model how new data is sampled. An expected cost reduction framework for active learning is then developed which allows the learner to sample data according to arbitrary query densities. The model makes no assumption of independence between queries, instead updating model parameters on the basis of both which observations were made and how they were sampled. This approach is demonstrated on the probabilistic high-low game, which is a non-separable extension of the high-low game presented by Seung et al. (1993). The results indicate that the Bayes expected cost reduction approach performs significantly better than passive learning even when there is considerable overlap between the class distributions, covering 30% of input space. For the probabilistic high-low game, however, narrow queries appear to consistently outperform wide queries. We therefore conclude the first part of the thesis by investigating whether or not this is always the case, demonstrating examples where sampling broadly is favourable to a single-input query. Secondly, we explore the Bayesian expected cost reduction approach to active learning within the pool-based setting, where learning is limited to a finite pool of unlabelled observations from which the learner may select observations to be queried for class labels.
Our implementation of this approach uses Gaussian process classification with the expectation propagation approximation to make the necessary inferences. The implementation is demonstrated on six benchmark data sets and again demonstrates superior performance to passive learning.
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Erik, Wikström. "Expected Damage of Projectile-Like Spell Effects in Games". Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16672.

Texto completo
Resumen
Background. Many video games make use of particle effects to portray magic abilities known as spells. Different spells may vary greatly in behaviour and colour. Aside from their different appearance, the spells often deal different amounts of damage. Objectives. The aim of this paper is to evaluate how velocity, scale, and direction, as well as the colours orange and blue, affect the expected damage of a projectile-like spell. Methods. A perceptual experiment with a two-alternative forced choice (2AFC) design was conducted, in which participants compared various spells with different values of velocity, scale, direction, and colour. The participants were asked to select the spell that they expected to deal the most damage. Results. Scale had a larger impact on the expected damage of a spell than velocity. The largest and fastest spells with an added sine-based direction in the x-axis were expected to cause the most damage. However, the difference between these spells and the largest and fastest spells without the added direction was not found to be statistically significant. The orange spells were rated as more damage-causing in all cases compared to the blue spells. The differences between the blue and orange preference in two of these cases were, however, not large enough to be statistically significant. Conclusions. The results showed that the visual attributes of a particle-based spell affect its perceived damage, with scale having a greater impact than velocity and orange being the colour most often associated with higher damage. The effect of an added direction could not be evaluated because the results for the direction spells were not statistically significant.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Riedener, Stefan. "Maximising expected value under axiological uncertainty : an axiomatic approach". Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:42856f0c-dfa1-421f-999b-40db7a8120a6.

Texto completo
Resumen
The topic of this thesis is axiological uncertainty - the question of how you should evaluate your options if you are uncertain about which axiology is true. As an answer, I defend Expected Value Maximisation (EVM), the view that one option is better than another if and only if it has the greater expected value across axiologies. More precisely, I explore the axiomatic foundations of this view. I employ results from state-dependent utility theory, extend them in various ways and interpret them accordingly, and thus provide axiomatisations of EVM as a theory of axiological uncertainty. Chapter 1 defends the importance of the problem of axiological uncertainty. Chapter 2 introduces the most basic theorem of this thesis, the Expected Value Theorem. This theorem says that EVM is true if the betterness relation under axiological uncertainty satisfies the von Neumann-Morgenstern axioms and a Pareto condition. I argue that, given certain simplifications and modulo the problem of intertheoretic comparisons, this theorem presents a powerful means to formulate and defend EVM. Chapter 3 then examines the problem of intertheoretic comparisons. I argue that intertheoretic comparisons are generally possible, but that some plausible axiologies may not be comparable in a precise way. The Expected Value Theorem presupposes that all axiologies are comparable in a precise way. So this motivates extending the Expected Value Theorem to make it cover less than fully comparable axiologies. Chapter 4 then examines the concept of a probability distribution over axiologies. In the Expected Value Theorem, this concept figures as a primitive. I argue that we need an account of what it means, and outline and defend an explication for it. Chapter 5 starts to bring together the upshots from the previous three chapters. It extends the Expected Value Theorem by allowing for less than fully comparable axiologies and by dropping the presupposition of probabilities as given primitives. 
Chapter 6 provides formal appendices.
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Mayorga, Rodrigo de Oliveira. "An application of value at risk and expected shortfall". reponame:Repositório Institucional da UFC, 2016. http://www.repositorio.ufc.br/handle/riufc/23104.

Texto completo
Resumen
MAYORGA, Rodrigo de Oliveira. An application of value at risk and expected shortfall / Rodrigo de Oliveira Mayorga. - 2016. 60f. Tese (Doutorado) - Universidade Federal do Ceará, Programa de Pós Graduação em Economia, CAEN, Fortaleza, 2016.
The last two decades have been characterized by significant volatility in the financial world, marked by a few major crises, market crashes, bankruptcies of large corporations and liquidations of major financial institutions. In this context, this study considers Extreme Value Theory (EVT), which provides well-established statistical models for the computation of extreme risk measures such as Value at Risk (VaR) and Expected Shortfall (ES), and examines how EVT can be used to model tail risk measures and related confidence intervals, applying it to daily log-returns on four market indices. These market indices represent the countries with the greatest commercial trade with Brazil over the last decade (China, the U.S. and Argentina). We calculate the daily VaR and ES for the returns of the IBOV, SPX, SHCOMP and MERVAL stock markets from January 2nd 2004 to September 8th 2014, combining EVT with GARCH models. Results show that EVT can be useful for assessing the size of extreme events and that it can be applied to financial market return series. We also verified that MERVAL is the stock market most exposed to extreme losses, followed by the IBOV. The least exposed to daily extreme variations are SPX and SHCOMP.
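As an illustrative aside, the two risk measures this abstract names can be sketched with plain historical simulation. This is a hedged toy, not the EVT/GARCH machinery the thesis actually combines; the synthetic return series and 99% confidence level below are made up for the example.

```python
import numpy as np

def var_es(returns, alpha=0.99):
    """Historical-simulation Value at Risk and Expected Shortfall.

    VaR is the alpha-quantile of the loss distribution (losses = -returns);
    ES is the average loss beyond that threshold, so ES >= VaR always.
    """
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)       # loss exceeded with probability 1 - alpha
    es = losses[losses >= var].mean()      # mean loss in the tail beyond VaR
    return var, es

# Synthetic daily log-returns standing in for an index series
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 10_000)
var, es = var_es(r, alpha=0.99)
```

EVT refines exactly this tail estimate by fitting a parametric distribution (e.g. generalized Pareto) to the losses beyond a high threshold instead of relying on the empirical quantile alone.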
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Xia, Shujie. "IPO Underpricing: The Role of Expected Future Business Conditions". Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/cmc_theses/2050.

Texto completo
Resumen
In this paper, I explore whether expected economic conditions play a role in determining the degree of IPO underpricing. My hypothesis is that, given the current conditions, IPO underpricing is higher when the expected economic conditions are worse. I test the hypothesis at the aggregate level and at the industry level, and find no evidence that supports it at either level. At the aggregate level, I find that the “hot” market, a period when underpricing is significantly higher than in other periods, exists when both the current and expected economic conditions are good. At the industry level, I find that the underpricing pattern of technology-industry IPOs prior to the dot-com crash is consistent with my hypothesis. It seems that insiders saw signs of an imminent bubble burst and rushed to take companies public by accepting higher underpricing.
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Tanaka, Hiroyuki. "Essays on Comparative Statics on Non-expected Utility Models". Kyoto University, 2019. http://hdl.handle.net/2433/242455.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Yu, Liyang. "Expected modeling errors and low cost response surface methods /". The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488194825668827.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Vestin, Alexander y Frank Movin Sequeira. "Expected and Achieved Outcomes of Reshoring: A Swedish Perspective". Thesis, Högskolan i Jönköping, Tekniska Högskolan, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-36238.

Full text
Abstract
Purpose: Over the last couple of decades, globalization has intensified market competition. As a result, companies offshore heavily to low-wage countries to gain competitiveness through lower costs. Offshoring is the relocation of manufacturing activities to existing manufacturing sites in foreign countries. In recent years, low-wage countries have grown and developed, and studies show that these low-cost environments are increasing in cost, eroding the benefits of offshoring. This phenomenon has sparked a new trend, ‘reshoring’: bringing manufacturing back to the home country, which has been acknowledged by both researchers and practitioners. The trend has become more distinct as a growing number of companies that previously offshored manufacturing activities return them to the home country. Research on reshoring focuses mainly on a “why” perspective: the drivers that cause reshoring and the barriers that prevent it. Research on what companies expected from reshoring and what they achieved afterwards is limited, however, especially in the high-cost environment of Sweden. The purpose of this study is to explore the expected and achieved outcomes of the reshoring process through a multiple case study of four Swedish companies that have reshored manufacturing back to Sweden.
Method: A systematic literature review was conducted to gain knowledge of the phenomenon, and from it an interview guide was created to support data collection. The thesis used a multiple case study; data were collected through semi-structured interviews and documents. The findings were analyzed within each case, across cases, and against the literature.
Findings: To analyze the outcomes on the same premises, a framework was created. All the outcomes from the literature were categorized by firms’ operational and competitive capabilities: cost, quality, delivery, flexibility, service, innovation, environment, culture, risk mitigation, reputation and trust, and government legislation. All the case companies had a successful reshoring process, and all their expected outcomes were achieved. Compared with the expected outcomes found in the literature, however, the companies expected less from reshoring; they were unaware of its full extent, since their expectations were limited. The most common expected outcomes, found in all the cases and in theory, were to decrease total cost, increase delivery speed, increase reputation and trust, and draw on the comfort of the home culture. A thorough analysis of achieved outcomes, in the cases and in the literature, showed that all the case companies achieved lower total cost, faster deliveries, and higher reputation and trust. Comparing all achieved outcomes in the cases and in the literature makes it evident that researchers have studied reshoring mainly from a “why” and theoretical perspective, leaving aside the effects after reshoring, which this thesis addresses. A comparison of expected and achieved outcomes across the case companies shows that they achieved lower costs, higher quality, better service, and higher reputation and trust, beyond what they expected.
Implications: Reshoring back to Sweden would bring back more manufacturing jobs and encourage further local sourcing within the country. Strategic collaboration within the supply chain in the home country would make companies more responsive to customer demand. Geographically, the literature lacks case studies from Sweden; this thesis therefore contributes to theory by presenting successful reshoring case studies from Sweden.
APA, Harvard, Vancouver, ISO, etc. styles
50

Kristensson, Lars. "Estimation of Expected Lowest Fare in Flight Meta Search". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-108475.

Full text
Abstract
This thesis explores the possibility of estimating the outcome of a flight ticket fare comparison search, also called flight meta search, before it has been performed, as being able to do this could be highly useful in improving the flight meta search technology used today. The algorithm explored is a distance-weighted k-nearest neighbour, where the distance metric is a linear equation over sixteen first-degree features extracted from the input of the search. It is found that while the approach may have potential, the distance metric used in this thesis is not sufficient to capture the similarities needed, and the final algorithm performs only slightly better than random. The thesis closes with a series of possible further improvements that could help raise the algorithm's performance to a more useful level.
APA, Harvard, Vancouver, ISO, etc. styles
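The abstract above describes a distance-weighted k-nearest-neighbour estimator whose distance metric is a linear function of the input features. As a rough sketch of that general technique (not the thesis's actual implementation; the feature values, weights, and choice of k here are hypothetical):

```python
def linear_distance(a, b, weights):
    """A linear distance metric: weighted sum of absolute feature differences."""
    return sum(w * abs(x - y) for w, x, y in zip(weights, a, b))

def weighted_knn_predict(query, data, targets, weights, k=3, eps=1e-9):
    """Distance-weighted k-NN: average the k nearest targets, each
    weighted by the inverse of its distance to the query."""
    neighbours = sorted(
        (linear_distance(query, x, weights), t) for x, t in zip(data, targets)
    )[:k]
    inv = [1.0 / (d + eps) for d, _ in neighbours]  # closer points count more
    return sum(w * t for w, (_, t) in zip(inv, neighbours)) / sum(inv)
```

In the thesis's setting, `data` would hold the sixteen extracted features of past searches, `targets` their observed lowest fares, and `weights` the coefficients of the learned linear metric.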