To see the other types of publications on this topic, follow the link: Variance ratio.

Dissertations / Theses on the topic 'Variance ratio'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Variance ratio.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Lains, João Luís da Silva. "Testing the random walk hypothesis with variance ratio statistics." Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/11801.

Full text
Abstract:
Master's in Finance
This dissertation aims to test the random walk hypothesis on the yield curve of United States Treasury bonds for the period between 1980 and 2014. To achieve this objective, and after a review of the literature, the variance and unit root tests considered most suitable and most powerful were carried out. The data required for this study were collected from a study by the United States Federal Reserve, which has computed these yields from 1961 to the present. The method chosen to obtain the unit root results was the Augmented Dickey-Fuller unit root test, and the variance tests used were the Chow-Denning (1993) multiple variance ratio test, the joint multiple version of Wright's rank and sign tests, and the Choi (1999) automatic variance ratio test. The sample includes more than 8,000 observations for each of the yields studied (1-, 5-, 10- and 20-year zero-coupon and par yields) over a period of 34 years. The results allowed the detection of several periods in which the random walk in U.S. Treasury yields holds, as well as others in which it does not. To this end, we carried out a comparative analysis between the results of the variance tests and major events in the American economy, among which we chose to highlight three periods: the 1980s, the economic expansion from the 1990s to the beginning of the 21st century, and the post-2008 crisis period in which quantitative easing was implemented.
The random walk hypothesis in the U.S. Treasury yield curve had not previously been studied, and it is surprising that researchers have not filled that void by testing it. Yet the U.S. Treasury securities market is a benchmark, as U.S. Treasuries are considered to be risk-free. This benchmark is used to forecast economic developments, to analyse securities in other markets, to price other fixed-income securities and to hedge positions taken in other markets. This study applies the Chow-Denning (1993) multiple variance ratio test, the joint multiple version of Wright's rank and sign tests and the Choi (1999) automatic variance ratio test, and we also use the well-known Augmented Dickey-Fuller unit root test to help define the methodology used in the study. The database used permits the estimation of the relative daily variation of the U.S. Treasury yield curve from January 1980 to December 2014. We hope that this analysis can provide useful information to traders and investors and contribute to understanding the pattern and behaviour of yield movements.
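For readers less familiar with the statistic behind these tests, a minimal sketch of the basic (unstudentized) variance ratio follows; the function, parameter choices and toy data are illustrative assumptions, not code or values from the dissertation.

```python
import numpy as np

def variance_ratio(log_prices, q):
    """Simple variance ratio VR(q): under a random walk, the variance of
    q-period returns is q times the variance of 1-period returns, so
    VR(q) should be close to 1."""
    r1 = np.diff(log_prices)                  # 1-period log returns
    rq = log_prices[q:] - log_prices[:-q]     # overlapping q-period log returns
    return np.var(rq, ddof=1) / (q * np.var(r1, ddof=1))

# Toy check: a simulated random walk should give VR(q) near 1.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=2000))
print(variance_ratio(walk, q=5))
```

Under the random walk hypothesis VR(q) is approximately 1 at every horizon q; the Chow-Denning, Wright and Choi procedures mentioned above refine this idea with joint, rank/sign-based and data-driven-horizon versions of the same statistic.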
APA, Harvard, Vancouver, ISO, and other styles
2

Ladak, Al-Karim Madatally. "Resampling-based variance estimators in ratio estimation with application to weigh scaling." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29195.

Full text
Abstract:
Weigh scaling is a method of estimating the total volume of timber harvested from a given region. The implementation of statistical sampling techniques in weigh scaling is described, along with related issues. A review of ratio estimators, along with variance estimators of the classical ratio estimator, is conducted. The estimation of the variance of the estimated total volume is considered using jackknife- and bootstrap-based variance estimators. Weighted versions of the jackknife and bootstrap variance estimators are derived using influence functions and Fisher information matrices. Empirical studies of analytic and resampling-based variance estimators are conducted, with particular emphasis on small-sample properties and on robustness with respect to both the homoscedastic-variance and zero-intercept population characteristics. Under a squared error loss function, the resampling-based variance estimators are shown to perform very well at all sample sizes in finite populations with normally distributed errors. These estimators are found to have small negative biases for small sample sizes and to be robust with respect to heteroscedasticity.
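As background to the estimators compared in this work, one standard way to write the classical ratio estimator and a delete-one jackknife variance estimator is sketched below; the notation is generic and not taken from the thesis.

\[
\hat{R} = \frac{\bar{y}}{\bar{x}}, \qquad \hat{Y}_{R} = \hat{R}\,X,
\]
where \(X\) is the known population total of the auxiliary variable (for example, weight). The delete-one jackknife variance estimator of \(\hat{Y}_{R}\) is
\[
\widehat{\operatorname{Var}}_{\mathrm{J}}(\hat{Y}_{R})
  = \frac{n-1}{n} \sum_{i=1}^{n} \Big( \hat{Y}_{R(i)} - \bar{\hat{Y}}_{R(\cdot)} \Big)^{2},
\qquad
\bar{\hat{Y}}_{R(\cdot)} = \frac{1}{n}\sum_{i=1}^{n} \hat{Y}_{R(i)},
\]
with \(\hat{Y}_{R(i)}\) the estimate recomputed with observation \(i\) removed.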
Faculty of Science, Department of Statistics, Graduate
APA, Harvard, Vancouver, ISO, and other styles
3

Kougoulis, Periklis Markos. "Essays on a generalized variance-ratio statistic and the comovement of stock returns." Thesis, University of Essex, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Paříková, Adéla. "Hedge Ratio Estimation: Comparison of Constant OLS, ARCH and GARCH Approaches." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-206944.

Full text
Abstract:
Volatile commodity prices create financial risk for the individuals and economic agents exposed to them. One way to minimize the impact of a change in the market price is to hedge it with futures contracts. The estimation of the optimal hedge ratio (the ratio between units of spot and futures contracts) is the focus of this study. Its objective is to compare hedge ratios based on the minimum-variance methodology using three methods, OLS, ARCH and GARCH, by measuring their hedging effectiveness in terms of variance and value-at-risk reduction. The results differ across commodities; however, several conclusions can be drawn. The ARCH-based hedge ratios do not perform significantly worse than the GARCH-based hedge ratios. The same estimation method can be used for assets with similar return dynamics, and a well-performing hedge can be expected. Hedge ratios of strongly correlated assets estimated by different methods tend to have very similar values to one another and to the related correlation coefficient. More generally, the best-performing hedge ratios are those with values very close to the correlation between spot and futures one-day returns.
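For reference, the minimum-variance hedge ratio underlying all three estimation methods, and the variance-reduction measure of hedging effectiveness, are commonly written as follows (generic notation, a sketch rather than the thesis's own formulation):

\[
h^{*} = \frac{\operatorname{Cov}(\Delta S, \Delta F)}{\operatorname{Var}(\Delta F)}
      = \rho_{SF}\,\frac{\sigma_{\Delta S}}{\sigma_{\Delta F}},
\qquad
HE = 1 - \frac{\operatorname{Var}(\Delta S - h^{*}\,\Delta F)}{\operatorname{Var}(\Delta S)}.
\]

In the constant (OLS) case, \(h^{*}\) is simply the slope of a regression of spot price changes \(\Delta S\) on futures price changes \(\Delta F\); the ARCH and GARCH versions replace the unconditional moments with conditional ones.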
APA, Harvard, Vancouver, ISO, and other styles
5

MacQuillan, Anthony Howard Felix. "The variance of nerve axon to muscle fibre ratio and its effect on outcome in functional muscle transfer." Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/1444995/.

Full text
Abstract:
The results of functional muscle transfer for the treatment of facial palsy are varied. Surgical technique in such cases remains constant with only the selected ramus of the buccal branch of the facial nerve changing. Differing sized branches of the facial nerve in the rabbit were used to reinnervate a constant sized muscle transfer to see if this might explain the spectrum of clinical results seen and additionally provide some insight into the phenomenon of "late onset tightening" seen in some cases. Peripheral limb reconstruction using functional muscle transfer following injury or tumour resection has been widely reported in the literature. The results of such procedures often fail to deliver the physiological strength that might be hoped for in relation to the size of the transferred muscle. Differing sized pure motor nerves were used to reinnervate a constant sized muscle transfer to see if functional results could be improved in an experimental model analogous to peripheral limb reconstruction. The rectus femoris muscle in the New Zealand White rabbit was used as a standardised muscle transfer for investigation into how the reinnervating axonal load affects outcome, defined in terms of physiological force developed by the muscle post-operatively, looking at both the central and peripheral nervous systems. Corroboratory investigations were also undertaken to determine the reinnervating characteristics of the nerves studied and those of reinnervated muscle.
APA, Harvard, Vancouver, ISO, and other styles
6

Shen, Paul. "Empirical Likelihood Tests For Constant Variance In The Two-Sample Problem." Bowling Green State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1544187568883762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mårtensson, Jonathan. "Portfolio optimisation : improved risk-adjusted return?" Thesis, Uppsala University, Department of Economics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6397.

Full text
Abstract:

In this thesis, portfolio optimisation is used to evaluate whether a specific sample of portfolios has a higher risk level or a lower expected return than may be obtained through optimisation. It also compares the return of the optimised portfolios with the return of the original portfolios. The risk analysis software Aegis Portfolio Manager developed by Barra is used for the optimisations. With the expected return and risk level used in this thesis, all portfolios can obtain a higher expected return and a lower risk. Over a six-month period, the optimised portfolios do not consistently outperform the original portfolios, and therefore it seems as though the optimisation does not improve the return of the portfolios. This might be due to the uncertainty of the expected returns used in this thesis.

APA, Harvard, Vancouver, ISO, and other styles
8

Söderberg, Gustav, and Rikard Nyström. "Insider Trading - An Efficiency Contributor?" Thesis, Umeå universitet, Företagsekonomi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-73596.

Full text
Abstract:
This research has studied the effect of insider trading activity on the level of informational efficiency. The authors have used insider data from Finansinspektionen and data on stock prices, market capitalization and GDP from Thomson Reuters Datastream. The sample includes 193 companies on the Swedish stock exchange over a period of 10 years. A variance ratio test employed on moving sub-sample windows was used to establish the level of time-varying informational efficiency, which was subsequently used as the dependent variable in an OLS regression. The regression results imply a negative effect of insider purchasing on the informational efficiency of firm prices, while insider selling has a positive effect; this can be concluded at a 99% confidence level. The results are interesting since they imply an asymmetrical effect of insider trading on informational efficiency, while current insider legislation treats buying and selling by insiders equally. Thus, the results are of interest for future adjustments of laws regulating insider trading.
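A minimal sketch of the moving sub-sample idea is given below, assuming daily log prices and a simple (unstudentized) variance ratio; the window length, horizon and function names are illustrative rather than the authors' implementation.

```python
import numpy as np

def variance_ratio(log_prices, q):
    # VR(q) = Var(q-period returns) / (q * Var(1-period returns)); ~1 under a random walk
    r1 = np.diff(log_prices)
    rq = log_prices[q:] - log_prices[:-q]
    return np.var(rq, ddof=1) / (q * np.var(r1, ddof=1))

def rolling_vr(log_prices, q=5, window=250):
    # One VR value per moving sub-sample window, giving a time-varying efficiency measure
    out = []
    for start in range(len(log_prices) - window + 1):
        out.append(variance_ratio(log_prices[start:start + window], q))
    return np.array(out)

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=2500))   # simulated log-price series
vr_series = rolling_vr(prices)
inefficiency = np.abs(vr_series - 1.0)      # a candidate dependent variable for an OLS regression
```

The absolute deviation of VR from one in each window can then be regressed on insider buying and selling activity, along the lines described in the abstract.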
APA, Harvard, Vancouver, ISO, and other styles
9

Nyqvist, Vidar, and Mario Milic. "Bitcoins roll i en Investeringsportfölj : A Mean-Variance Analysis of the Diversification Benefits." Thesis, Linnéuniversitetet, Institutionen för ekonomistyrning och logistik (ELO), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104722.

Full text
Abstract:
The aim of this thesis is to explore the role of bitcoin in an investment portfolio. The paper examines the nature of bitcoin and additionally how bitcoin compares to gold when included in an investment portfolio. This report uses the historical value of bitcoin and investigates with a Mean-Variance model how the risk-adjusted return of an optimized portfolio is affected when bitcoin is a constituent. By comparing Sharpe Ratios from the optimized portfolios, a conclusion can be drawn as to whether bitcoin affects the maximum Sharpe ratio or the global minimum variance point. Our study suggests that including bitcoin in an investment portfolio increases the risk-adjusted return of the portfolio. In addition, portfolios optimized with bitcoin outperform the market. Further, we conclude that bitcoin has a relatively high correlation as compared to gold with the assets in the study. Hence, bitcoin is not the new gold.
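For context, the two portfolios typically compared in such a mean-variance analysis can be written in standard textbook notation (not taken from the thesis):

\[
SR(w) = \frac{w^{\top}\mu - r_f}{\sqrt{w^{\top}\Sigma\, w}},
\qquad
w_{\mathrm{GMV}} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}},
\qquad
w_{\mathrm{tan}} \propto \Sigma^{-1}(\mu - r_f\mathbf{1}),
\]
where \(\mu\) is the vector of expected returns, \(\Sigma\) the return covariance matrix, \(r_f\) the risk-free rate and \(\mathbf{1}\) a vector of ones; \(w_{\mathrm{GMV}}\) is the global minimum-variance portfolio and \(w_{\mathrm{tan}}\) the maximum-Sharpe (tangency) portfolio, whose weights are rescaled to sum to one. Comparing these quantities with and without bitcoin as a constituent is the exercise described above.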
APA, Harvard, Vancouver, ISO, and other styles
10

Sundqvist, Daniel. "Hedge Funds in a Traditional Portfolio : A Quantitative Case Study Made on the Swedish Hedge Fund Market." Thesis, Umeå University, Umeå School of Business, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-23363.

Full text
Abstract:

Hedge funds are a debated subject in today's financial industry. During 2008, despite hedge funds' absolute return target, the global hedge fund industry showed a negative performance, whilst the Swedish hedge fund market performed relatively well in comparison. Many studies have investigated the effect of incorporating hedge funds in a traditional portfolio, though none has focused separately on the Swedish market. From a global perspective it is quite easy to invest in hedge fund portfolios due to the existence of investable indices; investing in the Swedish market is a more complex matter. The SIX Harcourt HFXS Index is a Swedish hedge fund index representing the Swedish hedge fund market, though it is not investable. Hence it would be interesting to see whether it is possible to create an investable version of SIX Harcourt HFXS. When creating an investable index, several administrative costs arise, and in order to cover these costs it would be interesting to see whether it is possible to optimize the SIX Harcourt HFXS Index with the purpose of achieving an outperformance that could cover any administrative costs of setting up the investable version. Also, since the optimized version must replicate the standard SIX Harcourt HFXS Index, it must maintain a certain level of correlation.

This thesis, which is based on a positivistic epistemology, is built upon a quantitative case study in which the SIX Harcourt HFXS Index is optimized with the purpose of achieving an outperformance in terms of risk-adjusted return. The optimization uses an adjusted mean-variance methodology and is limited to a maintained correlation above 0.9 with the standard SIX Harcourt HFXS Index. The optimization is carried out using an Excel application created by Harcourt Investment Consulting.

Also, based on the outperformance of Swedish hedge funds compared to global hedge funds, this study aims to show the effect of incorporating Swedish hedge funds in a traditional portfolio consisting of equities and bonds. This effect is analyzed using several performance and risk measures.

The study shows that it is possible to optimize the SIX Harcourt HFXS Index and produce an outperformance of approximately 1.5% per annum with a maintained correlation above 0.9. It also shows that the effect of incorporating Swedish hedge funds into a traditional portfolio is positive with regard to both risk and return.

APA, Harvard, Vancouver, ISO, and other styles
11

Adams, William Mark 1961. "Application of the variance-to-mean ratio method for determining neutron multiplication parameters of critical and subcritical reactors (reactor noise, Feynman-alpha)." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zou, Shanshan. "Empirical analysis on random walk behavior of foreign exchange rates." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33836.

Full text
Abstract:
This thesis conducts a comprehensive examination of the random walk behavior of 29 foreign exchange rates over the floating exchange rate regime period, using variance ratio tests. The cross-country and time-series tests show that the random walk model cannot be rejected for the majority of currencies, and that random walk behavior is quite volatile across the whole floating-regime period. The thesis then explores possible factors that can explain the probability of rejection or non-rejection of the random walk model, using linear as well as nonlinear probability models, and finds that factors such as capital openness and the investment-to-trade ratio significantly increase the chance of an exchange rate exhibiting random walk behavior.
APA, Harvard, Vancouver, ISO, and other styles
13

Xu, Weijun, Banking & Finance, Australian School of Business, UNSW. "Optimal hedging strategy in stock index future markets." Awarded by: University of New South Wales, Banking & Finance, 2009. http://handle.unsw.edu.au/1959.4/43728.

Full text
Abstract:
In this thesis we search for the optimal hedging strategy in stock index futures markets by providing a comprehensive comparison of the variety of model types in the related literature. We concentrate on the strategy that minimizes portfolio risk, i.e., the minimum variance hedge ratio (MVHR), estimated from a range of time series models with different assumptions about market volatility: linear regression models assuming time-invariant volatility, GARCH-type models capturing time-varying volatility, Markov regime switching (MRS) regression models assuming state-varying volatility, and MRS-GARCH models capturing both time-varying and state-varying volatility. We use both Maximum Likelihood Estimation (MLE) and a Bayesian Gibbs-sampling approach to estimate the models with four commonly used index futures contracts: S&P 500, FTSE 100, Nikkei 225 and Hang Seng index futures. We apply risk reduction and utility maximization criteria to evaluate the hedging performance of the MVHRs estimated from these models. The in-sample results show that the optimal hedging strategy for the S&P 500 and the Hang Seng index futures contracts is the MVHR estimated using the MRS-OLS model, while the optimal hedging strategy for the Nikkei 225 and the FTSE 100 futures contracts is the MVHR estimated using the Asymmetric-Diagonal-BEKK-GARCH and the Asymmetric-DCC-GARCH model, respectively. In the out-of-sample investigation, the time-varying models such as the BEKK-GARCH models, especially the Scalar-BEKK model, outperform the state-varying MRS models for the majority of futures contracts in both one-step- and multiple-step-ahead forecast cases. Overall the evidence suggests that there is no single model that can consistently produce the best strategy across different index futures contracts. Moreover, using more sophisticated models such as MRS-GARCH models provides some benefit compared with the corresponding single-state GARCH models in the in-sample case but not in the out-of-sample case, and compared with other types of models MRS-GARCH models do not necessarily improve hedging efficiency. Furthermore, there is evidence that using the Bayesian Gibbs-sampling approach to estimate the MRS models provides investors with a more efficient hedging strategy than the MLE method.
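The quantity estimated by all of these specifications can be summarized by the conditional minimum variance hedge ratio, written here in standard notation rather than the thesis's own:

\[
h_{t}^{*} = \frac{\operatorname{Cov}_{t-1}(\Delta s_{t}, \Delta f_{t})}{\operatorname{Var}_{t-1}(\Delta f_{t})}
          = \frac{\sigma_{sf,t}}{\sigma_{f,t}^{2}},
\]
where \(\Delta s_t\) and \(\Delta f_t\) are spot and futures returns and the conditional (co)variances \(\sigma_{sf,t}\) and \(\sigma_{f,t}^{2}\) come from the chosen model: constants under OLS, GARCH/BEKK/DCC recursions under the time-varying specifications, or regime-dependent quantities under the MRS variants.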
APA, Harvard, Vancouver, ISO, and other styles
14

Dominicus, Annica. "Latent variable models for longitudinal twin data." Doctoral thesis, Stockholm : Mathematical statistics, Dept. of mathematics, Stockholm university, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Talbot, Denis. "Estimation de la variance et construction d'intervalles de confiance pour le ratio standardisé de mortalité avec application à l'évaluation d'un programme de dépistage du cancer." Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27373/27373.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Santos, Filipe Caldeira. "Measuring hedging performance of futures for non main european indices." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/17665.

Full text
Abstract:
Master's in Finance
Hedging in the absence of liquid futures and options markets implies either the use of over-the-counter instruments, assuming the associated counterparty risk, or, alternatively, the application of indirect hedging techniques (cross-hedging), which imply correlation risk. This issue is extremely important for index trackers that need to hedge the risk of their exposures when no relevant futures contract exists (as is the case for the ASE, BEL20 and CYSMMAPA indices). Even when such contracts exist, insufficient liquidity (as in the case of the ATX and PSI20 indices) makes hedging through them inefficient, especially for short-term hedging. Consequently, in these cases indirect hedging, usually referred to as cross-hedging, can be a viable alternative. This dissertation studies the efficiency of applying cross-hedging techniques to hedge portfolios tracking (non-main) European indices using more liquid futures contracts, i.e., those written on the main European indices. We conclude that, in the cases studied, the efficiency of indirect hedging depends on the cross-hedging technique applied as well as on the effectiveness measure used. In addition, we empirically test whether the results found can be explained by the integration of the respective economies.
The exercise of hedging in the absence of a liquid futures or options market requires either the use of over-the-counter contracts, with counterparty risk, or the practice of cross-hedging with mature and liquid contracts, with the associated correlation risk. This is a significant issue for index trackers that need to hedge their exposure while facing no relevant futures contract on the underlying stock index they are long (such as the ASE, BEL20 and CYSMMAPA). Even where such contracts exist, their severe illiquidity (as with the contracts written on the ATX and PSI20) makes opening and closing positions over a short period of time more troublesome than simple speculation. Therefore, cross-hedging with stock index futures on other markets could be a possible solution. This thesis explores the effectiveness of cross-hedging in Europe for non-main stock indices using liquid contracts written on the main European indices. We find that the hedging performance depends on the hedging technique under scope as well as on the hedging effectiveness measure adopted. We also examine whether the findings are related to the economic integration of the economies involved in the cross-hedge exercise.
APA, Harvard, Vancouver, ISO, and other styles
17

Sghaier, Nadia. "Les cycles de souscription en assurance non vie : Étude de la dynamique du ratio combiné et des déterminants des primes." Paris 10, 2011. http://www.theses.fr/2011PA100046.

Full text
Abstract:
Despite the considerable number of theoretical and empirical studies analyzing underwriting cycles in non-life insurance, no clear and unique conclusion seems to emerge. This lack of consensus appears to stem from the inadequacy of the linear modelling used and from the neglect of cointegration properties, both in the time series framework and in panel data. This thesis therefore re-examines underwriting cycles, focusing on the dynamics of the combined ratio and on the determinants of premiums for France and for other countries. In the first chapter, we present a review of the literature on the subject. In the second chapter, we analyze underwriting cycles and the determinants of premiums for the aggregate French sector using nonlinear time series econometrics. In the third chapter, we use recent developments in panel data econometrics, taking into account non-stationarity and nonlinearity, first to conduct a disaggregated analysis, by French business line, of underwriting cycles and the determinants of premiums, and second to carry out a comparative analysis of the determinants of premiums in an international framework. The results obtained for the aggregate French sector lead us to conclude that the cyclical phenomenon disappeared in France from 1989 onwards and that the dynamics of the combined ratio are better modelled by a smooth transition regression (STR) model. The pricing of premiums appears to change from 1985, and the growth rate of premiums appears to be represented by a smooth transition error correction model (STECM). The estimation of static and dynamic panels then allows us to detect similarities in the dynamics of the combined ratios of the non-life insurance lines. Likewise, the estimation of panel cointegration relations allows us to identify differences in the determinants of premiums across non-life insurance lines. Finally, the cross-country comparative analysis shows that the growth rates of countries' premiums are reproduced by a panel smooth transition error correction model (PSTECM).
Despite the considerable number of theoretical and empirical studies analyzing underwriting cycles in non-life insurance, no clear and unique conclusion seems to emerge. This lack of consensus appears to arise from the lack of suitable linear modelling and from neglecting the cointegration properties applicable both in the time series and in the panel data context. This thesis therefore reviews the underwriting cycles while focusing on the dynamics of the combined ratio and the determinants of premiums for the case of France and for other countries. In the first chapter, we present a review of the literature on the subject. In the second chapter, we analyze the underwriting cycle and the determinants of premiums for the aggregate French sector using the econometrics of nonlinear time series. In the third chapter, we apply recent developments in the econometrics of panel data, taking into account non-stationarity and nonlinearity, first to conduct a disaggregated analysis by French line of the underwriting cycle and the determinants of premiums and, second, to carry out a comparative analysis of the determinants of premiums in an international framework. The results obtained for the aggregate French sector lead us to conclude that the cyclical phenomenon disappeared in France after 1989 and that the dynamics of the combined ratio are better modelled by a smooth transition regression (STR) model. The pricing of premiums seems to change from 1985, and the growth rate of premiums appears to be represented by a smooth transition error correction model (STECM). The estimation of static and dynamic panels then allows us to detect similarities in the dynamics of the combined ratios across non-life insurance lines. Similarly, the estimation of cointegration relations in panel data allows us to identify differences in the determinants of premiums across non-life insurance lines. Finally, the comparative analysis by country shows that the growth rates of premiums are reproduced by a panel smooth transition error correction model (PSTECM).
APA, Harvard, Vancouver, ISO, and other styles
18

Johann, Amanda Dalla Rosa. "Metodologias para a previsão do comportamento mecânico e para a análise da variação da porosidade de um solo siltoso tratado com cal em diferentes tempos de cura." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/72907.

Full text
Abstract:
The technique of treating soils with lime or cement has been used successfully in geotechnical engineering to improve the characteristics of the soil, which, being a complex and highly variable material, does not always meet the needs of the work to be carried out. Recent research on soils treated with lime shows the development of dosage methodologies based on rational criteria (such as the water/cement ratio for concrete), in which the voids/lime volume ratio plays a fundamental role in achieving the target strength. The volume of voids (or porosity) is an important factor in these dosage methodologies, and there are still no techniques to quantify this factor (such as the mercury intrusion porosimetry test for concrete), nor models that allow the behaviour of the porosity of these mixtures over the curing time to be understood (such as Powers's model for concrete). This research therefore aims to verify the influence of the amount of lime (Ca), porosity (h), moisture content (w) and curing time (t) on the unconfined compressive strength (qu), tensile strength (qt) and initial stiffness (Go) of a silty soil stabilized with lime (kaolin-lime mixtures), verifying the suitability of the voids/lime ratio for estimating qu, qt and Go. In addition, this research seeks to quantify the porosity of these soil-lime mixtures and to develop a model that allows the behaviour of their porosity over the curing time to be understood. To achieve the research objectives, unconfined compression tests, splitting tensile tests, measurements of Go, matric suction tests and mercury intrusion porosimetry tests were carried out. The results of the unconfined compression, tensile strength and initial stiffness tests show that increasing the amount of Ca, decreasing h and increasing t cause qu, qt and Go to increase, with qu, qt and Go growing linearly with the amount of lime and exponentially with the reduction of porosity. Thus, the voids/lime ratio (h/Cav), defined as the ratio between the porosity of the compacted mixture and the volumetric lime content, proves to be an adequate parameter for estimating qu, qt and Go. From these same results, it is observed that w also plays a fundamental role in predicting qu, qt and Go. Furthermore, from the unconfined compression, tensile strength and initial stiffness tests, it is observed that the existence of unique and distinct relationships controlling qu, qt and Go as a function of h, Cav and w proved to be very efficient for dosage relationships. Relationships between qu, qt, Go and h/Cav were also very satisfactory. In addition, statistical analyses of the data obtained in this experiment were carried out, and the results show, from the analysis of variance, that all the controllable factors chosen in the experiment are significant. The results of the mercury intrusion porosimetry tests show that porosity decreases with curing time. However, Powers's model did not fit perfectly when predicting the variation in porosity of the kaolin-lime mixtures studied.
The technique of treating soil with lime or cement has been used successfully in geotechnical engineering to improve the characteristics of the soil, which is a highly variable and complex material and does not always meet the needs of the earthwork to be performed. Recent research on soils treated with lime has focused on the development of dosage methodologies based on rational criteria (such as the water/cement ratio for concrete), where the voids/lime ratio plays a fundamental role in reaching the target strength. The void volume (or porosity) is an important factor in these dosage methodologies, yet there are no techniques that quantify this factor (such as mercury intrusion porosimetry for concrete), nor models that allow the behaviour of the porosity of these mixtures over long curing times to be understood (such as Powers's model for concrete). Thus, this research aims to determine the influence of the amount of lime (Ca), porosity (h), moisture content (w) and curing time (t) on the unconfined compressive strength (qu), tensile strength (qt) and initial stiffness (Go) of a silty soil stabilized with lime (kaolin-lime mixtures), checking the suitability of the voids/lime ratio for estimating qu, qt and Go. In addition, this research aims to quantify the porosity of these soil-lime mixtures and to adjust a model that allows the behaviour of their porosity during the curing time to be understood. To this end, unconfined compression tests, splitting tensile tests, measurements of Go, measurements of matric suction and mercury intrusion porosimetry tests were carried out in the present work. The results of the unconfined compression, tensile strength and initial stiffness tests show that increasing the amount of Ca, decreasing h and increasing t cause qu, qt and Go to increase. Furthermore, qu, qt and Go grow linearly with the amount of lime and exponentially with the reduction of porosity. The voids/lime ratio, defined as the ratio of the compacted mixture porosity to the volumetric lime content, adjusted by an exponent, proves to be an appropriate parameter for estimating qu, qt and Go. From these results, it is observed that w is also a fundamental parameter in predicting qu, qt and Go. Moreover, the existence of distinct and unique relationships controlling qu, qt and Go as a function of h, Cav and w proved to be very efficient for dosage relationships. Relationships between qu, qt, Go and h/Cav were also very satisfactory. Furthermore, statistical analyses of the results obtained in this experiment were performed, and they demonstrate, through analysis of variance, that all the controllable factors chosen in the experiment are significant. The mercury intrusion porosimetry results show that porosity decreases with increasing curing time. However, Powers's model did not fit perfectly when predicting the variation in porosity of the kaolin-lime mixtures studied.
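The dosage relationship referred to here is typically expressed in a power-law form of the following kind; the symbols and the functional form are a hedged sketch of this family of relations, with the exponents left as fitted constants rather than the thesis's values:

\[
q_{u} \;=\; A \left( \frac{\eta}{(C_{iv})^{a}} \right)^{-B},
\]
where \(q_{u}\) is the unconfined compressive strength, \(\eta\) the porosity of the compacted mixture, \(C_{iv}\) the volumetric lime content, \(a\) the adjustment exponent applied to the lime content, and \(A\), \(B\) fitted constants; analogous expressions hold for the tensile strength \(q_t\) and the initial stiffness \(G_0\).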
APA, Harvard, Vancouver, ISO, and other styles
19

Helmersson, Tobias, Hana Kang, and Robin Sköld. "Gold During Recessions : A study about how gold can improve the performance of a portfolio during recessions." Thesis, Jönköping University, JIBS, Accounting and Finance, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-7746.

Full text
Abstract:

Problem: When choosing the topic for this study, the economy was on the brink of a recession. Many experts made varying statements regarding this fact, and further reading in this area led us to ask: can an inclusion of gold enhance the performance of an index portfolio during recessions? And if so, how much should be allocated to gold?

Purpose: The purpose of this thesis is to look back at the historical price development of gold and the DJIA during recessions in order to find out whether an inclusion of gold can improve a DJIA index portfolio held in today's recession. In addition, by analyzing the risks and possibilities of gold, the optimal allocation of gold in a DJIA portfolio is investigated.

Method: The methodological approach is quantitative data analysis. Using historical data, new empirical findings are derived through a deductive approach. This method was chosen due to the nature of the purpose and in order to best give a general answer to our research questions.

Conclusion: The gold price is strongly influenced by uncertainty, and even though an optimal allocation of gold in each recession could be found, no general optimal allocation applicable in today's recession could be found. Gold has higher risk (higher variance) than the DJIA, but is compensated with higher return as well.

APA, Harvard, Vancouver, ISO, and other styles
20

Jonsson, Robin, and Jessica Radeschnig. "Momentum Investment Strategies with Portfolio Optimization : A Study on Nasdaq OMX Stockholm Large Cap." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-24848.

Full text
Abstract:
This report covers a study testing the possibility of adding portfolio optimization by mean-variance analysis as a tool to extend the concept of momentum strategies, in contrast to the naive allocation used by Jegadeesh & Titman (1993). Further, these active investment strategies are compared with a passive benchmark as well as a randomly selected portfolio over the entire study period. The study showed that the naive allocation model outperformed the mean-variance model both economically and statistically. No indication was obtained of a lagged return effect when letting a mean-variance model choose weights for a quarterly holding period, and the resulting investment recommendation is to follow a naive investment strategy within a momentum framework.
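A minimal sketch of the two weighting schemes being contrasted is given below, assuming a matrix of monthly asset returns; the look-back length, number of winners and function name are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def momentum_weights(returns, lookback=12, top_n=5, mean_variance=False):
    """Weights for the top-n past winners over the formation window.

    returns: (T, N) array of periodic asset returns.
    Naive scheme: equal weights on the winners (Jegadeesh & Titman style).
    Mean-variance scheme: w proportional to inv(Sigma) * mu on the winners,
    rescaled to sum to one (an unconstrained sketch, ignoring short-sale limits).
    """
    window = returns[-lookback:]
    past_perf = (1.0 + window).prod(axis=0) - 1.0   # cumulative formation-period return
    winners = np.argsort(past_perf)[-top_n:]        # indices of the top-n assets
    w = np.zeros(returns.shape[1])
    if mean_variance:
        mu = window[:, winners].mean(axis=0)
        sigma = np.cov(window[:, winners], rowvar=False)
        raw = np.linalg.solve(sigma, mu)            # unconstrained mean-variance weights
        w[winners] = raw / raw.sum()
    else:
        w[winners] = 1.0 / top_n
    return w

rng = np.random.default_rng(2)
rets = rng.normal(0.01, 0.05, size=(60, 20))        # 60 months, 20 assets (toy data)
print(momentum_weights(rets, mean_variance=True))
```

The naive branch mirrors the equal-weighted winner portfolio of Jegadeesh & Titman (1993), while the mean-variance branch is an unconstrained sketch that ignores estimation error, which is one common explanation for why such weights can underperform the naive rule.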
APA, Harvard, Vancouver, ISO, and other styles
21

Dalla, Rosa Amanda. "Estudo dos parâmetros-chave no controle da resistência de misturas solo-cinza-cal." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/17359.

Full text
Abstract:
In geotechnical engineering the main material used, the soil, is complex and highly variable, and it does not always meet the needs of the work to be carried out. With the constant search for solutions that provide soil improvement, cost reduction and the preservation of natural resources, the reuse of waste has been gaining prominence, as in the case of the use of fly ash in the stabilization of soils with lime. However, there are still no dosage methodologies for these mixtures based on more rational criteria, such as the water/cement ratio for concrete. This research therefore aims to quantify the influence of the variables of interest (amount of lime, amount of fly ash, porosity and curing time) on the strength of a soil stabilized with lime and fly ash, verifying the suitability of the voids/lime ratio for estimating the unconfined compressive strength of these mixtures. To this end, unconfined compression tests and matric suction measurements were carried out. The results show that increasing the amount of lime and fly ash, the dry unit weight and the curing time increases the unconfined compressive strength, which grows linearly with the amount of lime and exponentially with the reduction of porosity. Thus, the voids/lime ratio, defined as the ratio between the porosity of the compacted mixture and the volumetric lime content, adjusted by an exponent, proves to be an adequate parameter for estimating the unconfined compressive strength of the mixtures studied. In addition, the existence of unique and distinct relationships controlling the unconfined compressive strength of the soil studied as a function of porosity, volumetric lime content and amount of fly ash at 28, 60 and 90 days of curing proved to be very efficient for dosage relationships. Moreover, the statistical analysis of the data obtained in an experiment is extremely important. One methodology used to analyze such data is Design of Experiments, which is strongly supported by statistical concepts and intended to optimize the planning, execution and analysis of an experiment. The analysis carried out in this experiment is therefore based on a full factorial design that investigates all combinations of levels of the controllable factors: amount of lime, amount of fly ash, porosity and curing time. The results show, from the analysis of variance, that all the controllable factors chosen in the experiment are significant, as well as all the interactions between them. The Design of Experiments methodology proved efficient in determining which intervening factors are important for the phenomenon under study.
The main material used in geotechnical engineering, the soil, is complex and highly variable, and does not always meet the needs of the work to be performed. With the constant search for solutions that provide soil improvement, cost reduction and the preservation of natural resources, waste recovery has been increasing, such as the use of fly ash in the stabilization of soils with lime. However, there are still no dosage methods for these mixtures based on more rational criteria such as the water/cement ratio for concrete. Thus, this research aims to quantify the influence of the variables of interest (lime amount, fly ash quantity, porosity and curing time) on the strength of a soil stabilized with lime and fly ash, verifying the suitability of the voids/lime ratio for estimating the unconfined compressive strength of these mixtures. Unconfined compression tests and measurements of matric suction were carried out in the present work. The results show that increasing the lime and/or fly ash amount, the dry unit weight and the curing time increases the unconfined compressive strength, which grows linearly with the lime amount and exponentially with the reduction of porosity. The voids/lime ratio, defined as the ratio of the compacted mixture porosity to the volumetric lime content, adjusted by an exponent, proves to be an appropriate parameter for estimating the unconfined compressive strength of the soil-ash-lime mixtures studied. Furthermore, the existence of unique and distinct relationships controlling the unconfined compressive strength of the soil studied as a function of porosity, volumetric lime content and fly ash quantity at 28, 60 and 90 days of curing proved very efficient for dosage relationships. In addition, the statistical analysis of data from an experiment is of utmost importance. One methodology for analyzing such data is Design of Experiments, which is strongly supported by statistical concepts and designed to optimize the planning, implementation and analysis of an experiment. The analysis performed in this experiment is based on a full factorial design investigating all combinations of levels of the controllable factors: lime amount, fly ash quantity, porosity and curing time. The results show, from the analysis of variance, that all controllable factors chosen in the experiment and all interactions between them are significant. The Design of Experiments methodology was efficient in determining which of the factors involved are important for the phenomenon under study.
APA, Harvard, Vancouver, ISO, and other styles
22

Júnior, José César Cruz. "Modelo de razão de hedge ótima e percepção subjetiva de risco nos mercados futuros." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/11/11132/tde-05082009-075152/.

Full text
Abstract:
The objective of this study was to investigate the reasons why Brazilian live cattle and corn producers make relatively little use of futures markets as a price risk management tool. Two different approaches are presented. For the live cattle market, where the presence of hedgers appears to be greater, an optimal hedge ratio model alternative to the traditional minimum variance model was used. The alternative model uses a constant relative risk aversion utility function to model individual preferences. This approach is considered more realistic because it allows the absolute level of risk aversion to change with wealth. In addition, a downside risk measure was introduced and the assumptions of the traditional minimum variance model were relaxed. According to the results, when the possibility of investing in an asset outside the agricultural market and the presence of transaction costs are considered, the incentive to hedge is sharply reduced. The use of an alternative risk measure contributed to this reduction, which was more pronounced for less risk-averse individuals. This can be concluded by observing that the optimal hedge ratios obtained by maximizing individuals' expected utility were, for the most part, lower than those obtained by the traditional model. Moreover, in most cases the use of the alternative optimal hedge ratios proved more efficient than the traditional model, since it yielded higher return/risk ratios in the period selected for testing. For the corn market, a questionnaire was administered to 90 producers in southern and central-western Brazil. The questionnaire aimed to verify whether there are signs of overconfidence in prices among the corn producers interviewed. In addition, questions about their knowledge of the futures market at BM&FBOVESPA were also asked. Regarding the latter, most producers answered that they know about the futures market at the Brazilian exchange but do not use it, the main reason given being that they do not have enough information about futures markets. Related to this result, it was found that there is little incentive for producers to hedge the price of their production, since, for most respondents, the subjective price variances were significantly lower than the variances of historical prices in the spot and futures markets. This result allows us to conclude that overconfidence in prices can be considered an alternative explanation for the low use of futures markets as a price risk management tool. As general conclusions, actions aimed at reducing transaction costs in the futures market and at better publicizing the benefits of this important price risk reduction tool should be further explored by BM&FBOVESPA. In addition, promoting greater knowledge of how to trade in this market may also be a good strategy for getting more producers to trade in it.
This research aimed to investigate the significant underuse of futures markets as a risk management tool by Brazilian live cattle and corn producers. To this end, two different approaches were used. In the live cattle market, where there appears to be a higher participation of hedgers, an alternative hedge ratio model was used instead of the standard minimum variance model. The alternative model uses a constant relative risk aversion utility function to model individual preferences; this approach is considered more realistic, as it allows the absolute level of risk aversion to change with wealth. In addition, a downside risk measure was introduced and certain restrictive assumptions of the minimum variance model were relaxed. According to the results, when the possibility of investment in an alternative asset and transaction costs are considered, the incentive to hedge is dramatically reduced. The use of an alternative risk measure also proved important to this reduction, which was greater for less risk-averse individuals. This conclusion may be drawn after observing that the optimal hedge ratios obtained from expected utility maximization are, in most cases, lower than those obtained by the standard model. Moreover, in most cases the use of the alternative optimal hedge ratios provides higher return/risk ratios during the test period. For the corn market, a survey questionnaire was administered to ninety producers in southern and central-western Brazil in order to verify the presence of overconfidence in prices among corn producers. The survey also asked questions regarding their knowledge of futures markets at BM&FBOVESPA. Most respondents answered that while they know about futures markets at the Brazilian exchange, they do not trade on it because they do not have enough information about trading. The results also revealed that there is little incentive for producers to hedge their production in futures markets because, for most producers, subjective price variances are significantly lower than the variances of historical futures and spot prices. Given these results, one may conclude that the overconfidence effect in prices can be considered an alternative explanation for the low use of futures markets as a price risk management tool. Furthermore, actions that reduce transaction costs and publicize to producers the benefits of this important risk management tool should be more carefully explored by BM&FBOVESPA, and promoting knowledge of trading in futures markets may likewise be a successful strategy for the wider adoption of futures trading among corn and live cattle producers.
APA, Harvard, Vancouver, ISO, and other styles
23

Farias, Ana Ester. "Teste da hipótese do caminho aleatório no Brasil e nos Estados Unidos." Universidade Federal de Santa Maria, 2009. http://repositorio.ufsm.br/handle/1/4542.

Full text
Abstract:
The stock market has been the object of much research seeking to identify the presence of some degree of predictability in return series. Within this context, the Market Efficiency Theory was developed, divided into three forms: weak, semi-strong and strong efficiency. The random walk hypothesis was created to test, empirically, market efficiency in the weak form. Its acceptance or rejection has implications for whether future returns can somehow be predicted from past returns, and whether that predictability can be exploited to earn extraordinary returns. To test the random walk, specialists have developed various methods over the years; among these, variance ratio tests stand out, which were initially applied to developed markets and are nowadays also used in emerging markets. In the present research, in order to test the random walk hypothesis in an emerging market (Brazil) and in a developed market (United States), the following variance ratio tests were implemented: simple, multiple, rank-based and sign-based. The returns of the IBOVESPA, as a proxy for the Brazilian stock market, and of the S&P 500, for the North American market, were used, collected at daily and weekly frequencies over the period from January 3, 2000 to April 25, 2008. The results showed acceptance of the random walk hypothesis in most of the tests performed, pointing to a weak form of market efficiency.
The stock market has been the target of much research aiming to identify the presence of some degree of predictability in return series. Within this context, the Market Efficiency Theory was developed, divided into three forms: weak, semi-strong and strong efficiency. The random walk hypothesis was created to test market efficiency in its weak form empirically. Its acceptance or rejection has implications for whether it is possible to predict future returns in some way from past returns and to take advantage of this to earn extraordinary returns. In order to test the random walk hypothesis, researchers have created methods over the years, among which the variance ratio tests stand out; these were initially applied in developed markets and are currently also used in emerging markets. For the development of the present research, with the aim of testing the random walk hypothesis in an emerging market (Brazil) and in a developed market (United States), the following variance ratio tests were applied: simple, multiple, rank-based and sign-based. The returns of the IBOVESPA, as a proxy for the Brazilian stock market, and of the S&P 500, for the North American market, were used, collected daily and weekly in the period from January 3, 2000 to April 25, 2008. The results showed acceptance of the random walk hypothesis in most of the tests performed, pointing to a weak form of market efficiency.
APA, Harvard, Vancouver, ISO, and other styles
24

da, Costa Joel. "Online Non-linear Prediction of Financial Time Series Patterns." Master's thesis, Faculty of Science, 2020. http://hdl.handle.net/11427/32221.

Full text
Abstract:
We consider a mechanistic non-linear machine learning approach to learning signals in financial time series data. A modularised and decoupled algorithm framework is established and demonstrated on daily sampled closing time-series data for JSE equity markets. The input patterns are based on input data vectors of data windows preprocessed into a sequence of daily, weekly and monthly or quarterly sampled feature measurement changes (log feature fluctuations). The data processing is split into a batch-processed step, where features are learnt using a Stacked AutoEncoder (SAE) via unsupervised learning, followed by both batch and online supervised learning carried out on Feedforward Neural Networks (FNNs) using these features. The FNN output is a point prediction of measured time-series feature fluctuations (log-differenced data) in the future (ex-post). Weight initializations for these networks are implemented with restricted Boltzmann machine pretraining and variance-based initializations. The validity of the FNN backtest results is shown under a rigorous assessment of backtest overfitting using both Combinatorially Symmetrical Cross Validation and Probabilistic and Deflated Sharpe Ratios. The results are further used to develop a view on the phenomenology of financial markets and the value of complex historical data under unstable dynamics.
APA, Harvard, Vancouver, ISO, and other styles
25

Vaz, Sónia Melania Oliveira. "How efficient is the Portuguese Stock Market?" Master's thesis, Instituto Superior de Economia e Gestão, 2012. http://hdl.handle.net/10400.5/10329.

Full text
Abstract:
Master's in Finance
This dissertation tests the weak-form market efficiency hypothesis applied to six European market indexes (France, Germany, United Kingdom, Greece, Portugal and Spain) over the period from January 2007 to January 2012. To this end, we tested serial correlations and carried out the runs test, unit root tests and the variance ratio test. In addition, we analyzed whether it would be possible to forecast PSI-20 returns using data mining, more specifically the k-NN and Neural Network data mining algorithms. Our results show that in the period from January 2007 to September 2008 the benchmark indexes of France, Germany and Spain met most of the criteria for the weak-form market efficiency hypothesis. Our results further show that this situation subsequently holds for the indexes of all six countries considered, in the period from September 2008 to January 2012. Regarding the forecasting of PSI-20 returns, we designed a strategy based on the forecasts given by k-NN and Neural Networks and concluded that implementing it would yield considerably higher returns than those achieved through a simple buy-and-hold strategy, thus compromising the weak-form market efficiency hypothesis.
This dissertation reports the results of tests of weak-form market efficiency applied to six European market indexes (France, Germany, UK, Greece, Portugal and Spain) from January 2007 to January 2012. To this end we use a serial correlation test, a runs test, an augmented Dickey-Fuller test and the multiple variance ratio test. In addition, we also analyze whether it would be possible to forecast PSI-20 returns resorting to data mining, more specifically the k-NN and Neural Network algorithms. Our findings show that from January 2007 to September 2008 France, Germany and Spain meet most of the criteria for the weak-form market efficiency hypothesis, a situation that subsequently holds for all six European market indexes from September 2008 to January 2012. Regarding the forecasting of PSI-20 returns, we designed a strategy based on the k-NN and Neural Network forecasts and concluded that by implementing it we would obtain considerably higher returns than those achieved by a buy-and-hold strategy, which compromises weak-form market efficiency.
APA, Harvard, Vancouver, ISO, and other styles
26

Hoeltgebaum, Thiago. "Variable compression ratio engines." reponame:Repositório Institucional da UFSC, 2016. https://repositorio.ufsc.br/xmlui/handle/123456789/167873.

Full text
Abstract:
Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Mechanical Engineering, Florianópolis, 2016.
Variable compression ratio (VCR) engines have become an opportunity to comply with new fuel-consumption and emissions legislation, since they are believed to combine efficiency with high performance. The objective of this research is to investigate opportunities for future developments in the field of variable compression ratio engines. A brief summary of the work and its sections follows. Introduction: presents the VCR engine technology, its characteristics and the justification for the research, and discusses the objectives, the scope of the work and, briefly, the methodology adopted. Literature Review: presents the product development methodology known as the PRODIP model (BACK et al., 2008), whose stages and characteristics are discussed and criticised with respect to the intended work; mechanism design methodologies are also presented, focusing on the works of Yan (1999) and Tsai (2000), together with the methodology proposed by Murai (2013), which was developed at the UFSC Robotics Laboratory and has proven very important for the development of new mechanisms; finally, a patent search methodology, also developed at the UFSC Robotics Laboratory and consistent with the international patent offices, is presented, along with a brief explanation of the structure of a patent and the general characteristics of a patent search. Variable Compression Ratio Engines: the third chapter is devoted to a state-of-the-art survey of VCR engines. It first presents a classification of reconfigurable engines and places VCR engines within it, then reviews the literature (books and articles) for experimental tests, simulations and earlier classifications of this type of engine. The survey continues by analysing products launched on the market and the main companies behind this technology, and ends with the results of the patent search: 1163 patents were analysed, yielding 127 different VCR engine concepts. Based on this survey and on comparison with other authors, this work proposes a new classification of VCR engines into 7 major classes. The kinematic chains of all classes of VCR engines are analysed in order to investigate their structural and functional characteristics, and the reconfigurability of VCR engines is discussed. Development of VCR Engines: the fourth chapter defines the structural and functional requirements from the state-of-the-art survey and by comparison with the works of Freudenstein and Maki (1983) and Tsai (2000); these requirements are then used to enumerate and select kinematic chains with potential for developing variable compression ratio engines, and the potential for innovation of these engines is discussed. Case Studies: in this chapter, three of the potential kinematic chains defined in the previous chapter are studied in order to exemplify the development of new VCR engines according to the systematic approach of the UFSC Robotics Laboratory.

Abstract: The variable compression ratio (VCR) engine has become an opportunity to meet new fuel-consumption and emissions legislation. Researchers believe that the VCR engine can unite both efficiency and performance. This research aims to investigate the opportunity for further developments within the VCR field. To that end, a review of design methodology is provided. First, an overview of product development methodology is presented, focusing on the PRODIP model (BACK et al., 2008). Mechanism design methodologies such as those of Yan (1999) and Tsai (2000) are then discussed, and the methodology proposed by Murai (2013) is applied. In addition, a patent survey methodology is provided. A state-of-the-art survey analysed the information available in the literature, on the market and in the patent databases. The patent survey analysed 1163 patents, resulting in 127 different VCR engine designs. Based on that survey, and comparing it with several authors, this research proposes an enhanced classification of VCR engines containing 7 major classes. The kinematic chains from all classes of VCR engines are analysed in order to investigate their structural and functional characteristics, which are compared with the previous works of Freudenstein and Maki (1983) and Tsai (2000). This information is used to discuss the reconfigurability of VCR engines, to define the proper design requirements and to generate new potential kinematic chains for innovative VCR engine designs. Finally, three case studies are presented with the objective of exemplifying the development of novel VCR engines using the UFSC Robotics Lab systematic approach.
APA, Harvard, Vancouver, ISO, and other styles
27

Jalles, Diogo Oom de Sousa Tovar. "Weak-form efficiency of equity energy exchange traded funds." Master's thesis, Instituto Superior de Economia e Gestão, 2012. http://hdl.handle.net/10400.5/10865.

Full text
Abstract:
Master's in Finance
The main objective of this master's dissertation is to assess whether Equity Energy Exchange Traded Funds (ETFs) are weak-form efficient. For the period between 2008 and 2012 we selected all equity energy ETFs traded on the United States stock market with an inception date prior to 2008. The selected sample comprises 26 ETFs, and their daily historical prices were used to apply several tests: autocorrelation tests, runs tests, unit root tests allowing for structural breaks, panel unit root analysis and variance ratio tests. These tests allowed us to conclude that equity energy ETF price changes follow a random walk and that the weak-form efficiency hypothesis is not rejected.
The main purpose of this master's dissertation is to assess the weak-form efficiency of Equity Energy Exchange Traded Funds (ETFs). For the period 2008-2012 we selected all equity energy ETFs traded in the U.S. stock market with an inception date before 2008. The selected sample is composed of 26 ETFs, and we make use of the full daily historical data to apply various tests: autocorrelation tests, the runs test, unit root tests allowing for structural breaks, panel unit root analysis and variance ratio tests. These tests allow us to conclude that equity energy ETF price changes follow a random walk, so the weak-form efficiency hypothesis is not rejected.
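To make the runs test referred to above concrete, the sketch below computes the standard Wald-Wolfowitz runs z-statistic on the signs of returns; it is an assumed, illustrative implementation rather than the authors' code.

```python
# Hedged sketch: Wald-Wolfowitz runs test on the signs of daily returns.
# Illustrative only; names and the synthetic data are assumptions.
import numpy as np

def runs_test(returns):
    """Return the z-statistic comparing the observed number of runs to its expectation."""
    signs = np.sign(np.asarray(returns, dtype=float))
    signs = signs[signs != 0]                    # drop zero returns
    n_pos = np.sum(signs > 0)
    n_neg = np.sum(signs < 0)
    runs = 1 + np.sum(signs[1:] != signs[:-1])   # number of sign changes plus one
    n = n_pos + n_neg
    mu = 2.0 * n_pos * n_neg / n + 1.0
    var = (mu - 1.0) * (mu - 2.0) / (n - 1.0)
    return (runs - mu) / np.sqrt(var)

# Toy usage with simulated returns (demonstration only)
rng = np.random.default_rng(2)
print(runs_test(rng.normal(0, 0.01, 1500)))
```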
APA, Harvard, Vancouver, ISO, and other styles
28

Kolks, Giacomo, and Jürgen Weber. "Electro-hydrostatic compact drives with variable transmission ratio." Technische Universität Dresden, 2020. https://tud.qucosa.de/id/qucosa%3A71209.

Full text
Abstract:
Electro-hydrostatic compact drives are an emerging technology within a range of industrially available translational drive solutions, combining the specific advantages of hydraulic and electromechanical screw drives. Compared to electromechanical screw drives, hydrostatic drives can vary their transmission ratio with comparatively little effort, giving them the key advantage of downsizing the electric drive components for a given load cycle. This paper provides a guideline on how to calculate the downsizing potential of electric motors and inverters arising from a variable transmission ratio, based on the load regime of a given application. Furthermore, a comprehensive systematisation of the actual switching process is described for systems that are switched by means of switching valves. The presented methodology is applied to demonstrator systems in order to validate the general findings.
APA, Harvard, Vancouver, ISO, and other styles
29

Lesser, Elizabeth Rochelle. "A New Right Tailed Test of the Ratio of Variances." UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/719.

Full text
Abstract:
It is important to be able to compare variances efficiently and accurately regardless of the parent populations. This study proposes a new right-tailed test for the ratio of two variances using an Edgeworth expansion. To study the Type I error rate and power, simulations were performed on the new test with various combinations of symmetric and skewed distributions. The new test is found to have better-controlled Type I error rates than the existing tests while retaining sufficient power. The newly derived test therefore provides a good robust alternative to the existing methods.
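The kind of simulation described here can be sketched as follows: the classical right-tailed F test for equal variances is applied to samples drawn from a skewed distribution with equal variances, and its empirical Type I error rate is compared with the nominal level. The Edgeworth-corrected statistic itself is not reproduced; this is only an assumed illustration of the study design.

```python
# Hedged sketch: empirical Type I error of the classical right-tailed F test for
# equal variances when the parent population is skewed (lognormal). Illustrative only.
import numpy as np
from scipy.stats import f

def empirical_type_i_error(n1=30, n2=30, alpha=0.05, reps=20000, seed=3):
    rng = np.random.default_rng(seed)
    crit = f.ppf(1 - alpha, n1 - 1, n2 - 1)      # right-tailed critical value
    rejections = 0
    for _ in range(reps):
        x = rng.lognormal(0.0, 1.0, n1)          # both samples share the same variance
        y = rng.lognormal(0.0, 1.0, n2)
        if x.var(ddof=1) / y.var(ddof=1) > crit:
            rejections += 1
    return rejections / reps

# Under normality this would be close to alpha; skewness typically inflates it.
print(empirical_type_i_error())
```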
APA, Harvard, Vancouver, ISO, and other styles
30

Pradat, Yannick. "Retraite et risque financier." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED022/document.

Full text
Abstract:
The first chapter examines the long-run statistical characteristics of financial returns in France and the USA. The properties of the different assets show that, over the long run, equities carry a noticeably lower risk. Moreover, the mean-reverting properties of equities justify their use in a life-cycle strategy as the "default option" of retirement savings plans. Chapter two provides an explanation for the debate on the market efficiency hypothesis. The cause of the debate is often attributed to small sample sizes and to the low power of the dedicated statistical tests. To get around this problem, we use the approach developed by Campbell and Viceira (2005), who use a VAR method to highlight the existence of mean reversion in the prices of risky assets. The third chapter evaluates the speed of convergence of stock prices. A classical way to characterise the speed of mean reversion is the "half-life". Comparing the stock indexes of four developed countries (United States, United Kingdom, France and Japan) over the period 1950-2014, we establish a significant speed of convergence, with a half-life between 4.0 and 5.8 years. The final chapter presents the results of a model designed to study the interactions between demography and pension schemes. In order to study the risks inherent in using capital income to finance pensions, we use a Trending OU process instead of a classical GBM to model returns. For a risk-averse saver, the market could compete with pay-as-you-go schemes.
Chapter one examines the long-run statistical characteristics of financial returns in France and the USA for selected assets. This study clearly shows that the return distributions diverge from the Gaussian distribution over long holding periods. We then analyse the consequences of the non-Gaussian nature of stock returns for default-option retirement plans. Chapter two provides a reasonable explanation for the strong debate on the Efficient Market Hypothesis. The cause of the debate is often attributed to small sample sizes combined with statistical tests for mean reversion that lack power. To bypass this problem, we use the approach developed by Campbell and Viceira (2005), who established a vector autoregressive (VAR) methodology to measure the mean reversion of asset returns. The third chapter evaluates the speed of convergence of stock prices. A convenient way to characterise the speed of mean reversion is the half-life. Comparing the stock indexes of four developed countries (US, UK, France and Japan) over the period 1950-2014, we establish significant mean reversion, with a half-life lying between 4.0 and 5.8 years. The final chapter provides results from a model built to study the linked impacts of demography and the economy on the French pension scheme. In order to reveal the risks contained in pension fund investment, we use a Trending Ornstein-Uhlenbeck process instead of the typical geometric Brownian motion for modelling stock returns. We find that funded scheme returns, net of management fees, are slightly lower than the PAYG internal rate of return.
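As a worked illustration of the half-life measure used in this chapter, the sketch below fits an AR(1) coefficient to a simulated series and converts it to a half-life via ln(0.5)/ln(phi); the data and function names are assumptions, not the author's.

```python
# Hedged sketch: half-life of mean reversion from an AR(1) fit,
# half_life = ln(0.5) / ln(phi) for 0 < phi < 1. Illustrative only.
import numpy as np

def half_life(series):
    """Estimate the AR(1) coefficient by OLS and convert it to a half-life."""
    y = np.asarray(series, dtype=float)
    x, y_next = y[:-1], y[1:]
    x_c, y_c = x - x.mean(), y_next - y_next.mean()
    phi = np.dot(x_c, y_c) / np.dot(x_c, x_c)    # OLS slope of y_t on y_{t-1}
    return np.log(0.5) / np.log(phi)

# Toy usage: a mean-reverting (OU-like) simulated series, demonstration only
rng = np.random.default_rng(4)
z = np.zeros(3000)
for t in range(1, len(z)):
    z[t] = 0.95 * z[t - 1] + rng.normal(0, 1)
print(half_life(z))   # expected near ln(0.5)/ln(0.95), about 13.5 periods
```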
APA, Harvard, Vancouver, ISO, and other styles
31

Higgs, Helen. "Price and volatility relationships in the Australian electricity market." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16404/.

Full text
Abstract:
This thesis presents a collection of papers that have been published, accepted or submitted for publication. They assess price, volatility and market relationships in the five regional electricity markets in the Australian National Electricity Market (NEM): namely, New South Wales (NSW), Queensland (QLD), South Australia (SA), the Snowy Mountains Hydroelectric Scheme (SNO) and Victoria (VIC). The transmission networks that link regional systems via interconnectors across the eastern states have played an important role in the connection of the regional markets into an efficient national electricity market. During peak periods, the interconnectors become congested and the NEM separates into its regions, promoting price differences across the market and exacerbating reliability problems in regional utilities. This thesis is motivated in part by the fact that assessment of these prices and volatility within and between regional markets allows for better forecasts by electricity producers, transmitters and retailers and the efficient distribution of energy on a national level. The first two papers explore whether the lagged price and volatility information flows of the connected spot electricity markets can be used to forecast the pricing behaviour of individual markets. A multivariate generalised autoregressive conditional heteroskedasticity (MGARCH) model is used to identify the source and magnitude of price and volatility spillovers within (intra-relationship) and across (inter-relationship) the various spot markets. The results show that prices in one market can be explained by their own price lagged one period and are independent of the lagged spot prices of any other markets when daily data are employed. This implies that the regional spot electricity markets are not fully integrated. However, there is also evidence of a large number of significant own-volatility and cross-volatility spillovers in all five markets, indicating that shocks in some markets will affect price volatility in others. Similar conclusions are obtained when the daily data are disaggregated into peak and off-peak periods, suggesting that the spot electricity markets are still rather isolated. These results inspired the research underlying the third paper of the thesis on modelling the dynamics of spot electricity prices in each regional market. A family of generalised autoregressive conditional heteroskedasticity (GARCH), RiskMetrics, normal Asymmetric Power ARCH (APARCH), Student APARCH and skewed Student APARCH models is used to model the time-varying variance in prices, with news arrival as proxied by the contemporaneous volume of demand, time-of-day, day-of-week and month-of-year effects included as exogenous explanatory variables. The important contribution of this paper lies in the use of the latter two methodologies, namely the Student APARCH and skewed Student APARCH, which take account of the skewness and fat-tailed characteristics of the electricity spot price series. The results indicate significant innovation spillovers (ARCH effects) and volatility spillovers (GARCH effects) in the conditional standard deviation equation, even with market and calendar effects included. Intraday prices also exhibit significant asymmetric responses of volatility to the flow of information (that is, positive shocks or good news are associated with higher volatility than negative shocks or bad news).
The fourth research paper attempts to capture salient features of price hikes or spikes in wholesale electricity markets. The results show that electricity prices exhibit stronger mean reversion after a price spike than in normal periods, suggesting that the electricity price quickly returns from an extreme position (such as a price spike) to equilibrium; that is, extreme price spikes are short-lived. Mean reversion can be measured in a separate regime from the normal regime, using Markov transition probabilities to identify the different regimes. The fifth and final paper investigates whether interstate/regional trade has enhanced the efficiency of each spot electricity market. Multiple variance ratio tests are used to determine whether Australian spot electricity markets follow a random walk, that is, whether they are informationally efficient. The results indicate that, despite the presence of a national market, only the Victorian market during the off-peak period is informationally (or market) efficient and follows a random walk. This thesis makes a significant contribution in estimating the volatility and the efficiency of wholesale electricity prices by employing four advanced time series techniques that have not been previously explored in the Australian context. An understanding of the modelling and forecastability of electricity spot price volatility across and within the Australian spot markets is vital for generators, distributors and market regulators. Such an understanding influences the pricing of derivative contracts traded on the electricity markets and enables market participants to better manage their financial risks.
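For readers unfamiliar with the conditional-variance models in this family, the following sketch shows the recursion of a plain GARCH(1,1), the simplest member of the family used in the thesis; the APARCH extensions and exogenous calendar effects are not reproduced, and the parameter values are assumptions chosen for illustration.

```python
# Hedged sketch: conditional variance recursion of a GARCH(1,1) model,
# sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]. Illustrative only.
import numpy as np

def garch11_filter(returns, omega, alpha, beta):
    """Filter a return series through a GARCH(1,1) recursion with fixed parameters."""
    r = np.asarray(returns, dtype=float)
    eps = r - r.mean()
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()                       # initialise at the unconditional variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Toy usage with assumed parameters (omega, alpha, beta are for demonstration only)
rng = np.random.default_rng(5)
vols = garch11_filter(rng.normal(0, 1, 500), omega=0.05, alpha=0.08, beta=0.90)
print(vols[:5])
```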
APA, Harvard, Vancouver, ISO, and other styles
32

Sousa, José Raimundo Pereira. "Análise de estratégias long-short trading com rácios de variâncias." Master's thesis, Instituto Superior de Economia e Gestão, 2013. http://hdl.handle.net/10400.5/6341.

Full text
Abstract:
Master's in Finance
In this work, variance ratio tests are applied to spreads of equity indexes. The spreads were constructed from the S&P 500 and a number of other world equity market indexes. In order to assess market efficiency, trading strategies based on past price information were used to generate investment decisions, applied so as to exploit the mean reversion of the spreads. The random walk hypothesis for the spreads is rejected by the variance ratio tests, and the success of the strategies depends on the parameters used. The strategies' performance statistics produce very disparate results, which does not allow a conclusion to be drawn about market efficiency.
In this paper we apply variance ratio tests to equity index spreads. The spreads used were constructed from the S&P 500 and a number of other global equity market indexes. In order to assess the efficiency of the markets, we used a number of trading strategies based on past price information to generate investment decisions. The trading rules are applied in order to exploit the mean reversion of the spreads. The random walk hypothesis for the spreads is rejected by the variance ratio tests, and the success of the strategies depends on the parameters used. The performance statistics of the trading rules produce highly disparate results, which does not allow us to draw a conclusion about market efficiency.
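A minimal, assumed sketch of the kind of mean-reversion rule such a study might apply to a spread: enter when the spread's z-score against a rolling mean is extreme and exit when it reverts. The window length and thresholds are illustrative assumptions, not the author's parameters.

```python
# Hedged sketch: naive z-score mean-reversion rule on a spread series.
# Illustrative only; window and thresholds are assumptions.
import numpy as np

def zscore_positions(spread, window=60, entry=2.0, exit=0.5):
    """Return -1/0/+1 positions: short rich spreads, long cheap ones, flat near the mean."""
    s = np.asarray(spread, dtype=float)
    pos = np.zeros_like(s)
    for t in range(window, len(s)):
        hist = s[t - window:t]
        z = (s[t] - hist.mean()) / hist.std(ddof=1)
        if z > entry:
            pos[t] = -1.0          # spread unusually high: bet on reversion down
        elif z < -entry:
            pos[t] = 1.0           # spread unusually low: bet on reversion up
        elif abs(z) < exit:
            pos[t] = 0.0           # close out near the rolling mean
        else:
            pos[t] = pos[t - 1]    # otherwise hold the previous position
    return pos

# Toy usage on a simulated mean-reverting spread (demonstration only)
rng = np.random.default_rng(6)
spread = np.zeros(500)
for t in range(1, 500):
    spread[t] = 0.97 * spread[t - 1] + rng.normal(0, 1)
print(zscore_positions(spread)[-10:])
```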
APA, Harvard, Vancouver, ISO, and other styles
33

Alves, Gonçalo Filipe Rodrigues. "Testing the random walk hypothesis with technical trading rules." Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/10939.

Full text
Abstract:
Master's in Finance
In this work the random walk hypothesis is tested for the Portuguese stock market by examining the eighteen stocks and the PSI-20 index, using daily and monthly quotes over the period 1999-2015. The Augmented Dickey-Fuller (ADF) test, the automatic variance ratio test, and the individual and multiple variance ratio tests proposed by Lo and MacKinlay and by Chow and Denning, respectively, were used. The various tests applied to the eighteen stocks and the PSI-20 index produced mixed evidence against the random walk hypothesis. The Augmented Dickey-Fuller (ADF) test rejected the unit root hypothesis for all stocks and also for the PSI-20 index, thus supporting a random walk. The variance ratio tests, on the other hand, reject the tested hypothesis for some of the stocks considered as well as for the PSI-20 index, although that number of stocks tends to decrease when monthly quotes are used.
This paper investigates the efficiency of the eighteen stocks that constitute the main Portuguese stock index, the PSI-20 of the Lisbon Stock Exchange. The investigation uses daily and monthly data from January 1999 to May 2015 and applies the Augmented Dickey-Fuller (ADF) test, the automatic variance ratio test of Choi, and the individual and multiple variance ratio tests of Lo and MacKinlay and of Chow and Denning to the eighteen stocks and the PSI-20 index. The Augmented Dickey-Fuller (ADF) test examines the null hypothesis that a series has a unit root, while the variance ratio tests examine the random walk hypothesis. Based on these tests, the results provide mixed evidence against the random walk hypothesis: the unit root tests do not reject the efficient market hypothesis for the entire sample, while the variance ratio tests do, although the number of rejections tends to decrease with monthly data.
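To illustrate the multiple variance ratio idea attributed to Chow and Denning, the sketch below takes the maximum absolute Lo-MacKinlay z-statistic over several horizons; the joint critical value would come from the studentized maximum modulus distribution and is only noted in a comment. This is an assumed illustration, not the dissertation's code.

```python
# Hedged sketch: Chow-Denning style multiple variance ratio statistic,
# CD = max_i |z(q_i)|, where z(q) is the Lo-MacKinlay homoskedastic statistic.
# The joint critical value comes from the studentized maximum modulus (SMM)
# distribution and is not computed here. Illustrative only.
import numpy as np

def lo_mackinlay_z(log_prices, q):
    p = np.asarray(log_prices, dtype=float)
    r1 = np.diff(p)
    rq = p[q:] - p[:-q]
    t = len(r1)
    mu = r1.mean()
    var1 = np.sum((r1 - mu) ** 2) / t
    varq = np.sum((rq - q * mu) ** 2) / (t * q)
    vr = varq / var1
    se = np.sqrt(2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * t))
    return (vr - 1.0) / se

def chow_denning_stat(log_prices, horizons=(2, 4, 8, 16)):
    """Maximum absolute individual z-statistic over the chosen set of horizons."""
    return max(abs(lo_mackinlay_z(log_prices, q)) for q in horizons)

# Toy usage on a simulated random walk (demonstration only)
rng = np.random.default_rng(7)
prices = np.cumsum(rng.normal(0, 0.01, 3000))
print(chow_denning_stat(prices))
```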
APA, Harvard, Vancouver, ISO, and other styles
34

Félix, João Pedro Santos Silva. "A gestão de carteira de acções aplicada ao mercado francês." Master's thesis, Instituto Superior de Economia e Gestão, 2011. http://hdl.handle.net/10400.5/10263.

Full text
Abstract:
Master's in Finance
The main objective of this study is to evaluate the possible advantages of an actively managed portfolio over a passively managed one, based on the CAC-40 equity index. The active management relies on 2 models: the Markowitz model (optimal portfolio) and the Minimum Variance model. The passive management is based on a portfolio composed of all stocks in equal proportions (naïve portfolio). In the active management the weights of the assets in each portfolio were revised monthly, quarterly, semi-annually and annually, taking the evolution of the market into account, with data windows of 1 and 2 years used to determine the weight to invest in each asset. The second objective was to analyse the impact of financial intermediation costs on the performance of the portfolios computed above. The stocks used are those that remained listed between January 1997 and December 2006, corresponding to 31 CAC-40 stocks. The conclusions are that at 1 month the naïve portfolio is the best investment option and that at 3 months both this portfolio and the market portfolio are good options, while at 6 and 12 months there appear to be no differences between the actively and passively managed portfolios. Financial intermediation costs have a negative impact on the returns and Sharpe ratios of the various portfolios and should be considered when investing in stocks.
The main goal of this thesis is to evaluate and compare the advantages of an actively managed portfolio versus a passively managed portfolio, both composed of CAC-40 stocks. The active management is based on 2 models: Markowitz Portfolio Theory (optimized portfolio) and the Minimum Variance Portfolio. The passive management portfolio, in turn, is composed of all stocks with the same weight (naïve portfolio). In the actively managed portfolios the stock weights are rebalanced monthly, quarterly, semi-annually and annually according to market behaviour, using data windows of 1 and 2 years to determine the weight of every stock. The second goal of this thesis is to evaluate the impact of management costs on the performance of the 3 portfolios (optimized, minimum variance and naïve). The sample used in this work consists of the 31 stocks that remained in the French CAC-40 index between January 1, 1997 and December 31, 2006. The conclusions show that passive management is the best option for monthly and quarterly investment, while for semi-annual and annual investment there is no difference between the 3 portfolios. Management costs have a negative impact on all portfolios' returns and Sharpe ratios and should be considered when investing in stocks, especially when the manager trades frequently, as in the minimum variance portfolio.
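As a concrete illustration of the minimum variance portfolio mentioned above, the unconstrained closed-form weights w = inv(Sigma) 1 / (1' inv(Sigma) 1) can be computed from a sample covariance matrix. The code below is an assumed sketch (no short-selling constraints, synthetic data), not the author's implementation.

```python
# Hedged sketch: unconstrained global minimum variance portfolio weights,
# w = inv(Sigma) @ 1 / (1' inv(Sigma) 1). Illustrative only; data are synthetic.
import numpy as np

def min_variance_weights(returns):
    """Compute minimum variance weights from a T x N matrix of asset returns."""
    r = np.asarray(returns, dtype=float)
    sigma = np.cov(r, rowvar=False)              # N x N sample covariance matrix
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)             # Sigma^{-1} 1 without explicit inversion
    return w / w.sum()

# Toy usage with 5 synthetic assets over 500 days (demonstration only)
rng = np.random.default_rng(8)
rets = rng.normal(0.0003, 0.01, size=(500, 5))
w = min_variance_weights(rets)
print(w, w.sum())
```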
APA, Harvard, Vancouver, ISO, and other styles
35

Barber, J. R. "Variable-compression-ratio pistons for high power output diesel engines." Thesis, Brunel University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Aviram, David. "The thermal properties of a variable aspect ratio cavity wall." Thesis, Kingston University, 2000. http://eprints.kingston.ac.uk/20643/.

Full text
Abstract:
An experimental investigation of the thermal properties of a variable cavity wall has been conducted with the aid of a Guarded Hot Box (GHB). The main objective of the investigation was to determine the thermal trends that exist in such a wall at different aspect ratios and internal configurations. In the course of this research effort, attention initially focused on the suitability of the GHB as a tool for measuring building components of low thermal transmittance. Following the initial evaluation, which included computational modelling of the GHB, a series of modifications were employed, including a series of baffle plates in the Guard box area. Experimental trials have shown that these modifications culminated in a reduced thermal gradient distribution within the box and along the test wall. As a result, the test wall was subjected to a more uniform heat flux and lower peripheral heat loss. A variable cavity wall measuring 1.2m by 1.2m and 0.41m deep was the main focus of this study. By means of a moveable brick leaf, the aspect ratio of the cavity wall could be remotely altered during the experiment, thereby allowing immediate comparison of thermal trends without the errors that are associated with building and testing a series of individual walls of different geometric proportions. In particular, this set-up enabled an accurate comparative analysis of cavity aspect ratios over a range of 15 to 30. Laser Doppler Anemometry (LDA) and thermal measurement on the four surfaces of the cavity wall leaves were the prime means for collating experimental data. Extensive computational modelling complemented the research, providing important insights both prior to and following the experimental stage. The use of Computational Fluid Dynamics (CFD), while not intended for precise solutions to models of the GHB and cavity wall, was nevertheless instrumental in establishing trends and expanding the experimental range, once corroboration of experimental results had been achieved. The experimental and computational results show that with successive cavity closure an optimum aspect ratio is reached, where thermal resistance peaks and the velocity of the convective flow is minimal. At this aspect ratio, the flow regime was found to be conductive. The main implication of this result is that decreasing the aspect ratio beyond this point, by widening the cavity, will result in increasing heat losses due to the circulation of convective currents in the cavity. Thus, it was concluded that when convection diminishes, the thermal resistance of the air cavity rises. Further computational and experimental work on the same wall with an internal partition corroborated the trends found during the clear cavity experiment. It was found that a centrally placed vertical partition will double the thermal resistance of the wall. Furthermore, the thermal resistance of the partition was found to equal that of one partitioned cavity, raising the possibility of eliminating cavities from wall construction. The effect of mortar joints upon cavity walls, at various aspect ratios, was also investigated. Results show that a vertically sinusoidal flow pattern exists in such cavities due to the thermal bridging effect of the mortar joints. The results of this study were used for several recommendations, which deal both with the design of cavity walls and with Guarded Hot Box design and operation.
APA, Harvard, Vancouver, ISO, and other styles
37

Mbou, Sob Ulrich Armel. "Calibration and imaging with variable radio sources." Thesis, Rhodes University, 2017. http://hdl.handle.net/10962/37977.

Full text
Abstract:
Calibration of radio interferometric data is one of the most important steps required to produce high dynamic range radio maps with high fidelity. However, naive calibration (inaccurate knowledge of the sky and instruments) leads to the formation of calibration artefacts: the generation of spurious sources and deformations in the structure of extended sources. A particular class of calibration artefacts, called ghost sources, which results from calibration with incomplete sky models, has been extensively studied by Grobler et al. (2014, 2016) and Wijnholds et al. (2016). They developed a framework which can be used to predict the fluxes and positions of ghost sources. This work uses the approach initiated by these authors to study the calibration artefacts and ghost sources that are produced when variable sources are not considered in sky models during calibration. This work investigates both long-term and short-term variability and uses the root mean square (rms) and power spectrum as metrics to evaluate the "quality" of the residual visibilities obtained through calibration. We show that the overestimation and underestimation of source flux density during calibration produce similar but symmetrically opposite results. We show that calibration artefacts from sky model errors are not normally distributed. This prevents them from being removed by employing advanced techniques, such as stacking. The power spectra measured from the residuals with a variable source were significantly higher than those from residuals without a variable source. This implies that advanced calibration techniques and sky model completeness will be required for studies such as probing the Epoch of Reionization, where we seek to detect faint signals below thermal noise.
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, Michael Henry. "Vehicle powertrain modeling and ratio optimization for a continuously variable transmission." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/17801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wilde, Benjamin R. "Dynamics of variable density ratio reacting jets in unsteady, vitiated crossflow." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53040.

Full text
Abstract:
Jet in crossflow (JICF) configurations are often used for secondary fuel injection in staged-fuel combustion systems. The high temperature, vitiated crossflow in these systems is inherently unsteady and characterized by random, turbulent fluctuations and coherent, acoustic oscillations. This thesis presents the results of an experimental investigation into the dynamics of non-reacting and reacting jets injected into unsteady, vitiated crossflow. The flow structure and flame stabilization of jets with different momentum flux and density ratios relative to the crossflow are characterized using simultaneous time-resolved stereoscopic particle image velocimetry (SPIV) synchronized with OH planar laser induced fluorescence (PLIF). A modified trajectory scaling law is developed to account for the influence of near-field heat release on the jet trajectory. The second part of this work focuses on the response of a JICF to crossflow forcing. Acoustic drivers are used to excite natural resonances of the facility, which are characterized using the two-microphone method. Spectral analysis of SPIV results shows that, while the jet response to crossflow velocity fluctuations is often negligible, the fluctuating crossflow pressure induces a significant fluctuating jet exit velocity, which leads to periodic jet flapping. The flame response to crossflow forcing is studied using flame edge tracking. An analytical model is developed that predicts the dependence of the jet injector impedance upon important JICF parameters. In the final part of this work, vortex tracking and Mie scattering flow visualization are used to investigate the effect of near-field heat release on the shear layer dynamics. A phenomenological model is developed to explain the effect of combustion on the shear layer stability of density stratified, reacting JICF. The results of this study demonstrate the important effects of near-field heat release and crossflow acoustics on the dynamics of reacting JICF.
APA, Harvard, Vancouver, ISO, and other styles
40

Sousa, Júnior Gabriel Faria de. "Active versus passive management : the case of BOVESPA." Master's thesis, Instituto Superior de Economia e Gestão, 2016. http://hdl.handle.net/10400.5/11647.

Full text
Abstract:
Master's in Finance
The main objective of this work is to analyse some of the models underlying active and passive portfolio management and their impact on the choice of a portfolio composed of stocks included in the BOVESPA index, Brazil's largest stock market. Passive management is based on a portfolio that aims to replicate the behaviour of the BOVESPA index, using the index's historical prices, and on the naïve (1/N) method, in which the portfolio includes all assets in the index in equal proportions. Active management is based on the Markowitz method, known as the mean variance model, which aims to maximise return for a given level of risk, or to minimise risk for a given expected return. The minimum variance method, which minimises risk regardless of return, is also used. In this approach the weights to invest in each asset are revised monthly, taking the evolution of the market into account. Another model used is an adjusted mean variance method in which the optimal weights of the first period are kept for the remaining data windows. Data windows of 1 and 2 years are considered to determine the weights. An investment horizon of 10 years is considered, from January 2005 to December 2014. Based on the results, the mean variance portfolio should be chosen, since it presents the best results.
The main purpose of this paper is to analyze some models underlying active and passive portfolio management and their impact on the choice of a portfolio composed of stocks included in the BOVESPA Index, Brazil's biggest stock market. The passive management approach is based on the historical prices of the BOVESPA Index, which replicates the behavior of the market, and on the naïve (1/N) method, in which the portfolio includes all the stocks in the index with the same proportions. Active management is based on the Markowitz model, also known as the mean variance model, whose objective is to maximize the return given a set risk level or to minimize the risk given an expected return. The minimum variance model, whose goal is to minimize risk independently of return, is also used. In these approaches the weights of each asset in the portfolio are revised monthly, based on the evolution of the market. Another model used is an adjusted mean variance method in which the first-period optimal weights are maintained for the remaining data windows. In order for these to be determined, data windows of 1 and 2 years were used. We consider a 10-year investment horizon, from January 2005 to December 2014. Based on the results, we can affirm that the mean variance portfolio should be chosen, as it performed better both in terms of returns and, especially, in terms of Sharpe ratio when compared with the other two portfolios.
APA, Harvard, Vancouver, ISO, and other styles
41

TEIXEIRA, RENATO NUNES. "INTERNAL COMBUSTION ENGINES WITH VARIABLE COMPRESSION RATIO: A THEORETICAL AND EXPERIMENTAL ANALYSIS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1992. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19099@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
A theoretical and experimental study of internal combustion engines operating with variable compression ratio is carried out. A theoretical analysis is made of a particular mechanism that allows the compression ratio to be varied, using a simulation program for spark-ignition engines. In the present work the simulation model was improved with the inclusion of knock prediction, hydrocarbon emission prediction and friction power calculation, as well as the inclusion of the variable compression ratio mechanism, among other changes. Experimental work was also carried out, with the objective of validating the results of the theoretical model and quantifying the benefits provided by the mechanism in question; for that, a fuel research engine (CFR engine) was used. A comparison of the theoretical and experimental results obtained in the present work with those of other researchers is also presented.
The present work is concerned with a theoretical and experimental study of variable compression ratio spark ignition internal combustion engines. A theoretical analysis of the engine, operating with a mechanism that allows for variable compression ratio, is carried out using a simulation program. In the present work the simulation model was updated with the inclusion of friction, knocking and hydrocarbon emission models, among other things. Experimental work was also carried out with a CFR engine. The objective was twofold: to validate the results of the theoretical model and to assess the benefits of running an engine with variable compression ratio. A comparison is also made between the results of the present work and those from other authors.
APA, Harvard, Vancouver, ISO, and other styles
42

Ingvast, Johan. "Quadruped robot control and variable leg transmissions." Doctoral thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Blondeau, Julie E. "Development and testing of a variable aspect ratio wing using pneumatic telescopic spars." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1761.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Dept. of Aerospace Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
44

Lechesa, Wahau Simon. "A variable threshold for an energy detector using GNU radio." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29862.

Full text
Abstract:
Spectrum is a natural resource and should be treated as such. It has dual-use applications that range from short-distance communication links such as Bluetooth to health, power systems, transport, smart city applications and space communications and exploration. Next Generation Networks (NGNs) are designed to connect millions of devices seamlessly and with high throughput rates in these and other sectors, so spectrum has to be utilized efficiently. Cognitive radio communications serve to improve the use of dwindling spectrum. Spectrum sensing is the first and most critical technology in cognitive radio, as it determines the radio parameters. Energy Detection (ED) is a spectrum sensing technique that has low computational and operational complexity, is relatively fast compared with other spectrum sensing techniques, and requires no knowledge of the primary user's transmit signal properties such as modulation or error correction schemes. In its classical form, ED compares the received signal energy with a fixed detection threshold estimated from an expected noise level. In practice, however, noise varies randomly due to thermal variations, non-uniform movement of electrons, imperfections of semiconductor materials and external noise sources, among other causes. This creates a noise uncertainty phenomenon which negatively affects the fixed threshold approach used in classical ED. This dissertation presents the development of an out-of-tree module for a variable threshold energy detector that uses the estimated noise power at each sample point. GNU Radio software and Ettus Universal Software Radio Peripheral (USRP) hardware were used to simulate the performance of the proposed variable threshold energy detector, and the Neyman-Pearson theory was adopted in deriving it. The variable threshold energy detector successfully sensed the presence of a primary user signal in 1.25% less spectrum sensing time than the constant threshold detector. An ROC curve also showed that the proposed variable threshold energy detector performs better in general than the constant threshold energy detector at low signal-to-noise ratio levels.
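For orientation, here is a minimal sketch of a Neyman-Pearson style energy detector whose threshold is set from an estimated noise variance for a target false alarm probability. Real Gaussian noise and a chi-square null are assumptions of this illustration, not details taken from the dissertation, and the names are hypothetical.

```python
# Hedged sketch: energy detection with a threshold set from the estimated noise
# variance for a target probability of false alarm. Assumes N real Gaussian
# noise samples, so the normalised energy is chi-square with N degrees of freedom.
import numpy as np
from scipy.stats import chi2

def energy_detector(samples, noise_var_est, p_fa=0.01):
    """Return (energy, threshold, decision) for one sensing window."""
    x = np.asarray(samples, dtype=float)
    n = len(x)
    energy = np.sum(x ** 2)
    threshold = noise_var_est * chi2.ppf(1.0 - p_fa, df=n)   # scales with noise variance
    return energy, threshold, energy > threshold

# Toy usage: noise-only window versus noise plus a weak tone (demonstration only)
rng = np.random.default_rng(9)
n, sigma2 = 1000, 1.0
noise_only = rng.normal(0, np.sqrt(sigma2), n)
with_signal = noise_only + 0.7 * np.sin(2 * np.pi * 0.05 * np.arange(n))
print(energy_detector(noise_only, sigma2)[2], energy_detector(with_signal, sigma2)[2])
```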
APA, Harvard, Vancouver, ISO, and other styles
45

Ngethe, Nixon Thuo. "An adaptive threshold energy detection technique with noise variance estimation for cognitive radio sensor networks." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/20103.

Full text
Abstract:
The paradigm of wireless sensor networks (WSNs) has gained a lot of popularity in recent years due to the proliferation of wireless devices. This is evident as WSNs find numerous application areas in fields such as agriculture, infrastructure monitoring, transport, and security surveillance. Traditionally, most deployments of WSNs operate in the unlicensed industrial, scientific and medical (ISM) band, and more specifically the globally available 2.4 GHz frequency band. This band is shared with several other wireless technologies such as Bluetooth, Wi-Fi, near field communication and other proprietary technologies, thus leading to overcrowding and interference problems. The concept of dynamic spectrum access alongside cognitive radio technology can mitigate the coexistence issues by allowing WSNs to dynamically access new spectrum opportunities. Furthermore, cognitive radio technology addresses some of the inherent constraints of WSNs, thus introducing a myriad of benefits. This justifies the emergence of cognitive radio sensor networks (CRSNs). The selection of a spectrum sensing technique plays a vital role in the design and implementation of a CRSN. This dissertation sifts through the spectrum sensing techniques proposed in the literature, investigating their suitability for CRSN applications. We make amendments to the conventional energy detector, particularly to the threshold selection technique, and propose an adaptive threshold energy detection model with noise variance estimation for implementation in CRSN systems. Experimental work on our adaptive threshold technique, based on the recursive one-sided hypothesis test (ROHT), was carried out using MATLAB. The results obtained indicate that our proposed technique is able to adapt the threshold value to the noise variance. We also employ a constant false alarm rate (CFAR) threshold to act as a defence mechanism against interference with the primary user at low signal-to-noise ratio (SNR). Our evaluations indicate that our adaptive threshold technique adapts the threshold value to both the noise variance and the perceived SNR.
APA, Harvard, Vancouver, ISO, and other styles
46

Tan, Christabel Kun Looi. "The development of a variable mixing-ratio alternate-flow injection micomixer with elastomer valves." Thesis, University of Hertfordshire, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Sousa, Rita Cristina Pinto de. "Parameter estimation in the presence of auxiliary information." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11295.

Full text
Abstract:
Dissertation submitted for the degree of Doctor in Statistics and Risk Management, speciality in Statistics
In survey research, there are many situations where the primary variable of interest is sensitive. The sensitivity of some queries can give rise to a refusal to answer or to false answers given intentionally. Surveys can be conducted in a variety of settings, in part dictated by the mode of data collection, and these settings can differ in how much privacy they offer the respondent. Estimates obtained from a direct survey on sensitive questions would be subject to high bias. A variety of techniques have been used to improve reporting by increasing the privacy of the respondents. The Randomized Response Technique (RRT), introduced by Warner in 1965, develops a random relation between the individual's response and the question. This technique provides confidentiality to respondents and still allows the interviewers to estimate the characteristic of interest at an aggregate level. In this thesis we propose some estimators to improve the mean estimation of a sensitive variable based on an RRT by making use of available non-sensitive auxiliary information. In the first part of this thesis we present the ratio and regression estimators, as well as some generalizations, in order to study the gain in estimation over the ordinary RRT mean estimator. In chapters 4 and 5 we study the performance of some exponential-type estimators, also based on an RRT. The final part of the thesis illustrates an approach to mean estimation in stratified sampling; this study confirms some previous results for a different sample design. An extensive simulation study and an application to a real dataset are carried out for all the study estimators to evaluate their performance. In the last chapter we present a general discussion of the main results and conclusions, as well as an application to a real dataset which compares the performance of the study estimators.
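To make the idea of a ratio estimator with auxiliary information concrete, the sketch below shows the classical (non-randomised) ratio estimator of a mean, in which the known population mean of the auxiliary variable rescales the sample mean of the study variable. The RRT-specific scrambling studied in the thesis is not reproduced, and all names and data are assumptions.

```python
# Hedged sketch: classical ratio estimator of a population mean using a
# non-sensitive auxiliary variable with known population mean. Illustrative only;
# the randomized response scrambling mechanism from the thesis is not modelled.
import numpy as np

def ratio_estimator(y_sample, x_sample, x_pop_mean):
    """Estimate the population mean of y as ybar * (Xbar / xbar)."""
    y = np.asarray(y_sample, dtype=float)
    x = np.asarray(x_sample, dtype=float)
    return y.mean() * (x_pop_mean / x.mean())

# Toy usage: y correlated with auxiliary x, whose population mean is assumed known
rng = np.random.default_rng(10)
x_pop = rng.normal(50, 10, 100000)
y_pop = 2.0 * x_pop + rng.normal(0, 5, x_pop.size)
idx = rng.choice(x_pop.size, size=200, replace=False)
print(ratio_estimator(y_pop[idx], x_pop[idx], x_pop.mean()), y_pop.mean())
```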
APA, Harvard, Vancouver, ISO, and other styles
48

Reeder, Rebecca A. "Change in Composition versus Variable Force as Influences on the Downward Trend in the Sex Ratio at Birth in the U.S., 1971-2006." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1292363042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Fellows, Lesley. "Fatigue crack growth under variable stress ratios and complex load history." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Bell, Martin. "The low frequency array and the transient and variable radio sky." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/208253/.

Full text
Abstract:
This thesis addresses the topic of exploring and characterising the transient and variable radio sky, using both existing radio telescopes and the next generation of radio facilities such as the Low Frequency Array (LOFAR). Studies of well known variable radio sources are presented in conjunction with blind searches of parameter space for unknown sources. Firstly, a three year campaign to monitor the low luminosity Active Galactic Nucleus NGC 7213 in the radio and X-ray bands is presented. Cross-correlation functions are used to calculate a global time lag between inflow (X-ray) and outflow (radio) events. Through this work the previously established scaling relationship between core radio and X-ray luminosities and black hole mass, known as the 'fundamental plane of black hole activity', is also explored with respect to NGC 7213. Secondly, the technical and algorithmic procedures to search for transient and variable radio sources within radio images are presented. These algorithms are intended for deployment on the LOFAR telescope; however, they are heavily tested in a blind survey using data obtained from the VLA archive. Through this work an upper limit on the rate of transient events on the sky at GHz frequencies is placed and compared with those found from other dedicated transient surveys. Finally, the design, operation and data reduction procedure for the Low Frequency Array, which will revolutionise our understanding of low frequency time domain astrophysics, is explored. LOFAR commissioning observations are reduced and searched for transient and variable radio sources. The current quality of the calibration limits accurate variability studies; however, two unique LOFAR transient candidates that are not present in known radio source catalogues are explored (including multi-wavelength follow-up observations). In the conclusion to this thesis the parameter space that future radio telescopes may probe, including the potential rates of such events, is presented. At the nano-Jansky level, up to 10^7 transients deg^-2 yr^-1 are predicted, which will form an unprecedented torrent of data, follow-up and unique physics to classify.
APA, Harvard, Vancouver, ISO, and other styles