To see the other types of publications on this topic, follow the link: Bianchi type I models.

Dissertations / Theses on the topic 'Bianchi type I models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Bianchi type I models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Friedrichsen, James Edward. "Quantization of Bianchi type cosmological models /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Eriksson, Daniel. "Perturbative Methods in General Relativity." Doctoral thesis, Umeå : Department of Physics, Umeå University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Holgersson, David. "Lanczos potentialer i kosmologiska rumtider." Thesis, Linköping University, Department of Mathematics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2582.

Full text
Abstract:

We derive the equation linking the Weyl tensor with its Lanczos potential, called the Weyl-Lanczos equation, in the 1+3 covariant formalism for perfect fluid Bianchi type I spacetimes and find an explicit expression for a Lanczos potential of the Weyl tensor in these spacetimes. To achieve this, we first need to derive the covariant decomposition of the Lanczos potential in this formalism. We also study an example by Novello and Velloso and derive their Lanczos potential in shear-free, irrotational perfect fluid spacetimes from a particular ansatz in the 1+3 covariant formalism. The Lanczos potential is in some ways analogous to the vector potential in electromagnetic theory. Therefore, we also derive the electromagnetic potential equation in the 1+3 covariant formalism for a general spacetime. We give a short description of the necessary tools for these calculations and the cosmological formalism we are using.

APA, Harvard, Vancouver, ISO, and other styles
4

Cheng, A. D. Y. "Supersymmetric quantum Bianchi models." Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597576.

Full text
Abstract:
This thesis is about the quantum cosmology of supergravity theories, in particular N = 1 supergravity. In chapter 2, the Bianchi-IX model in N = 1 supergravity with a cosmological constant is investigated. This example is also restricted to the case of the k = +1 Friedmann universe. In chapter 3, the most general solution of the Lorentz constraints is found. Using this solution, N = 1 supergravity in the diagonal Bianchi-IX model is studied. The Hamilton-Jacobi equation is derived and completely solved. The Hartle-Hawking and wormhole actions are both found among the solutions. In chapter 4, the relation between the Chern-Simons functional and the no-boundary proposal is considered. The exponential of the Chern-Simons functional is the first known exact solution of quantum general relativity with a cosmological constant, being defined in the Ashtekar variables. However, it has turned out to be possible to show that the Chern-Simons functional and the no-boundary proposal are not related to each other in general relativity and N = 1 supergravity by considering perturbations around the k = +1 Friedmann universe. In chapter 5, the canonical formulation of N = 1 supergravity coupled to supermatter is presented. The supersymmetry and gauge constraints are derived. This model is then studied in the k = +1 Friedmann universe. It is found that there are solutions in the case of a scalar multiplet. However, no non-trivial solution exists for a Yang-Mills multiplet, and it is explained why there is no physical state. Chapter 6 describes a brief investigation of the quantum cosmology of N = 2 and N = 4 supergravity theories. The thesis ends with concluding remarks and an indication of directions of future development.
APA, Harvard, Vancouver, ISO, and other styles
5

Lindblad, Petersen Oliver. "Bianchi type I solutions to Einstein's vacuum equations." Thesis, KTH, Matematik (Inst.), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-129194.

Full text
Abstract:
A natural question in general relativity is whether there exist singularities, like the Big Bang and black holes, in the universe. Albert Einstein did not in the beginning believe that singularities in general relativity are generic ([2], [3]). He claimed that the existence of singularities is due to symmetry assumptions. The symmetry assumptions are usually spatial isotropy and spatial homogeneity. Spatial isotropy means intuitively that, for a fixed time, the universe looks the same at all points and in all spatial directions. In the present paper, we will show the following: If we solve Einstein's vacuum equations with a certain type of initial data, called the Bianchi type I, the resulting space-time will either be the Minkowski space or an anisotropic space-time equipped with a so-called Kasner metric. We show that, in the anisotropic case, the space-time will contain a certain singularity: the Big Bang. We distinguish between two different classes of Kasner metrics: the flat Kasner metric and the non-flat Kasner metric. In the case of a flat Kasner metric, we show that it is possible to isometrically embed the entire space-time into Minkowski space. In the case of the non-flat Kasner metric, the space-time is not extendible and the gravity goes to infinity approaching the time of the Big Bang. In addition we show, using any Kasner metric, that the universe expands proportionally to the time passed since the Big Bang. This happens even though some directions will shrink or not change. The conclusion is: We have found two natural classes of anisotropic space-times, that include a Big Bang and expand. These results support the idea that singularities are generic, i.e. are not due to the assumptions of symmetry of the universe.
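As standard background (added for this listing, not part of the thesis abstract), the Kasner metrics referred to here are the vacuum Bianchi type I solutions

$$ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2, \qquad p_1 + p_2 + p_3 = 1, \qquad p_1^2 + p_2^2 + p_3^2 = 1.$$

The flat Kasner metric corresponds to the exponents $(1,0,0)$ and is locally isometric to a portion of Minkowski space, while every other choice of exponents gives a curvature singularity as $t \to 0$, the Big Bang discussed above.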
APA, Harvard, Vancouver, ISO, and other styles
6

Giani, Leonardo. "Bianchi type II cosmology in Hořava–Lifshitz gravity." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10471/.

Full text
Abstract:
In this work a Bianchi type II space-time within the framework of projectable Hořava–Lifshitz gravity was investigated; the resulting field equations in the infrared limit λ = 1 were analyzed qualitatively. We have found the analytical solutions for a toy model in which only the higher curvature terms cubic in the spatial Ricci tensor are considered. The resulting behavior is still described by a transition between two Kasner epochs, but we have found a transformation law of the Kasner exponents different from that of Einstein's general relativity.
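For comparison (standard background added here, not taken from the thesis), the general-relativistic transformation law between two Kasner epochs induced by Bianchi type II curvature is usually written, with $p_1 < 0$ the exponent of the contracting direction, as

$$p_1' = \frac{-p_1}{1 + 2p_1}, \qquad p_2' = \frac{p_2 + 2p_1}{1 + 2p_1}, \qquad p_3' = \frac{p_3 + 2p_1}{1 + 2p_1};$$

the thesis reports a modified version of this map for the Hořava–Lifshitz field equations.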
APA, Harvard, Vancouver, ISO, and other styles
7

Hervik, Sigbjørn. "Mathematical cosmology : Bianchi models, asymptotics and extra dimensions." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.616093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sverin, Tomas. "Density Growth in Anisotropic Cosmologies of Bianchi Type I." Thesis, Umeå universitet, Institutionen för fysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-58102.

Full text
Abstract:
This work generalises earlier works on the growth of the density perturbations in Bianchi type I cosmologies filled with dust to include also the effects of pressure and a positive cosmological constant. For the analysis the 1+3 covariant split of space-time formalism is used. As the perturbative variables we use scalar quantities that are zero on the background, and hence are gauge-invariant. These variables form a coupled closed system of first-order evolution equations, that is analysed numerically and analytically. For short wavelengths an oscillatory behavior is obtained, whereas for long wavelengths the energy density perturbations grow unboundedly.
APA, Harvard, Vancouver, ISO, and other styles
9

Lindblad, Petersen Oliver. "The wave equation and redshift in Bianchi type I spacetimes." Thesis, KTH, Matematik (Avd.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-151317.

Full text
Abstract:
The thesis consists of two independent parts. In the first part, we show how the solution to the scalar wave equation on 3-torus-Bianchi type I spacetimes can be written as a Fourier decomposition. We present results on the behaviour of these Fourier modes and apply them to the case of 3-torus-Kasner spacetimes. In the second part, we first consider the solution to the scalar wave equation, with special initial data, as a model for light in Bianchi type I spacetimes. We show that the obtained redshift coincides with the cosmological redshift. We also consider the Cauchy problem for Maxwell's vacuum equations, with special initial data, in order to model light in Bianchi type I spacetimes. We calculate the redshift of the solution and show that, also in this case, the obtained redshift coincides with the cosmological redshift.
APA, Harvard, Vancouver, ISO, and other styles
10

Yearsley, J. M. "Anisotropic cosmologies and the role of matter." Thesis, University of Sussex, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bianchi, Lorenzo [Verfasser], Valentina [Akademischer Betreuer] Forini, Jan [Akademischer Betreuer] Plefka, and Arkady A. [Akademischer Betreuer] Tseytlin. "Perturbation theory for string sigma models / Lorenzo Bianchi. Gutachter: Valentina Forini ; Jan Plefka ; Arkady A. Tseytlin." Berlin : Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://d-nb.info/1084692643/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Sandin, Patrik. "Cosmological Models and Singularities in General Relativity." Doctoral thesis, Karlstads universitet, Avdelningen för fysik och elektroteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-8206.

Full text
Abstract:
This is a thesis on general relativity. It analyzes dynamical properties of Einstein's field equations in cosmology and in the vicinity of spacetime singularities in a number of different situations. Different techniques are used depending on the particular problem under study; dynamical systems methods are applied to cosmological models with spatial homogeneity; Hamiltonian methods are used in connection with dynamical systems to find global monotone quantities determining the asymptotic states; Fuchsian methods are used to quantify the structure of singularities in spacetimes without symmetries. All these separate methods of analysis provide insights about different facets of the structure of the equations, while at the same time they show the relationships between those facets when the different methods are used to analyze overlapping areas. The thesis consists of two parts. Part I reviews the areas of mathematics and cosmology necessary to understand the material in part II, which consists of five papers. The first two of those papers use dynamical systems methods to analyze the simplest possible homogeneous model with two tilted perfect fluids with a linear equation of state. The third paper investigates the past asymptotic dynamics of barotropic multi-fluid models that approach a 'silent and local' space-like singularity to the past. The fourth paper uses Hamiltonian methods to derive new monotone functions for the tilted Bianchi type II model that can be used to completely characterize the future asymptotic states globally. The last paper proves that there exists a full set of solutions to Einstein's field equations coupled to an ultra-stiff perfect fluid that has an initial singularity that is very much like the singularity in Friedmann models in a precisely defined way.

Status of the paper "Perfect Fluids and Generic Spacelike Singularities" has changed from manuscript to published since the thesis defense.

APA, Harvard, Vancouver, ISO, and other styles
13

Hough, Alison Janette. "Mouse models of type 2 diabetes." Thesis, Oxford Brookes University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Moss, Sean. "The dialectica models of type theory." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/280672.

Full text
Abstract:
This thesis studies some constructions for building new models of Martin-Löf type theory out of old. We refer to the main techniques as gluing and idempotent splitting. For each we give general conditions under which type constructors exist in the resulting model. These techniques are used to construct some examples of Dialectica models of type theory. The name is chosen by analogy with de Paiva's Dialectica categories, which semantically embody Gödel's Dialectica functional interpretation and its variants. This continues a programme initiated by von Glehn with the construction of the polynomial model of type theory. We complete the analogy between this model and Gödel's original Dialectica by using our techniques to construct a two-level version of this model, equipping the original objects with an extra layer of predicates. In order to do this we have to carefully build up the theory of finite sum types in a display map category. We construct two other notable models. The first is a model analogous to the Diller-Nahm variant, which requires a detailed study of biproducts in categories of algebras. To make clear the generalization from the categories studied by de Paiva, we illustrate the construction of the Diller-Nahm category in terms of gluing an indexed system of types together with a system of predicates. Following this we develop the general techniques needed for the type-theoretic case. The second notable model is analogous to the Dialectica category associated to the error monad as studied by Biering. This model has only weak dependent products. In order to get a model with full dependent products we use the idempotent splitting construction, which generalizes the Karoubi envelope of a category. Making sense of the Karoubi envelope in the type-theoretic case requires us to face up to issues of coherence in our models. We choose the route of making sure all of the constructions we use preserve strict coherence, rather than applying a general coherence theorem to produce a strict model afterwards. Our chosen method preserves more detailed information in the final model.
APA, Harvard, Vancouver, ISO, and other styles
15

Boulier, Simon Pierre. "Extending type theory with syntactic models." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0110/document.

Full text
Abstract:
This thesis is about the metatheory of intuitionistic type theory. The systems considered are variants of Martin-Löf type theory or of the Calculus of Constructions, and we are interested in the consistency of those systems and in the independence of axioms with respect to those systems. The common theme of this thesis is the construction of syntactic models, which are models reusing type theory to interpret type theory. In a first part, we introduce type theory by a minimal system and several possible extensions. In a second part, we introduce the syntactic models given by program translation and give several examples. In a third part, we present Template-Coq, a plugin for metaprogramming in Coq. We demonstrate how to use it to implement directly some syntactic models. Last, we consider type theories with two equalities: one strict and one univalent. We propose a re-reading of the works of Coquand et al. and of Orton and Pitts on the cubical model by introducing the notion of degenerate fibrancy.
APA, Harvard, Vancouver, ISO, and other styles
16

von, Glehn Tamara. "Polynomials and models of type theory." Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/254394.

Full text
Abstract:
This thesis studies the structure of categories of polynomials, the diagrams that represent polynomial functors. Specifically, we construct new models of intensional dependent type theory based on these categories. Firstly, we formalize the conceptual viewpoint that polynomials are built out of sums and products. Polynomial functors make sense in a category when there exist pseudomonads freely adding indexed sums and products to fibrations over the category, and a category of polynomials is obtained by adding sums to the opposite of the codomain fibration. A fibration with sums and products is essentially the structure defining a categorical model of dependent type theory. For such a model the base category of the fibration should also be identified with the fibre over the terminal object. Since adding sums does not preserve this property, we are led to consider a general method for building new models of type theory from old ones, by first performing a fibrewise construction and then extending the base. Applying this method to the polynomial construction, we show that given a fibration with sufficient structure modelling type theory, there is a new model in a category of polynomials. The key result is establishing that although the base category is not locally cartesian closed, this model has dependent product types. Finally, we investigate the properties of identity types in this model, and consider the link with functional interpretations in logic.
APA, Harvard, Vancouver, ISO, and other styles
17

Nilsson, Oscar. "On Stochastic Volatility Models as an Alternative to GARCH Type Models." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297173.

Full text
Abstract:
For the purpose of modelling and prediction of volatility, the family of Stochastic Volatility (SV) models is an alternative to the extensively used ARCH type models. SV models differ in their assumption that volatility itself follows a latent stochastic process. This reformulation of the volatility process, however, makes model estimation distinctly more complicated for the SV type models, which in this paper is conducted through Markov Chain Monte Carlo methods. The aim of this paper is to assess the standard SV model and the SV model assuming t-distributed errors and compare the results with their corresponding GARCH(1,1) counterparts. The data examined cover daily closing prices of the Swedish stock index OMXS30 for the period 2010-01-05 to 2016-03-02. The evaluation shows that both SV models outperform the two GARCH(1,1) models, where the SV model with assumed t-distributed errors gives the smallest forecast errors.
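As context for the comparison (standard definitions, not quoted from the thesis), the two families of models for a demeaned return series $y_t$ can be written as

$$\text{GARCH}(1,1):\; y_t = \sigma_t \varepsilon_t,\quad \sigma_t^2 = \omega + \alpha\, y_{t-1}^2 + \beta\, \sigma_{t-1}^2, \qquad \text{SV}:\; y_t = e^{h_t/2}\varepsilon_t,\quad h_t = \mu + \phi\,(h_{t-1}-\mu) + \sigma_\eta \eta_t,$$

with independent innovations $\varepsilon_t$ and $\eta_t$ (and $\varepsilon_t$ t-distributed in the heavy-tailed variants). In the GARCH case tomorrow's variance is a deterministic function of past observations, whereas the SV log-variance $h_t$ carries its own noise, which is why the thesis resorts to Markov Chain Monte Carlo for estimation.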
APA, Harvard, Vancouver, ISO, and other styles
18

Stuk, Stephen Paul. "Multivariable systems theory for Lanchester type models." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/24171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Jackson, Henry Lee. "Synthetic models of Fe-type nitrile hydratase /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/8532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Eichenauer, Florian. "Analysis for dissipative Maxwell-Bloch type models." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17661.

Full text
Abstract:
This thesis deals with the mathematical modeling of semi-classical matter-light interaction. In the semi-classical picture, matter is described by a density matrix "rho", a quantum mechanical concept. Light, on the other hand, is described by a classical electromagnetic field "(E,H)". We give a short overview of the physical background, introduce the usual coupling mechanism and derive the classical Maxwell-Bloch equations, which have been intensively studied in the literature. Moreover, we introduce a mathematical framework in which we state a systematic approach to include dissipative effects in the Liouville-von-Neumann equation. The striking advantage of our approach is the intrinsic existence of a Liapunov function for solutions to the resulting evolution equation. Next, we couple the resulting equation to the Maxwell equations and arrive at a new self-consistent dissipative Maxwell-Bloch type model for semi-classical matter-light interaction. The main focus of this work lies on the intensive mathematical study of the dissipative Maxwell-Bloch type model. Since our model lacks Lipschitz continuity, we create a regularized version of the model that is Lipschitz continuous. We mostly restrict our analysis to the Lipschitz continuous regularization. For regularized versions of the dissipative Maxwell-Bloch type model, we prove existence of solutions to the corresponding Cauchy problem. The core of the proof is based on results from compensated compactness due to P. Gérard and a Rellich type lemma. In parts, this proof closely follows the lines of an earlier work due to J.-L. Joly, G. Métivier and J. Rauch.
APA, Harvard, Vancouver, ISO, and other styles
21

Capriotti, Paolo. "Models of type theory with strict equality." Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/39382/.

Full text
Abstract:
This thesis introduces the idea of two-level type theory, an extension of Martin-Löf type theory that adds a notion of strict equality as an internal primitive. A type theory with a strict equality alongside the more conventional form of equality, the latter being of fundamental importance for the recent innovation of homotopy type theory (HoTT), was first proposed by Voevodsky, and is usually referred to as HTS. Here, we generalise and expand this idea, by developing a semantic framework that gives a systematic account of type formers for two-level systems, and proving a conservativity result relating back to a conventional type theory like HoTT. Finally, we show how a two-level theory can be used to provide partial solutions to open problems in HoTT. In particular, we use it to construct semi-simplicial types, and lay out the foundations of an internal theory of (∞, 1)-categories.
APA, Harvard, Vancouver, ISO, and other styles
22

Sushko, Stepan. "Pricing of European type options for Levy and conditionally Levy type models." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2205.

Full text
Abstract:

In this thesis we consider two models for the computation of option prices. The first one is a generalization of the Black-Scholes model. In this generalization the volatility Sigma is not a constant. In the simplest case it changes at once at a certain time moment Tau; in this sense it is a conditionally Lévy model. For this generalized Black-Scholes model, formulas for vanilla Call/Put option prices have been obtained theoretically. Under the assumption of a good prediction of the parameter Sigma, the obtained numerical results fit the real data better than the standard Black-Scholes model.

The second model is an exponential Lévy model, where the Lévy process is the CGMY process. We use the finite-difference scheme for computations of option prices. As examples we consider vanilla Call/Put, Double-Barrier and Up-and-out options. After estimating the parameters of the CGMY process by the method of moments, we obtain option prices and calculate the fitting error. This fitting error for the CGMY model is smaller than for the Black-Scholes model.

APA, Harvard, Vancouver, ISO, and other styles
23

Gyldberg, Ellinor, and Henrik Bark. "Type 1 error rate and significance levels when using GARCH-type models." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375770.

Full text
Abstract:
The purpose of this thesis is to test whether the probability of falsely rejecting a true null hypothesis of a model intercept being equal to zero is consistent with the chosen significance level when modelling the variance of the error term using GARCH(1,1), TGARCH(1,1) or IGARCH(1,1) models. We test this by estimating "Jensen's alpha" to evaluate alpha trading, using a Monte Carlo simulation based on historical data from the Standard & Poor's 500 Index and stocks in the Dow Jones Industrial Average Index. We evaluate simulated daily data over periods of 3 months, 6 months, and 1 year. Our results indicate that the GARCH and IGARCH models consistently reject a true null hypothesis less often than the selected 1%, 5%, or 10%, whereas the TGARCH consistently rejects a true null more often than the chosen significance level. Thus, there is a risk of incorrect inferences when using these GARCH-type models.
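For context (a standard formulation, not quoted from the thesis), Jensen's alpha is the intercept $\alpha$ in the excess-return regression, here paired with a GARCH-type error variance:

$$r_t - r_{f,t} = \alpha + \beta\,(r_{m,t} - r_{f,t}) + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \omega + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2,$$

and the Type I error rate studied above is the frequency with which $H_0\colon \alpha = 0$ is rejected when it is in fact true.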
APA, Harvard, Vancouver, ISO, and other styles
24

Liu, Qingfeng. "Econometric methods for market risk analysis : GARCH-type models and diffusion models." Kyoto University, 2007. http://hdl.handle.net/2433/136053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Lemberger, Benjamin Kurt. "The one place we're trying to get to is just where we can't get: algebraic speciality and gravito-electromagnetism in Bianchi type IX." Oberlin College Honors Theses / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1400163799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Liu, Xi. "Some empirical studies on Solow-type growth models." Thesis, Mälardalens högskola, Akademin för hållbar samhälls- och teknikutveckling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Shepardson, Dylan. "Algorithms for inverting Hodgkin-Huxley type neuron models." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31686.

Full text
Abstract:
Thesis (Ph.D)--Algorithms, Combinatorics, and Optimization, Georgia Institute of Technology, 2010.
Committee Chair: Tovey, Craig; Committee Member: Butera, Rob; Committee Member: Nemirovski, Arkadi; Committee Member: Prinz, Astrid; Committee Member: Sokol, Joel. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
28

Guan, Bo, and 关博. "On some new threshold-type time series models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B5053385X.

Full text
Abstract:
The subject of time series analysis has drawn significant attention in recent years, since it is of tremendous interest to practitioners, as well as to academic researchers, to make statistical inferences and forecasts of future values of the variables of interest. To do forecasting, parametric models are often required to describe the patterns of the observed data set. In order to describe data adequately, such statistical models should be established based on fundamental principles. Two threshold-type time series models, the buffered threshold autoregressive (BAR) model and the threshold moving-average (TMA) model, are studied in this thesis. The most important contribution of this thesis is the extension of classical threshold models, via regime-switching mechanisms that exhibit hysteresis, to a new model called the buffered threshold model. For this type of new model, there is a buffer zone for the regime-switching mechanism. The self-exciting buffered threshold autoregressive model has been thoroughly studied: a sufficient condition is given for the geometric ergodicity of the two-regime BAR process; conditional least squares estimation is considered for the parameter estimation of the BAR model, and asymptotic properties, including strong consistency and asymptotic distributions of the least squares estimators, are also derived. Monte Carlo experiments are conducted to give further support to the methodology developed for the new model. Two empirical examples are used to demonstrate the importance of the BAR model. Potential extensions of the basic buffer processes are discussed as well. Such extensions are expected to follow the development of the classical threshold model and are motivated by their relationships with phenomena in the physical sciences. The proposed buffer process is more general than the classical threshold model, and it should be able to capture more nonlinear features exhibited by this nonlinear world than its predecessor. Although the theoretical understanding of the model is still in its infancy, it is believed that the buffer process will provide both researchers and practitioners with a useful tool to understand the nonlinear world. Moreover, some statistical properties of the threshold moving-average models are studied. Computer simulations have been extensively used, and some mathematical interpretation is attempted in the light of some existing research works. The model-building procedure for the TMA models is also reviewed. The effectiveness of some classical information criteria in selecting the correct TMA model is studied. A goodness-of-fit test is derived which would be useful in diagnostic checking of the fitted TMA models.
Statistics and Actuarial Science
Doctoral
Doctor of Philosophy
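As a point of reference (standard material added here, not part of the abstract above), the classical two-regime self-exciting threshold autoregressive model that the buffered (BAR) model generalises is

$$y_t = \begin{cases} \phi_0 + \phi_1 y_{t-1} + \varepsilon_t, & y_{t-d} \le r,\\ \psi_0 + \psi_1 y_{t-1} + \varepsilon_t, & y_{t-d} > r, \end{cases}$$

whereas, roughly speaking, the buffered model replaces the single threshold $r$ by a buffer zone $(r_L, r_U]$: the regime switches only when $y_{t-d}$ falls below $r_L$ or rises above $r_U$, and is otherwise carried over from the previous time point, which produces the hysteresis described in the abstract.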
APA, Harvard, Vancouver, ISO, and other styles
29

Osborne, Robert J. "Caenorhabditis elegans models of myotonic dystrophy type 1." Thesis, University of Nottingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Rayner, David Andrew James. "Type I string phenomenology and extra dimensional models." Thesis, University of Southampton, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Cha, Jin Seob. "Obtaining information from stochastic Lanchester-type combat models /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487673114113213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Mitchell, Hannah Jane. "Latent phase-type models for Italy's ageing population." Thesis, Queen's University Belfast, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709549.

Full text
Abstract:
Quality of care is deemed a concept of immense importance, but also of great difficulty to define and analyse. This study proposes the development of a novel statistical approach to healthcare modelling which overcomes the need to define quality of care by treating it as a hidden layer in a special type of Markov model. The study setting for this research is the Italian healthcare system, in particular admissions into geriatric wards of the Lombardy region of Italy during 2009. The Coxian phase-type distribution was applied to this dataset and shown to give the best representation of the flow of patients. Covariates were then incorporated into this distribution and applied to the data. A simulation study of the Coxian phase-type distribution with covariates was also undertaken. The main purpose of this research was to develop the theory of the Coxian phase-type distribution by incorporating a hidden layer within it which can represent quality of care. In forming this model, novel methodology was presented. Discrete-time and continuous-time versions of the model were both applied to the data and the results analysed. A further extension of the continuous-time hidden Markov model with the Coxian phase-type distribution was developed whereby covariates were incorporated into the hidden element. The results of this model, with application to the Lombardy dataset, were analysed, followed by a simulation study of all the newly developed models presented. In addition to the hidden Markov model with Coxian phase-type distribution, the model was extended to introduce a duration component within the hidden layer. This extension formed the hidden semi-Markov model, which relaxes the strict Markov assumption. This model was also applied to the Lombardy dataset.
APA, Harvard, Vancouver, ISO, and other styles
33

Cai, Xinhua. "Forecast the USA Stock Indices with GARCH-type Models." Thesis, Uppsala universitet, Statistiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-175432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Che, Xiaonan. "Markov type models for large-valued interbank payment systems." Thesis, London School of Economics and Political Science (University of London), 2011. http://etheses.lse.ac.uk/224/.

Full text
Abstract:
Due to the reform of payment systems from netting settlement systems to Real Time Gross Settlement (RTGS) systems around the world in recent years, there has been a dramatic increase in interest in modeling large-valued interbank payment systems. Recently some queueing facilities have been introduced in response to liquidity management within RTGS systems. Since stochastic process models have been widely applied to social networks, some aspects of which have statistical properties similar to those of the payment system, a Markov type model for an RTGS payment system with queueing and collateral borrowing facilities was developed, based on the existing empirical research. We analysed how the performance of the payment system is affected by parameters such as the probabilities of payment delay, the initial cash position of participating banks and the probabilities of cross-bank payments. Two models were proposed: one is the simplest model, where payments are assumed to be equally distributed among participating banks; the other is a so-called "cluster" model, in which there is a concentration of payment flow between a few banks, in line with the evidence from empirical studies. We have found that the performance of the system depends on these parameters. A modest amount of total initial liquidity required by banks would achieve the desired performance, minimising the number of unsettled payments by the end of a business day with a negligible average lifetime of the debts. Because of the change of large-valued interbank payment systems, the concern has shifted from credit risk to liquidity risk, and payment systems around the world have started considering, or have already implemented, different liquidity saving mechanisms to reduce the high demand for liquidity while maintaining a low risk of default. We proposed a specific queueing facility for the "cluster" model, modified with the features of the UK RTGS payment system, CHAPS, in mind. Some of the payments would be submitted into an external queue by certain rules and settled according to an algorithm of bilateral or multilateral offsetting, while participating banks' posted liquidity is reserved for "important" payments only. Experiments using simulated data showed that the liquidity saving mechanism was not equally beneficial to every bank; the banks that dominated most of the payment flow even suffered from higher levels of debt at the end of a business day compared with a pure RTGS system without any queueing facility. The stability of the structure of the central queue was verified. There was evidence that banks in the UK payment system would set up limits for other members to prevent unexpected credit exposure, and with these limits, banks also achieved a moderate liquidity saving in CHAPS. Both the central bank and participating banks are interested in the probability that the limits are exceeded. The problem can be reduced to the calculation of the boundary crossing probability of a Brownian motion with stochastic boundaries. Boundary crossing problems are very popular in many fields of statistics. With powerful tools, such as martingales and the infinitesimal generator of Brownian motion, we presented an alternative method and derived a set of theorems on boundary crossing probabilities for a Brownian motion with different kinds of stochastic boundaries, especially compound Poisson process boundaries. Both numerical results and simulation experiments are studied.
A variation of the method is discussed when applying it to other stochastic boundaries, for instance the Gamma process, the Inverse Gaussian process and the Telegraph process. Finally, we provide a brief survey of approximations of Lévy processes. The boundary crossing probability theorems derived earlier can be extended to a fairly general situation with Lévy process boundaries by using an appropriate approximation.
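For orientation (a standard result, not taken from the thesis), the simplest instance of such boundary crossing probabilities is the constant-boundary case, obtained from the reflection principle:

$$\mathbb{P}\Big(\sup_{0 \le s \le t} W_s \ge a\Big) = 2\,\mathbb{P}(W_t \ge a) = 2\Big(1 - \Phi\big(a/\sqrt{t}\big)\Big), \qquad a > 0,$$

where $W$ is a standard Brownian motion and $\Phi$ the standard normal distribution function; the thesis extends such calculations to stochastic boundaries such as compound Poisson processes.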
APA, Harvard, Vancouver, ISO, and other styles
35

Almeida, Cesário Manuel de Deus Lavaredas de. "The nature of early-type galaxies in hierarchical models." Thesis, Durham University, 2008. http://etheses.dur.ac.uk/2181/.

Full text
Abstract:
In this Thesis we describe the properties of early-type galaxies in the context of hierarchical galaxy formation. We use two variants of the GALFORM model originally introduced by Cole et al.: the Baugh et al. and the Bower et al. models. We test the prescription defined by Cole et al. to calculate the sizes of bulges, by comparing GALFORM predictions with local observational data. We find that the model reproduces successfully several tight correlations observed for early-type galaxies: the relation between velocity dispersion and luminosity, the velocity dispersion-age relation and the Fundamental Plane. However, there is an important disagreement between the models and observations: in the model, the radii of the luminous spheroids are smaller than expected. We analyse how the physical ingredients involved in the calculation of the sizes influence these results. We explore the physics of massive galaxy formation in the models, by predicting the abundance, properties and clustering of luminous red galaxies (LRGs). Without adjusting any parameters in the two models, we find a good agreement between the GALFORM model and observations. We find that model LRGs are mainly elliptical galaxies, with stellar masses around 2 × 10^11 h^-1 M_⊙ and velocity dispersions of 250 km s^-1. The models predict the correlation function of LRGs to be a power law down to small scales, which is in excellent agreement with the observational estimates. Finally, we predict the abundance, colour and clustering of submillimeter galaxies (SMGs), which are thought to be the progenitors of local massive early-type galaxies. At the wavelengths where these galaxies are detected, 850 μm, the predictions using the standard GALFORM model are inaccurate, hence the time-consuming GRASIL code is used in addition to GALFORM. We develop a new method, based on artificial neural networks, to rapidly generate galaxy spectra from a small set of properties.
APA, Harvard, Vancouver, ISO, and other styles
36

Browning, Jonathan Darren. "Synthesis of discrete models for aluminophosphate-type molecular sieves." Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241907.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mayes, Van Eric. "Phenomenology of heterotic and type II orientifold string models." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Orton, Richard Ian. "Cubical models of homotopy type theory : an internal approach." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289441.

Full text
Abstract:
This thesis presents an account of the cubical sets model of homotopy type theory using an internal type theory for elementary topoi. Homotopy type theory is a variant of Martin-Lof type theory where we think of types as spaces, with terms as points in the space and elements of the identity type as paths. We actualise this intuition by extending type theory with Voevodsky's univalence axiom which identifies equalities between types with homotopy equivalences between spaces. Voevodsky showed the univalence axiom to be consistent by giving a model of homotopy type theory in the category of Kan simplicial sets in a paper with Kapulkin and Lumsdaine. However, this construction makes fundamental use of classical logic in order to show certain results. Therefore this model cannot be used to explain the computational content of the univalence axiom, such as how to compute terms involving univalence. This problem was resolved by Cohen, Coquand, Huber and Mortberg, who presented a new model of type theory in Kan cubical sets which validated the univalence axiom using a constructive metatheory. This meant that the model provided an understanding of the computational content of univalence. In fact, the authors present a new type theory, cubical type theory, where univalence is provable using a new "glueing" type former. This type former comes with appropriate definitional equalities which explain how the univalence axiom should compute. In particular, Huber proved that any term of natural number type constructed in this new type theory must reduce to a numeral. This thesis explores models of type theory based on the cubical sets model of Cohen et al. It gives an account of this model using the internal language of toposes, where we present a series of axioms which are sufficient to construct a model of cubical type theory, and hence a model of homotopy type theory. This approach therefore generalises the original model and gives a new and useful method for analysing models of type theory. We also discuss an alternative derivation of the univalence axiom and show how this leads to a potentially simpler proof of univalence in any model satisfying the axioms mentioned above, such as cubical sets. Finally, we discuss some shortcomings of the internal language approach with respect to constructing univalent universes. We overcome these difficulties by extending the internal language with an appropriate modality in order to manipulate global elements of an object.
APA, Harvard, Vancouver, ISO, and other styles
39

Wei, Jun-Jie, Xue-Feng Wu, and Fulvio Melia. "TESTING COSMOLOGICAL MODELS WITH TYPE Ic SUPER LUMINOUS SUPERNOVAE." IOP PUBLISHING LTD, 2015. http://hdl.handle.net/10150/615104.

Full text
Abstract:
The use of type Ic Super Luminous Supernovae (SLSN Ic) to examine the cosmological expansion introduces a new standard ruler with which to test theoretical models. The sample suitable for this kind of work now includes 11 SLSN Ic's, which have thus far been used solely in tests involving $\Lambda$CDM. In this paper, we broaden the base of support for this new, important cosmic probe by using these observations to carry out a one-on-one comparison between the $R_{\rm h}=ct$ and $\Lambda$CDM cosmologies. We individually optimize the parameters in each cosmological model by minimizing the $\chi^{2}$ statistic. We also carry out Monte Carlo simulations based on these current SLSN Ic measurements to estimate how large the sample would have to be in order to rule out either model at a $\sim 99.7\%$ confidence level. The currently available sample indicates a likelihood of $\sim$$70-80\%$ that the $R_{\rm h}=ct$ Universe is the correct cosmology versus $\sim$$20-30\%$ for the standard model. These results are suggestive, though not yet compelling, given the current limited number of SLSN Ic's. We find that if the real cosmology is $\Lambda$CDM, a sample of $\sim$$240$ SLSN Ic's would be sufficient to rule out $R_{\rm h}=ct$ at this level of confidence, while $\sim$$480$ SLSN Ic's would be required to rule out $\Lambda$CDM if the real Universe is instead $R_{\rm h}=ct$. This difference in required sample size reflects the greater number of free parameters available to fit the data with $\Lambda$CDM. If such SLSN Ic's are commonly detected in the future, they could be a powerful tool for constraining the dark-energy equation of state in $\Lambda$CDM, and differentiating between this model and the $R_{\rm h}=ct$ Universe.
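As background on how such a comparison is set up (added here, not quoted from the paper), the two cosmologies enter the $\chi^2$ fit through different luminosity distances: in flat $\Lambda$CDM

$$d_L(z) = \frac{c\,(1+z)}{H_0} \int_0^z \frac{dz'}{\sqrt{\Omega_m (1+z')^3 + 1 - \Omega_m}},$$

while in the $R_{\rm h}=ct$ universe $d_L(z) = \frac{c\,(1+z)}{H_0}\,\ln(1+z)$, which has no free parameter beyond $H_0$; this difference in parameter count is what drives the sample-size asymmetry quoted above.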
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Chen. "Evaluating Time-varying Effect in Single-type and Multi-type Semi-parametric Recurrent Event Models." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/64371.

Full text
Abstract:
This dissertation aims to develop statistical methodologies for estimating the effects of time-fixed and time-varying factors in a recurrent-events modeling context. The research is motivated by the traffic safety research question of evaluating the influence of crashes on driving risk and driver behavior. The methodologies developed, however, are general and can be applied to other fields. Four alternative approaches based on various data settings are elaborated and applied to the 100-Car Naturalistic Driving Study in the following chapters. Chapter 1 provides a general introduction and background for each method, with a sketch of the 100-Car Naturalistic Driving Study. In Chapter 2, I assessed the impact of crashes on driving behavior by comparing the frequency of distraction events in pre-defined windows. A count-based approach based on mixed-effect binomial regression models was used. In Chapter 3, I introduced intensity-based recurrent event models by treating the number of Safety Critical Incidents and Near Crashes over time as a counting process. Recurrent event models fit the natural generation scheme of the data in this study. Four semi-parametric models are explored: the Andersen-Gill model, the Andersen-Gill model with stratified baseline functions, the frailty model, and the frailty model with stratified baseline functions. I derived the model estimation procedure and conducted model comparison via simulation and application. The recurrent event models in Chapter 3 are all based on the proportionality assumption, under which effects are constant. However, the change of effects over time is often of primary interest. In Chapter 4, I developed a time-varying coefficient model using penalized B-spline functions to approximate the varying coefficients. Shared frailty terms were used to incorporate correlation within subjects. Inference and statistical tests are also provided. A frailty representation was proposed to link the time-varying coefficient model with the regular frailty model. In Chapter 5, I further extended the framework to accommodate multi-type recurrent events with time-varying coefficients. Two types of recurrent-event models were developed. These models incorporate correlation among intensity functions from different types of events by correlated frailty terms. Chapter 6 gives a general review of the contributions of this dissertation and a discussion of future research directions.
Ph. D.
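As a brief sketch of the semi-parametric models named in Chapter 3 (standard forms, not quoted from the dissertation), the Andersen-Gill model specifies the recurrent-event intensity for subject $i$ as

$$\lambda_i(t) = Y_i(t)\,\lambda_0(t)\,\exp\{x_i(t)^{\top}\beta\},$$

where $\lambda_0$ is an unspecified baseline intensity and $Y_i(t)$ the at-risk indicator; the frailty variants multiply this intensity by a subject-specific random effect $u_i$, and the time-varying coefficient extension of Chapter 4 replaces the constant $\beta$ by $\beta(t)$, approximated there with penalized B-splines.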
APA, Harvard, Vancouver, ISO, and other styles
41

Klochan, Oleh V. Physics Faculty of Science UNSW. "Ballistic transport in one-dimensional p-type GaAs devices." Awarded by:University of New South Wales, 2007. http://handle.unsw.edu.au/1959.4/35186.

Full text
Abstract:
In this thesis we study GaAs one dimensional hole systems with strong spin-orbit interaction effects. The primary focus is the Zeeman splitting of 1D subbands in the two orthogonal in-plane magnetic field directions. We study two types of 1D hole systems based on different (311)A grown heterostructures: a modulation doped GaAs/AlGaAs square quantum well and an undoped induced GaAs/AlGaAs triangular quantum well. The results from the modulation doped 1D wire show enhanced anisotropy of the effective Lande g-factor for the two in-plane field directions (parallel and perpendicular to the wire), compared to that in 2D hole systems. This enhancement is explained by the confinement induced reorientation of the total angular momentum Ĵ from perpendicular to the 2D plane to in-plane and parallel to the wire. We use the intrinsic anisotropy of the in-plane g-factors to probe the 0.7 structure and the zero bias anomaly in 1D hole wires. We find that the behaviour of the 0.7 structure and the ZBA are correlated and depend strongly on the orientation of the in-plane field. This result proves the connection between the 0.7 structure and the ZBA and their relation to spin. We fabricate the first induced hole 1D wire with extremely stable gate characteristics and characterize this device. We also fabricate devices with two orthogonal induced hole wires on one chip, to study the interplay between the confinement, crystallographic anisotropy and spin-orbit coupling and their effect on the Zeeman splitting. We find that the ratios of the g-factors in the two orthogonal field directions for the two wires show opposite behaviour. We compare absolute values of the g-factors relative to the magnetic field direction. For B || [011] the g-factor is large for the wire along [011] and small for the wire along [233]. Whereas for B || [233], the g-factors are large irrespective of the wire direction. The former result can be explained by reorientation of Ĵ along the wire, and the latter by an additional off-diagonal Zeeman term, which leads to the out-of-plane component of Ĵ when B || [233], and as a result, to enhanced g-factors via increased exchange interactions.
APA, Harvard, Vancouver, ISO, and other styles
42

Jönsson, Johan. "Non-isotropic Cosmology in 1+3-formalism." Thesis, Linköpings universitet, Matematiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-113269.

Full text
Abstract:
Cosmology is an attempt to mathematically describe the behaviour of the universe; the most commonly used models are the Friedmann-Lemaître-Robertson-Walker solutions. These models seem to be accurate for an old universe which is homogeneous and has low anisotropy. However, for an earlier universe these models might not be that accurate, or even correct. The almost non-existent anisotropy observed today might have played a bigger role in the earlier universe. For this reason we will study another model, known as Bianchi Type I, where the universe is not necessarily isotropic. We utilize a 1+3-covariant formalism to obtain the equations that determine the behaviour of the universe and then use a tetrad formalism to complement the 1+3-covariant equations. Using these equations we examine the geometry of space-time and its dynamical properties. Finally we briefly discuss the different singularities possible and examine some special cases of geodesic movement.
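For context (standard background, not part of the abstract), a Bianchi type I universe has the spatially flat but anisotropic line element

$$ds^2 = -dt^2 + a_1^2(t)\,dx^2 + a_2^2(t)\,dy^2 + a_3^2(t)\,dz^2,$$

and in the 1+3 formalism its generalized Friedmann (Gauss) constraint reads, in units $8\pi G = c = 1$,

$$\tfrac{1}{3}\theta^2 = \sigma^2 + \mu + \Lambda,$$

where $\theta$ is the volume expansion, $\sigma^2 = \tfrac{1}{2}\sigma_{ab}\sigma^{ab}$ the shear scalar and $\mu$ the energy density; the isotropic FLRW case is recovered when the shear vanishes.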
APA, Harvard, Vancouver, ISO, and other styles
43

Ma, Ou. "Dynamics of serial-type robotic manipulators." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Karlén, Anne, and Hossein Nohrouzian. "Lattice approximations for Black-Scholes type models in Option Pricing." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-21951.

Full text
Abstract:
This thesis studies binomial and trinomial lattice approximations in Black-Scholes type option pricing models. Also, it covers the basics of these models and derivations of model parameters by several methods under different kinds of distributions. Furthermore, the convergence of the binomial model to the normal distribution, Geometric Brownian Motion and the Black-Scholes model is discussed. Finally, the connections and interrelations between discrete random variables under the lattice approach and continuous random variables under models which follow Geometric Brownian Motion are discussed, compared and contrasted.
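To make the lattice idea concrete, here is a minimal Python sketch of a Cox-Ross-Rubinstein binomial pricer for a European vanilla option (an illustrative example written for this listing, not code from the thesis; the parameters in the example call are arbitrary):

import math

def crr_binomial_price(S0, K, r, sigma, T, steps, call=True):
    """European option price on a Cox-Ross-Rubinstein binomial lattice."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral probability of an up move
    disc = math.exp(-r * dt)              # one-step discount factor

    # option values at the terminal nodes of the lattice
    values = [max((S0 * u**j * d**(steps - j)) - K, 0.0) if call
              else max(K - (S0 * u**j * d**(steps - j)), 0.0)
              for j in range(steps + 1)]

    # backward induction: discounted risk-neutral expectation at each earlier node
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1.0 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# Illustrative call (arbitrary parameters); the result approaches the
# Black-Scholes value as the number of steps grows.
print(crr_binomial_price(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, steps=500))

Increasing the number of steps makes the lattice price converge to the Black-Scholes price, which is the convergence behaviour discussed in the abstract above.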
APA, Harvard, Vancouver, ISO, and other styles
45

Clemence, Robert D. "A type calculus for mathematical programming modeling languages." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA238160.

Full text
Abstract:
Dissertation (Ph.D. in Operations Research)--Naval Postgraduate School, September 1990.
Dissertation supervisor: Bradley, Gordon H. "September 1990." Description based on title screen viewed on December 17, 2009. DTIC Descriptor(s): Mathematical models, sizes (dimensions), validation, models, programming languages, drug addiction, algebra, mathematical programming, language, junctions, calculus, homogeneity, integrated systems, mathematical logic. DTIC Identifier(s): Programming languages, mathematical models, calculus, linear programming. Author(s) subject terms: Data types, integrated modeling, linear programming, model validation, mathematical programming software, special purpose languages. Includes bibliographical references (p. 129-132). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Xiaohui. "Lasso-type sparse regression and high-dimensional Gaussian graphical models." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42271.

Full text
Abstract:
High-dimensional datasets, where the number of measured variables is larger than the sample size, are not uncommon in modern real-world applications such as functional Magnetic Resonance Imaging (fMRI) data. Conventional statistical signal processing tools and mathematical models could fail at handling those datasets. Therefore, developing statistically valid models and computationally efficient algorithms for high-dimensional situations are of great importance in tackling practical and scientific problems. This thesis mainly focuses on the following two issues: (1) recovery of sparse regression coefficients in linear systems; (2) estimation of high-dimensional covariance matrix and its inverse matrix, both subject to additional random noise. In the first part, we focus on the Lasso-type sparse linear regression. We propose two improved versions of the Lasso estimator when the signal-to-noise ratio is low: (i) to leverage adaptive robust loss functions; (ii) to adopt a fully Bayesian modeling framework. In solution (i), we propose a robust Lasso with convex combined loss function and study its asymptotic behaviors. We further extend the asymptotic analysis to the Huberized Lasso, which is shown to be consistent even if the noise distribution is Cauchy. In solution (ii), we propose a fully Bayesian Lasso by unifying discrete prior on model size and continuous prior on regression coefficients in a single modeling framework. Since the proposed Bayesian Lasso has variable model sizes, we propose a reversible-jump MCMC algorithm to obtain its numeric estimates. In the second part, we focus on the estimation of large covariance and precision matrices. In high-dimensional situations, the sample covariance is an inconsistent estimator. To address this concern, regularized estimation is needed. For the covariance matrix estimation, we propose a shrinkage-to-tapering estimator and show that it has attractive theoretic properties for estimating general and large covariance matrices. For the precision matrix estimation, we propose a computationally efficient algorithm that is based on the thresholding operator and Neumann series expansion. We prove that, the proposed estimator is consistent in several senses under the spectral norm. Moreover, we show that the proposed estimator is minimax in a class of precision matrices that are approximately inversely closed.
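For reference (standard definitions, not quoted from the thesis), the Lasso and the Huberized variant mentioned in the first part solve penalized problems of the form

$$\hat\beta_{\mathrm{lasso}} = \arg\min_{\beta}\ \tfrac{1}{2}\lVert y - X\beta\rVert_2^2 + \lambda \lVert\beta\rVert_1, \qquad \hat\beta_{\mathrm{huber}} = \arg\min_{\beta}\ \sum_{i=1}^{n} \rho_M\big(y_i - x_i^{\top}\beta\big) + \lambda \lVert\beta\rVert_1,$$

where $\rho_M$ is the Huber loss (quadratic for residuals smaller than a cut-off $M$, linear beyond it); replacing the squared loss by a robust loss is what allows consistency under heavy-tailed noise such as the Cauchy distribution discussed above.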
APA, Harvard, Vancouver, ISO, and other styles
47

Clements, David. "Supersymmetry and phenomenology of heterotic and type I superstring models." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Coen, Pietro G. "Mathematical models of Haemophilus influenzae type b and Neisseria meningitidis." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Yoong Keok. "Context-dependent type-level models for unsupervised morpho-syntactic induction." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97759.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 126-141).
This thesis improves unsupervised methods for part-of-speech (POS) induction and morphological word segmentation by modeling linguistic phenomena previously not used. For both tasks, we realize these linguistic intuitions with Bayesian generative models that first create a latent lexicon before generating unannotated tokens in the input corpus. Our POS induction model explicitly incorporates properties of POS tags at the type level which are not parameterized by existing token-based approaches. This enables our model to outperform previous approaches on a range of languages that exhibit substantial syntactic variation. In our morphological segmentation model, we exploit the fact that affixes are correlated within a word and between adjacent words. We surpass previous unsupervised segmentation systems on the Modern Standard Arabic Treebank data set. Finally, we showcase the utility of our unsupervised segmentation model for machine translation of Levantine dialectal Arabic, for which there is no known segmenter. We demonstrate that our segmenter outperforms supervised and knowledge-based alternatives.
by Yoong Keok Lee.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Patterson, Philip Edward. "Analysis of several modified webb type models of chemotherapy treatment." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 1992. http://digitalcommons.auctr.edu/dissertations/2826.

Full text
Abstract:
We consider modified versions of a cell population model for periodic chemotherapy treatment of tumors. These models contain both proliferating and quiescent cells, and treat the transitions between the two types of cells. Our models are defined by replacing the loss term function, μ(t, c), in Webb's model by a constant. This corresponds to having a constant infusion of the drug rather than a periodic chemotherapy treatment. We compare our results with those of Webb, who shows that shorter periods of treatment are advantageous.
APA, Harvard, Vancouver, ISO, and other styles