Dissertations / Theses on the topic 'Stochastic errors'


Consult the top 50 dissertations / theses for your research on the topic 'Stochastic errors.'


1

El, Arar El-Mehdi. "Stochastic models for the evaluation of numerical errors." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG104.

Full text
Abstract:
The idea of treating rounding errors as random variables is not new. Based on tools such as independent random variables or the central limit theorem, various works have demonstrated error bounds in O(√n). This thesis is dedicated to studying stochastic rounding (SR) as a replacement for the default deterministic rounding mode. First, we introduce a new approach to derive a probabilistic error bound in O(√n) based on variance calculation and the Bienaymé-Chebyshev inequality. Second, we develop a general framework that allows the probabilistic error analysis of algorithms under SR. In this context, we decompose the error into a martingale plus a drift. We show that the drift is zero for algorithms with multilinear errors, while the probabilistic analysis of the martingale term leads to probabilistic error bounds in O(√n). For the variance computation, we show that the drift is negligible at first order compared to the martingale term, and we prove probabilistic error bounds in O(√n).
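The stochastic rounding mode studied in this thesis can be sketched in a few lines: a value is rounded up with probability equal to its remaining fraction, so the rounding error has zero mean. A minimal illustration (the fixed-point precision and demo value are arbitrary choices, not taken from the thesis):

```python
import math
import random

def stochastic_round(x, bits):
    """Stochastically round x to `bits` fractional bits.

    Rounds up with probability equal to the remaining fraction,
    so the rounding is unbiased: E[stochastic_round(x)] == x.
    """
    scale = 2.0 ** bits
    scaled = x * scale
    lo = math.floor(scaled)
    frac = scaled - lo
    return (lo + (random.random() < frac)) / scale

random.seed(0)
samples = [stochastic_round(0.3, 4) for _ in range(100_000)]
# Each sample is one of the two neighbouring representable values,
# and the empirical mean stays close to the exact input 0.3.
mean = sum(samples) / len(samples)
```

In long accumulations this unbiasedness is what allows probabilistic O(√n) error bounds, in contrast to the O(n) worst-case growth of a deterministic rounding mode.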
2

Mårtensson, Jonas. "Geometric analysis of stochastic model errors in system identification." Doctoral thesis, KTH, Reglerteknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4506.

Full text
Abstract:
Models of dynamical systems are important in many disciplines of science, ranging from physics and traditional mechanical and electrical engineering to life sciences, computer science and economics. Engineers, for example, use models for development, analysis and control of complex technical systems. Dynamical models can be derived from physical insight, for example from known laws of nature (which are themselves models), or, as considered here, by fitting unknown model parameters to measurements from an experiment. The latter approach is what we call system identification. A model is always (at best) an approximation of the true system, and for a model to be useful, we need some characterization of how large the model error is. In this thesis we consider model errors originating from stochastic (random) disturbances that the system was subject to during the experiment. Stochastic model errors, known as variance-errors, are usually analyzed under the assumption of an infinite number of data. In this context the variance-error can be expressed as a (complicated) function of the spectra (and cross-spectra) of the disturbances and the excitation signals, a description of the true system, and the model structure (i.e., the parametrization of the model). The primary contribution of this thesis is an alternative geometric interpretation of this expression. This geometric approach consists of viewing the asymptotic variance as an orthogonal projection onto a vector space that is largely defined by the model structure. This approach is useful in several ways. Primarily, it facilitates structural analysis of how, for example, model structure, model order and possible feedback mechanisms affect the variance-error. Moreover, simple upper bounds on the variance-error can be obtained which are independent of the employed model structure.
The accuracy of estimated poles and zeros of linear time-invariant systems can also be analyzed using results closely related to the approach described above. One fundamental conclusion is that the accuracy of estimates of unstable poles and zeros is little affected by the model order, while the accuracy deteriorates quickly with model order for stable poles and zeros. The geometric approach has also shown potential in input design, which concerns how the excitation (input) signal should be chosen to yield informative experiments. For example, we show cases where the input signal can be chosen so that the variance-error does not depend on the model order or the model structure. Perhaps the most important contribution of this thesis, and of the geometric approach, is the analysis method as such. Hopefully the methodology presented in this work will be useful in future research on the accuracy of identified models, in particular nonlinear models and models with multiple inputs and outputs, for which there are relatively few results at present.
3

Mårtensson, Jonas. "Geometric analysis of stochastic model errors in system identification /." Stockholm : Elektro- och systemteknik, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4506.

Full text
4

Rama, Vishal. "Estimating stochastic volatility models with student-t distributed errors." Master's thesis, Faculty of Science, 2020. http://hdl.handle.net/11427/32390.

Full text
Abstract:
This dissertation extends the idea of Bollerslev (1987), estimating ARCH models with Student-t distributed errors, to estimating stochastic volatility (SV) models with Student-t distributed errors. It is unclear whether Gaussian distributed errors sufficiently account for the observed leptokurtosis in financial time series, hence the extension to examine Student-t distributed errors for these models. The quasi-maximum likelihood estimation approach introduced by Harvey (1989) and the conventional Kalman filter technique are described so that the SV model with Gaussian distributed errors and the SV model with Student-t distributed errors can be estimated. Estimation of GARCH(1,1) models by maximum likelihood is also described. The empirical study estimated four models using data on four share return series and one index return series: Anglo American, BHP, FirstRand, Standard Bank Group and the JSE Top 40 index. The GARCH and SV models with Student-t distributed errors both perform best on the series examined in this dissertation, where the metric used to determine the best performing model was the Akaike information criterion (AIC).
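As a sketch of the data-generating process compared in the dissertation, the following simulates an SV model with Student-t observation errors; the AR(1) log-volatility parameterization is standard, but all numeric values here are illustrative assumptions, not the dissertation's estimates:

```python
import numpy as np

def simulate_sv_t(n, mu=-1.0, phi=0.95, sigma=0.2, nu=5, seed=0):
    """Simulate y_t = exp(h_t / 2) * eps_t with eps_t ~ Student-t(nu)
    and AR(1) log-volatility h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.standard_normal()
    eps = rng.standard_t(nu, size=n)
    return np.exp(h / 2) * eps, h

# 20,000 simulated returns with heavy-tailed measurement errors
y, h = simulate_sv_t(20_000)
```

The heavy-tailed t errors generate the leptokurtosis that a Gaussian SV model may understate; fitting both variants to the same series and comparing AIC, as in the dissertation, discriminates between them.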
5

Ketata, Chefi. "Knowledge-assisted stochastic evaluation of sampling errors in mineral processing streams." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq39321.pdf.

Full text
6

Van, Langenhove Jan Willem. "Adaptive control of deterministic and stochastic approximation errors in simulations of compressible flow." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066357/document.

Full text
Abstract:
The simulation of complex nonlinear engineering systems such as compressible fluid flows may be targeted to make the approximation of a specific (scalar) quantity of interest of the system more efficient and accurate. Putting aside modeling error and parametric uncertainty, this may be achieved by combining goal-oriented error estimates and adaptive anisotropic spatial mesh refinements. To this end, an elegant and efficient framework is that of (Riemannian) metric-based adaptation, where a goal-based a priori error estimate is used as the indicator for adaptivity. This thesis proposes a novel extension of this approach to system approximations bearing a stochastic component. In this case, an optimisation problem leading to the best control of the distinct sources of error is formulated in the continuous framework of the Riemannian metric space. Algorithmic developments are also presented in order to quantify and adaptively adjust the error components in the deterministic and stochastic approximation spaces. The capability of the proposed method is tested on various problems, including a supersonic scramjet inlet subject to geometrical and operational parametric uncertainties. It is demonstrated to accurately capture discontinuous features of stochastic compressible flows impacting pressure-related quantities of interest, while balancing computational budget and refinements in both spaces.
7

Wall, John H. "A study of the effects of stochastic inertial sensor errors in dead-reckoning navigation." Auburn, Ala., 2007. http://repo.lib.auburn.edu/07M%20Theses/WALL_JOHN_59.pdf.

Full text
8

Everitt, Niklas. "Identification of Modules in Acyclic Dynamic Networks: A Geometric Analysis of Stochastic Model Errors." Licentiate thesis, KTH, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-159698.

Full text
9

Nguyen, Ngoc B. "Estimation of Technical Efficiency in Stochastic Frontier Analysis." Bowling Green State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1275444079.

Full text
10

Bottegal, Giulio. "Modeling, estimation and identification of stochastic systems with latent variables." Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423358.

Full text
Abstract:
The main topic of this thesis is the analysis of static and dynamic models in which some variables, although directly influencing the behavior of certain observables, are not accessible to measurement. These models find applications in many branches of science and engineering, such as control systems, communications, natural and biological sciences and econometrics. It is well known that models with unaccessible - or latent - variables usually suffer from a lack of uniqueness of representation. In other words, there are in general many models of the same type describing a given set of observables, say, the measurable input-output variables. This is well known and has been well studied for a special class of linear models, called state-space models. In this thesis we focus on two particular classes of stochastic systems with latent variables: generalized factor analysis models and errors-in-variables models. For these classes of models there are still unresolved issues related to non-uniqueness of representation, and clarifying these issues is of paramount importance for their identification. Since mathematical models usually need to be estimated from experimental data, solving the non-uniqueness problem is essential for their use in statistical inference (system identification) from measured data.
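The identifiability problem in errors-in-variables models has a classic concrete symptom: least squares on a noise-corrupted regressor is biased toward zero, and second-order statistics alone cannot separate the true gain from the noise variances. A small numerical illustration (the gain and all variances are made-up values):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
a_true = 2.0
x = rng.standard_normal(n)                     # latent input, variance 1
y = a_true * x + 0.1 * rng.standard_normal(n)  # output with small noise
x_obs = x + rng.standard_normal(n)             # input observed with unit-variance noise

# Least squares on the observed input suffers attenuation bias:
# E[a_ls] ~ a_true * var(x) / (var(x) + var(input noise)) = 2 * 1/2 = 1
a_ls = x_obs @ y / (x_obs @ x_obs)
```

Many pairs of (gain, noise variance) reproduce the same observed covariances, which is exactly the non-uniqueness of representation the thesis addresses.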
11

Savanhu, Richard. "Bayesian estimation of stochastic volatility models with fat tails and correlated errors applied to the South African financial market." Master's thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/11085.

Full text
Abstract:
Includes bibliographical references (leaves 39-40). In this study we apply Markov chain Monte Carlo methods in the Bayesian framework to estimate stochastic volatility models using South African financial market data. A single-move Gibbs sampler is used to sample parameters from the posterior distribution. Volatility is used as a measure of an asset's risk. It is particularly important in risk management, derivatives pricing and portfolio selection. When pricing derivatives it is important to quote the correct volatility trading in the market, hence there is a need for good estimates of volatility. To capture the stylised facts about asset returns, we used the model extended for fat tails and correlated errors. To support this model against the basic model of Taylor (1986), we computed the Bayes factors of Jacquier, Polson and Rossi (2004). The extended model was found to be far superior to the basic model.
12

Genova, Barazarte Ezequiel. "Stochastic modeling of the variation of velocity and permeability as a function of effective pressure using the Bed-of-Nails asperity-deformation model." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1525.

Full text
13

Hotz-Behofsits, Christian, Florian Huber, and Thomas Zörner. "Predicting crypto-currencies using sparse non-Gaussian state space models." Wiley, 2018. http://dx.doi.org/10.1002/for.2524.

Full text
Abstract:
In this paper we forecast daily returns of crypto-currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series like rapid changes in the conditional variance, non-normality of the measurement errors and sharply increasing trends, we develop a time-varying parameter VAR with t-distributed measurement errors and stochastic volatility. To control for overparameterization, we rely on the Bayesian literature on shrinkage priors that enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data we perform a real-time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models we moreover run a simple trading exercise.
14

Irshad, Yasir. "On some continuous-time modeling and estimation problems for control and communication." Doctoral thesis, Karlstads universitet, Institutionen för ingenjörsvetenskap och fysik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-26129.

Full text
Abstract:
The scope of the thesis is to estimate the parameters of continuous-time models used within control and communication from sampled data with high accuracy and in a computationally efficient way. In the thesis, continuous-time models of systems controlled in a networked environment, errors-in-variables systems, stochastic closed-loop systems, and wireless channels are considered. The parameters of a transfer-function-based model for the process in a networked control system are estimated by a covariance-function-based approach relying upon the second-order statistical properties of input and output signals. Some other approaches for estimating the parameters of continuous-time models for processes in networked environments are also considered. The multiple-input multiple-output errors-in-variables problem is solved by means of a covariance matching algorithm. An analysis of a covariance matching method for single-input single-output errors-in-variables system identification is also presented. The parameters of continuous-time autoregressive exogenous models are estimated from closed-loop filtered data, where the controllers in the closed loop are of proportional and proportional-integral type, and where the closed loop also contains a time delay. A stochastic differential equation is derived for Jakes's wireless channel model, describing the dynamics of a scattered electric field with the moving receiver incorporating a Doppler shift.

The thesis consists of five main parts, where the first part is an introduction. Parts II-V are based on the following articles:

Part II - Networked Control Systems
1. Y. Irshad, M. Mossberg and T. Söderström. System identification in a networked environment using second order statistical properties. A version without all appendices is published in Automatica, 49(2), pages 652–659, 2013. Some preliminary results are also published as M. Mossberg, Y. Irshad and T. Söderström. A covariance function based approach to networked system identification. In Proc. 2nd IFAC Workshop on Distributed Estimation and Control in Networked Systems, pages 127–132, Annecy, France, September 13–14, 2010.
2. Y. Irshad and M. Mossberg. Some parameter estimation methods applied to networked control systems. A journal submission is made. Some preliminary results are published as Y. Irshad and M. Mossberg. A comparison of estimation concepts applied to networked control systems. In Proc. 19th Int. Conf. on Systems, Signals and Image Processing, pages 120–123, Vienna, Austria, April 11–13, 2012.

Part III - Errors-in-Variables Identification
3. Y. Irshad and M. Mossberg. Continuous-time covariance matching for MIMO EIV system identification. A journal submission is made.
4. T. Söderström, Y. Irshad, M. Mossberg and W. X. Zheng. On the accuracy of a covariance matching method for continuous-time EIV identification. Provisionally accepted for publication in Automatica. Some preliminary results are published as T. Söderström, Y. Irshad, M. Mossberg and W. X. Zheng. Accuracy analysis of a covariance matching method for continuous-time errors-in-variables system identification. In Proc. 16th IFAC Symp. System Identification, pages 1383–1388, Brussels, Belgium, July 11–13, 2012.

Part IV - Wireless Channel Modeling
5. Y. Irshad and M. Mossberg. Wireless channel modeling based on stochastic differential equations. Some results are published as M. Mossberg and Y. Irshad. A stochastic differential equation for wireless channels based on Jakes's model with time-varying phases. In Proc. 13th IEEE Digital Signal Processing Workshop, pages 602–605, Marco Island, FL, January 4–7, 2009.

Part V - Closed-Loop Identification
6. Y. Irshad and M. Mossberg. Closed-loop identification of P- and PI-controlled time-delayed stochastic systems. Some results are published as M. Mossberg and Y. Irshad. Closed-loop identification of stochastic models from filtered data. In Proc. IEEE Multi-conference on Systems and Control, San Antonio, TX, September 3–5, 2008.
15

Psychou, Georgia. "Stochastic Approaches for Speeding-Up the Analysis of the Propagation of Hardware-Induced Errors and Characterization of System-Level Mitigation Schemes in Digital Communication Systems." Supervised by Tobias G. Noll, Holger Blume and Tobias Gemmeke. Aachen: Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1162503351/34.

Full text
16

Ivanis, Predrag, and Bane Vasic. "Error Errore Eicitur: A Stochastic Resonance Paradigm for Reliable Storage of Information on Unreliable Media." IEEE, 2016. http://hdl.handle.net/10150/621739.

Full text
Abstract:
We give an architecture of a storage system consisting of a storage medium made of unreliable memory elements and an error correction circuit made of a combination of noisy and noiseless logic gates that is capable of retaining the stored information with a lower probability of error than a storage system with a correction circuit made entirely of noiseless logic gates. Our correction circuit is based on iterative decoding of low-density parity-check codes, and uses the positive effect of errors in logic gates to correct errors in memory elements. In the spirit of Marcus Tullius Cicero's Clavus clavo eicitur (one nail drives out another), the proposed storage system operates on the principle error errore eicitur: one error drives out another. The randomness present in the logic gates makes this class of decoders superior to their noiseless counterparts. Moreover, random perturbations do not require any additional computational resources, as they are inherent to the unreliable hardware itself. To utilize the benefits of logic gate failures, our correction circuit relies on two key novelties: a mixture of reliable and unreliable gates, and decoder rewinding. We present a method based on absorbing Markov chains for the probability-of-error analysis, and explain how the randomness in the variable and check node update functions helps a decoder escape local minima associated with trapping sets.
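The paper's setting (noisy LDPC decoders with rewinding) is far richer, but its basic premise, that a correction circuit built from unreliable gates can still reduce the storage error rate, can be checked in a toy model: five unreliable memory cells protected by a majority vote whose voting gate itself sometimes fails. The failure rates below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 100_000
p_mem, p_gate = 0.10, 0.01   # memory-cell and voting-gate failure probabilities

# True = the cell holds the wrong value after storage.
flipped = rng.random((trials, 5)) < p_mem
# A noiseless majority vote over 5 cells is wrong iff >= 3 cells flipped ...
maj_wrong = flipped.sum(axis=1) >= 3
# ... and the unreliable gate inverts its own output with probability p_gate.
gate_fail = rng.random(trials) < p_gate
corrected_error = (maj_wrong ^ gate_fail).mean()
# corrected_error is roughly 0.02, well below the raw cell error rate of 0.10
```

Even with a faulty voting gate, the corrected bit is an order of magnitude more reliable than a raw cell, which is the qualitative claim of the architecture above.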
17

Unsal, Derya. "Estimation Of Deterministic And Stochastic Imu Error Parameters." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614059/index.pdf.

Full text
Abstract:
Inertial measurement units (IMUs), the main component of a navigation system, are used in many systems today. An IMU's main components, gyroscopes and accelerometers, can be produced at lower cost and in higher quantity. Together with the decrease in production cost, however, the performance of these sensors degrades. In order to improve the performance of an IMU, error compensation algorithms come into question, and several algorithms have been designed. Inertial sensors contain two main types of errors: deterministic errors such as scale factor, bias and misalignment, and stochastic errors such as bias instability and scale factor instability. Deterministic errors are the main part of error compensation algorithms. This thesis explains how the deterministic errors are identified from 27-state static and 60-state dynamic rate table calibration test data, and how those errors are used in the error compensation model. In addition, the stochastic error parameters, gyroscope and accelerometer bias instability, are modeled with a Gauss-Markov model, and instantaneous sensor bias instability values are estimated by a Kalman filter algorithm. Therefore, accelerometer and gyroscope bias instability can be compensated in real time. In conclusion, this thesis explores how IMU performance is improved by compensating the deterministic and stochastic errors. The simulation results are supported by real IMU test data.
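The stochastic part of the compensation described above pairs a first-order Gauss-Markov bias model with a Kalman filter. A self-contained scalar sketch follows; the correlation time, noise variances and direct-observation setup are invented for illustration (a real IMU calibration would supply these from Allan variance analysis):

```python
import numpy as np

def kalman_gm_bias(z, dt=0.01, tau=100.0, q=1e-4, r=1e-2):
    """Scalar Kalman filter tracking a first-order Gauss-Markov bias
    b_k = exp(-dt/tau) * b_{k-1} + w_k, observed as z_k = b_k + v_k,
    with driving-noise variance q and measurement-noise variance r."""
    phi = np.exp(-dt / tau)
    b_hat, P = 0.0, 1.0
    est = np.empty(len(z))
    for k, zk in enumerate(z):
        b_hat, P = phi * b_hat, phi * phi * P + q          # predict
        K = P / (P + r)                                    # Kalman gain
        b_hat, P = b_hat + K * (zk - b_hat), (1 - K) * P   # update
        est[k] = b_hat
    return est

rng = np.random.default_rng(3)
n = 5_000
phi = np.exp(-0.01 / 100.0)
b = np.empty(n)
b[0] = 0.0
for k in range(1, n):                                  # simulate the true bias
    b[k] = phi * b[k - 1] + np.sqrt(1e-4) * rng.standard_normal()
z = b + np.sqrt(1e-2) * rng.standard_normal(n)         # noisy bias observations
est = kalman_gm_bias(z)
rmse_raw = np.sqrt(np.mean((z - b) ** 2))              # ~0.1, the raw noise level
rmse_est = np.sqrt(np.mean((est - b) ** 2))            # substantially smaller
```

The filtered estimate tracks the slowly wandering bias much more closely than the raw measurements, which is what enables real-time bias compensation.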
18

Chen, Peng-Yu. "Experimental Assessment of MEMS INS Stochastic Error Model." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420709961.

Full text
19

Pellissetti, Manuel F. "On estimating the error in stochastic model-based predictions." Available to US Hopkins community, 2003. http://wwwlib.umi.com/dissertations/dlnow/3080743.

Full text
20

Yeh, Chen-Chu Alex. "Minimum-error-probability equalization and multi-user detection." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/12994.

Full text
21

Haessig, Pierre. "Dimensionnement et gestion d’un stockage d’énergie pour l'atténuation des incertitudes de production éolienne." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0030/document.

Full text
Abstract:
Le contexte de nos travaux de thèse est l'intégration de l'énergie éolienne sur les réseaux insulaires. Ces travaux sont soutenus par EDF SEI, l'opérateur électrique des îles françaises. Nous étudions un système éolien-stockage où un système de stockage d'énergie doit aider un producteur éolien à tenir, vis-à-vis du réseau, un engagement de production pris un jour à l'avance. Dans ce contexte, nous proposons une démarche pour l'optimisation du dimensionnement et du contrôle du système de stockage (gestion d'énergie). Comme les erreurs de prévision J+1 de production éolienne sont fortement incertaines, la gestion d'énergie du stockage est un problème d'optimisation stochastique (contrôle optimal stochastique). Pour le résoudre, nous étudions tout d'abord la modélisation des composants du système (modélisation énergétique du stockage par batterie Li-ion ou Sodium-Soufre) ainsi que des entrées (modélisation temporelle stochastique des entrées incertaines). Nous discutons également de la modélisation du vieillissement du stockage, sous une forme adaptée à l'optimisation de la gestion. Ces modèles nous permettent d'optimiser la gestion de l'énergie par la méthode de la programmation dynamique stochastique (SDP). Nous discutons à la fois de l'algorithme et de ses résultats, en particulier de l'effet de la forme des pénalisations sur la loi de gestion. Nous présentons également l'application de la SDP sur des problèmes complémentaires de gestion d'énergie (lissage de la production d'un houlogénérateur, limitation des rampes de production éolienne). Cette étude de l'optimisation de la gestion permet d'aborder l'optimisation du dimensionnement (choix de la capacité énergétique). Des simulations temporelles stochastiques mettent en évidence le fort impact de la structure temporelle (autocorrélation) des erreurs de prévision sur le besoin en capacité de stockage pour atteindre un niveau de performance donné. 
La prise en compte de paramètres de coût permet ensuite l'optimisation du dimensionnement d'un point de vue économique, en considérant les coûts de l'investissement, des pertes ainsi que du vieillissement. Nous étudions également le dimensionnement du stockage lorsque la pénalisation des écarts à l'engagement comporte un seuil de tolérance. Nous terminons ce manuscrit en abordant la question structurelle de l'interaction entre l'optimisation du dimensionnement et celle du contrôle d'un système de stockage, car ces deux problèmes d'optimisation sont couplés<br>The context of this PhD thesis is the integration of wind power into the electricity grid of small islands. This work is supported by EDF SEI, the system operator for French islands. We study a wind-storage system where an energy storage is meant to help a wind farm operator fulfill a day-ahead production commitment to the grid. Within this context, we propose an approach for the optimization of the sizing and the control of the energy storage system (energy management). Because day-ahead wind power forecast errors are a major source of uncertainty, the energy management of the storage is a stochastic optimization problem (stochastic optimal control). To solve this problem, we first study the modeling of the components of the system. This include energy-based models of the storage system, with a focus on Lithium-ion and Sodium-Sulfur battery technologies. We then model the system inputs and in particular the stochastic time series like day-ahead forecast errors. We also discuss the modeling of storage aging, using a formulation which is adapted to the control optimization. Assembling all these models enables us to optimize the energy management of the storage system using the stochastic dynamic programming (SDP) method. We introduce the SDP algorithms and present our optimization results, with a special interest for the effect of the shape of the penalty function on the energy control law. 
We also present additional energy management applications with SDP (mitigation of wind power ramps and smoothing of ocean wave power). Having optimized the storage energy management, we address the optimization of the storage sizing (choice of the rated energy). Stochastic time series simulations show that the temporal structure (autocorrelation) of wind power forecast errors has a major impact on the storage capacity needed to reach a given performance level. Then we combine simulation results with cost parameters, including investment, losses and aging costs, to build an economic cost function for sizing. We also study storage sizing when the penalization of commitment deviations includes a tolerance threshold. We finish this manuscript with a structural study of the interaction between the optimization of the sizing and that of the control of an energy storage system, because these two optimization problems are coupled.
APA, Harvard, Vancouver, ISO, and other styles
22

Ghazali, Saadia. "The global error in weak approximations of stochastic differential equations." Thesis, Imperial College London, 2007. http://hdl.handle.net/10044/1/1260.

Full text
Abstract:
In this thesis, the convergence analysis of a class of weak approximations of solutions of stochastic differential equations is presented. This class includes recent approximations such as Kusuoka’s moment similar families method and the Lyons-Victoir cubature on Wiener space approach. It is shown that the rate of convergence depends intrinsically on the smoothness of the chosen test function. For smooth functions (the required degree of smoothness depends on the order of the approximation), an equidistant partition of the time interval on which the approximation is sought is optimal. For functions that are less smooth, for example Lipschitz functions, the rate of convergence decays and the optimal partition is no longer equidistant. An asymptotic rate of convergence is also established for the Lyons-Victoir method. The analysis rests upon Kusuoka-Stroock’s results on the smoothness of the distribution of the solution of a stochastic differential equation. Finally, the results are applied to the numerical solution of the filtering problem and the pricing of Asian options.
APA, Harvard, Vancouver, ISO, and other styles
23

Shen, Wei. "Stochastic gradient descent for pairwise learning : stability and optimization error." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/700.

Full text
Abstract:
In this thesis, we study the stability and its trade-off with optimization error for stochastic gradient descent (SGD) algorithms in the pairwise learning setting. Pairwise learning refers to a learning task which involves a loss function depending on pairs of instances among which notable examples are bipartite ranking, metric learning, area under ROC curve (AUC) maximization and minimum error entropy (MEE) principle. Our contribution is twofold. Firstly, we establish the stability results for SGD for pairwise learning in the convex, strongly convex and non-convex settings, from which generalization errors can be naturally derived. Moreover, we also give the stability results of buffer-based SGD and projected SGD. Secondly, we establish the trade-off between stability and optimization error of SGD algorithms for pairwise learning. This is achieved by lower-bounding the sum of stability and optimization error by the minimax statistical error over a prescribed class of pairwise loss functions. From this fundamental trade-off, we obtain lower bounds for the optimization error of SGD algorithms and the excess expected risk over a class of pairwise losses. In addition, we illustrate our stability results by giving some specific examples and experiments of AUC maximization and MEE.
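For readers unfamiliar with the pairwise setting, the kind of update studied in this thesis can be illustrated with a small self-contained sketch. The data, the squared pairwise hinge loss and the fixed step size below are invented for illustration and are not taken from the thesis:

```python
import random

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Synthetic 2-D data: the two classes are shifted Gaussians (invented).
pos = [[random.gauss(1.0, 1.0), random.gauss(1.0, 1.0)] for _ in range(50)]
neg = [[random.gauss(-1.0, 1.0), random.gauss(-1.0, 1.0)] for _ in range(50)]

w = [0.0, 0.0]   # linear scoring function x -> dot(w, x)
eta = 0.05       # fixed step size; the theory analyses decaying schedules

# Pairwise SGD: each step draws one positive/negative pair and descends
# the squared pairwise hinge loss (1 - dot(w, x_pos - x_neg))^2.
for _ in range(2000):
    xp, xn = random.choice(pos), random.choice(neg)
    d = [a - b for a, b in zip(xp, xn)]
    margin = dot(w, d)
    if margin < 1.0:
        grad_coeff = -2.0 * (1.0 - margin)   # derivative of the loss in the margin
        w = [wi - eta * grad_coeff * di for wi, di in zip(w, d)]

# Empirical AUC: fraction of positive/negative pairs ranked correctly.
auc = sum(dot(w, p) > dot(w, n) for p in pos for n in neg) / (len(pos) * len(neg))
print(round(auc, 2))
```

AUC maximization is one of the pairwise objectives named in the abstract; because each step touches a pair of instances rather than a single one, the stability analysis differs from the standard pointwise SGD theory.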
APA, Harvard, Vancouver, ISO, and other styles
24

Moon, Kyoung-Sook. "Adaptive Algorithms for Deterministic and Stochastic Differential Equations." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Gerencsér, Máté. "Stochastic PDEs with extremal properties." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20445.

Full text
Abstract:
We consider linear and semilinear stochastic partial differential equations that in some sense can be viewed as being at the "endpoints" of the classical variational theory by Krylov and Rozovskii [25]. In terms of regularity of the coefficients, the minimal assumption is boundedness and measurability, and a unique L2-valued solution is then readily available. We investigate its further properties, such as higher order integrability, boundedness, and continuity. The other class of equations considered here are the ones whose leading operators do not satisfy the strong coercivity condition, but only a degenerate version of it, and therefore are not covered by the classical theory. We derive solvability in W^m_p spaces and also discuss their numerical approximation through finite difference schemes.
APA, Harvard, Vancouver, ISO, and other styles
26

Vavruška, Marek. "Realised stochastic volatility in practice." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165381.

Full text
Abstract:
The Realised Stochastic Volatility model of Koopman and Scharth (2011) is applied to five stocks listed on the NYSE in this thesis. The aim of this thesis is to investigate the effect of speeding up trade data processing by skipping the cleaning rule that requires the quote data. The framework of the Realised Stochastic Volatility model allows the realised measures to be biased estimates of the integrated volatility, which further supports this approach. The number of errors in recorded trades has decreased significantly during the past years. Different sample lengths were used to construct one-day-ahead forecasts of realised measures to examine the sensitivity of forecast precision to the rolling window length. Use of the longest window length does not lead to the lowest mean square error. The dominance of the Realised Stochastic Volatility model in terms of the lowest mean square errors of one-day-ahead out-of-sample forecasts has been confirmed.
APA, Harvard, Vancouver, ISO, and other styles
27

Agrawal, S., S. G. Dhande, K. Deb, D. J. De Beer, and M. Truscott. "Synthesis of mechanical error in rapid prototyping processes using stochastic approach." Journal for New Generation Sciences, Vol 3, Issue 1: Central University of Technology, Free State, Bloemfontein, 2005. http://hdl.handle.net/11462/464.

Full text
Abstract:
Published Article<br>A synthesis procedure for allocating tolerances and clearances in rapid prototyping (RP) processes has been developed, using a unified method based on stochastic approach, as developed by the authors, to study the mechanical error in RP processes. The tolerances and clearances that cause mechanical error have been assumed to be random variables, and are optimally allocated so as to restrict the mechanical error within the specified limits. Using the synthesis procedure, the allocation is done for the Fused Deposition Modeling (FDM) and the Stereolithography (SL) processes.
APA, Harvard, Vancouver, ISO, and other styles
28

Delgado-Loperena, Dharma. "A stochastic dynamic model for human error analysis in nuclear power plants /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3137693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Waddington, Jonathan. "Human optokinetic nystagmus : a stochastic analysis." Thesis, University of Plymouth, 2012. http://hdl.handle.net/10026.1/1040.

Full text
Abstract:
Optokinetic nystagmus (OKN) is a fundamental gaze-stabilising response in which eye movements attempt to compensate for the retinal slip caused by self-motion. The OKN response consists of a slow following movement made in the direction of stimulus motion interrupted by fast eye movements that are primarily made in the opposite direction. The timing and amplitude of these slow phases (SPs) and quick phases (QPs) are notably variable, but this variability is poorly understood. In this study I performed principal component analysis on OKN parameters in order to investigate how the eigenvectors and eigenvalues of the underlying components contribute to the correlation between OKN parameters over time. I found three categories of principal components that could explain the variance within each cycle of OKN, and only parameters from within a single cycle contributed highly to any given component. Differences found in the correlation matrices of OKN parameters appear to reflect changes in the eigenvalues of components, while eigenvectors remain predominantly similar across participants and trials. I have developed a linear and stochastic model of OKN based on these results and demonstrated that OKN can be described as a first-order Markov process, with three sources of noise affecting SP velocity, QP triggering, and QP amplitude. I have used this model to make some important predictions about the optokinetic reflex: the transient response of SP velocity, the existence of signal-dependent noise in the system, the target position of QPs, and the threshold at which QPs are generated. Finally, I investigate whether the significant variability within OKN may represent adaptive control of explicit and implicit parameters.
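As a loose caricature of the model class described in this abstract, a first-order Markov cycle with noise acting on SP velocity, QP triggering and QP amplitude, the following sketch simulates a sawtooth OKN trace. All parameter values are invented for the sketch, not taken from the thesis:

```python
import random

random.seed(5)

# All parameter values below are invented for illustration.
stim_v = 10.0     # stimulus velocity (deg/s) followed by the slow phase
dt = 0.01         # time step (s)
threshold = 4.0   # eye eccentricity (deg) around which a quick phase fires

pos, qp_count = 0.0, 0
for _ in range(3000):                                  # 30 s of simulated OKN
    pos += (stim_v + random.gauss(0.0, 1.0)) * dt      # noisy SP velocity
    if pos > threshold + random.gauss(0.0, 0.3):       # noisy QP triggering
        pos -= 5.0 + random.gauss(0.0, 0.5)            # noisy QP amplitude
        qp_count += 1

print(qp_count)   # roughly stim_v / (5 deg per cycle) -> about 2 QPs per second
```

Each pass through the loop depends only on the current eye position, which is what makes the process first-order Markov.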
APA, Harvard, Vancouver, ISO, and other styles
30

Perez, Rafael A. "Uncertainty Analysis of Computational Fluid Dynamics Via Polynomial Chaos." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28984.

Full text
Abstract:
The main limitations in performing uncertainty analysis of CFD models using conventional methods are associated with cost and effort. For these reasons, there is a need for the development and implementation of efficient stochastic CFD tools for performing uncertainty analysis. One of the main contributions of this research is the development and implementation of Intrusive and Non-Intrusive methods using polynomial chaos for uncertainty representation and propagation. In addition, a methodology was developed to address and quantify turbulence model uncertainty. In this methodology, a complex perturbation is applied to the incoming turbulence and closure coefficients of a turbulence model to obtain the sensitivity derivatives, which are used in concert with the polynomial chaos method for uncertainty propagation of the turbulence model outputs.<br>Ph. D.
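The non-intrusive polynomial chaos idea mentioned here can be sketched in a few lines: project a model output onto an orthogonal polynomial basis and read the mean and variance off the coefficients. The toy model u(xi) = (1 + xi)^2 below is chosen for the sketch and is not from the dissertation:

```python
import math

# 3-point Gauss-Hermite rule for the standard normal weight (probabilists'
# convention): nodes +/- sqrt(3) and 0, weights 1/6, 2/3, 1/6.
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def u(xi):
    # Toy model output with a standard normal input xi (invented).
    return (1.0 + xi) ** 2

he = [lambda t: 1.0, lambda t: t, lambda t: t * t - 1.0]   # He0, He1, He2
norms = [1.0, 1.0, 2.0]                                    # E[He_k(xi)^2]

# Non-intrusive spectral projection: c_k = E[u(xi) He_k(xi)] / E[He_k^2].
c = []
for k in range(3):
    num = sum(wq * u(t) * he[k](t) for t, wq in zip(nodes, weights))
    c.append(num / norms[k])

mean = c[0]
variance = sum(norms[k] * c[k] ** 2 for k in range(1, 3))
print(round(mean, 6), round(variance, 6))   # exact answers: 2.0 and 6.0
```

Because the integrands are low-degree polynomials, the 3-point quadrature is exact here; for a CFD model the same projection would be computed from a modest number of deterministic solver runs at the quadrature nodes.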
APA, Harvard, Vancouver, ISO, and other styles
31

Thompson, Mery H. "Optimum experimental designs for models with a skewed error distribution with an application to stochastic frontier models /." Connect to e-thesis, 2008. http://theses.gla.ac.uk/236/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2008.<br>Ph.D. thesis submitted to the Faculty of Information and Mathematical Sciences, Department of Statistics, 2008. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
32

He, Niao. "Saddle point techniques in convex composite and error-in-measurement optimization." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54400.

Full text
Abstract:
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theories and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects -- i) Large-scale Convex Composite Optimization and ii) Error-in-Measurement Optimization. In spite of the different natures of these two topics, the common denominator, to be presented, lies in their accommodation of the systematic use of saddle point techniques for mathematical modeling and numerical processing. The main body can be split into three parts. In the first part, we consider a broad class of variational inequalities with composite structures, which covers the saddle point/variational analogues of classical convex composite minimization (i.e. the sum of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving this type of problem. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of an imprecise prox mapping as well as the case when the operator is represented by an unbiased stochastic oracle. In the second part, we investigate four general types of large-scale convex composite optimization problems, including (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum likelihood Poisson imaging. 
We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all these optimization problems with the best rates of convergence known so far. Our main related contributions are as follows. Firstly, regarding problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Secondly, regarding problems of type (b), we develop a novel algorithm reducing the problem to solving a 'small series' of saddle point subproblems and achieving an optimal, up to log factors, complexity bound. Thirdly, regarding problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, and attain optimality both in the number of calls to the first-order oracle representing the objective and in the number of calls to the linear minimization oracle representing the problem's domain. Lastly, regarding problem (d), we show that the composite Mirror Prox, when applied to the saddle point reformulation, circumvents the difficulty with non-Lipschitz continuity of the objective and exhibits a better convergence rate than the typical rate for nonsmooth optimization. We conduct extensive numerical experiments and illustrate the practical potential of our algorithms in a wide spectrum of applications in machine learning and image processing. In the third part, we examine error-in-measurement optimization, referring to decision-making problems with data subject to measurement errors; such problems arise naturally in a number of important applications, such as privacy learning, signal processing, and portfolio selection. 
Due to the postulated observation scheme and the specific structure of the problem, straightforward application of standard stochastic optimization techniques such as Stochastic Approximation (SA) and Sample Average Approximation (SAA) is out of the question. Our goal is to develop computationally efficient and, hopefully, not too conservative data-driven techniques applicable to a broad scope of problems and allowing for theoretical performance guarantees. We present two such approaches -- one depending on a fully algorithmic calculus of saddle point representations of convex-concave functions and the other depending on a general approximation scheme of convex stochastic programming. Both approaches allow us to convert the problem of interest to a form amenable to SA or SAA. The latter developments are primarily focused on two important applications -- affine signal processing and indirect support vector machines.
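As background for the Mirror Descent/Mirror Prox machinery referred to above, here is the classical (non-composite) entropic Mirror Descent step on the probability simplex, applied to a toy quadratic objective invented for the sketch; the dissertation's composite variants generalize this kind of update:

```python
import math

# Toy problem: minimize f(x) = 0.5 * ||x - c||^2 over the probability
# simplex, where c is itself a probability vector, so the solution is c.
c = [0.2, 0.5, 0.3]
x = [1.0 / 3.0] * 3      # start from the barycentre of the simplex
eta = 0.5                # fixed step size (invented)

for _ in range(2000):
    g = [xi - ci for xi, ci in zip(x, c)]                   # gradient of f at x
    x = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]  # entropic step
    s = sum(x)
    x = [xi / s for xi in x]                                # prox = re-normalize

print([round(v, 3) for v in x])
```

The multiplicative-update-plus-normalization form is exactly the prox step induced by the entropy distance-generating function; the iterate converges to c here because the optimum lies in the interior of the simplex.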
APA, Harvard, Vancouver, ISO, and other styles
33

Hernandez, Moreno Andres Felipe. "A metamodeling approach for approximation of multivariate, stochastic and dynamic simulations." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43690.

Full text
Abstract:
This thesis describes the implementation of metamodeling approaches as a solution to approximate multivariate, stochastic and dynamic simulations. In the area of statistics, metamodeling (or "model of a model") refers to the scenario where an empirical model is built based on simulated data. In this thesis, this idea is exploited by using pre-recorded dynamic simulations as a source of simulated dynamic data. Based on this simulated dynamic data, an empirical model is trained to map the dynamic evolution of the system from the current discrete time step to the next discrete time step. Therefore, it is possible to approximate the dynamics of the complex dynamic simulation by iteratively applying the trained empirical model. The rationale in creating such an approximate dynamic representation is that the empirical models/metamodels are much more affordable to compute than the original dynamic simulation, while having an acceptable prediction error. The successful implementation of metamodeling approaches as approximations of complex dynamic simulations requires understanding of the propagation of error during the iterative process. Prediction errors made by the empirical model at earlier times of the iterative process propagate into future predictions of the model. The propagation of error means that the trained empirical model will deviate from the expensive dynamic simulation because of its own errors. Based on this idea, the Gaussian process model is chosen as the metamodeling approach for the approximation of expensive dynamic simulations in this thesis. This empirical model was selected not only for its flexibility and error estimation properties, but also because it can illustrate relevant issues to be considered if other metamodeling approaches were used for this purpose.
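The iterative "one-step metamodel" idea is easy to demonstrate with standard-library code. The sketch below substitutes a simple polynomial least-squares fit for the Gaussian process used in the thesis, and the dynamic map, noise level and horizon are all invented:

```python
import random

random.seed(1)

# Stand-in for an expensive dynamic simulation: one step of a damped
# nonlinear map (invented for the sketch).
def simulate_step(x):
    return 0.9 * x - 0.1 * x ** 3

# Pre-recorded one-step transitions, with small observation noise.
xs = [random.uniform(-2.0, 2.0) for _ in range(200)]
ys = [simulate_step(x) + random.gauss(0.0, 0.01) for x in xs]

# One-step metamodel y = a*x + b*x^3, fitted by least squares
# (2x2 normal equations solved with Cramer's rule).
s_xx = sum(x * x for x in xs)
s_x3x3 = sum(x ** 6 for x in xs)
s_xx3 = sum(x ** 4 for x in xs)
s_xy = sum(x * y for x, y in zip(xs, ys))
s_x3y = sum(x ** 3 * y for x, y in zip(xs, ys))
det = s_xx * s_x3x3 - s_xx3 ** 2
a = (s_xy * s_x3x3 - s_x3y * s_xx3) / det
b = (s_xx * s_x3y - s_xx3 * s_xy) / det

# Roll both models forward from the same state and track how the
# metamodel's own one-step errors propagate through the iteration.
x_true, x_meta = 1.5, 1.5
errors = []
for _ in range(20):
    x_true = simulate_step(x_true)
    x_meta = a * x_meta + b * x_meta ** 3
    errors.append(abs(x_true - x_meta))

print(round(max(errors), 4))
```

A Gaussian process surrogate would additionally supply a predictive variance at each step, which is what makes it attractive for studying how error accumulates over the rollout.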
APA, Harvard, Vancouver, ISO, and other styles
34

Szwaykowska, Klementyna. "Controlled Lagrangian particle tracking: analyzing the predictability of trajectories of autonomous agents in ocean flows." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50357.

Full text
Abstract:
Use of model-based path planning and navigation is a common strategy in mobile robotics. However, navigation performance may degrade in complex, time-varying environments under model uncertainty because of loss of prediction ability for the robot state over time. Exploration and monitoring of ocean regions using autonomous marine robots is a prime example of an application where use of environmental models can have great benefits in navigation capability. Yet, in spite of recent improvements in ocean modeling, errors in model-based flow forecasts can still significantly affect the accuracy of predictions of robot positions over time, leading to impaired path-following performance. In developing new autonomous navigation strategies, it is important to have a quantitative understanding of error in predicted robot position under different flow conditions and control strategies. The main contributions of this thesis include development of an analytical model for the growth of error in predicted robot position over time and theoretical derivation of bounds on the error growth, where error can be attributed to drift caused by unmodeled components of ocean flow. Unlike most previous works, this work explicitly includes spatial structure of unmodeled flow components in the proposed error growth model. It is shown that, for a robot operating under flow-canceling control in a static flow field with stochastic errors in flow values returned at ocean model gridpoints, the error growth is initially rapid, but slows when it reaches a value of approximately twice the ocean model gridsize. Theoretical values for mean and variance of error over time under a station-keeping feedback control strategy and time-varying flow fields are computed. Growth of error in predicted vehicle position is modeled for ocean models whose flow forecasts include errors with large spatial scales. 
Results are verified using data from several extended field deployments of Slocum autonomous underwater gliders, in Monterey Bay, CA in 2006, and in Long Bay, SC in 2012 and 2013.
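A crude Monte Carlo version of the error-growth question studied here can be written down directly. The sketch assumes temporally white unmodeled flow, for which mean error grows diffusively, whereas the thesis emphasizes spatially and temporally correlated errors; all parameters are invented:

```python
import math
import random

random.seed(2)

# Parameters invented for the sketch.
dt = 0.1        # time step (h)
sigma = 0.2     # std of each component of the unmodeled flow (km/h)
steps, trials = 100, 500

# Under flow-canceling control, the gap between predicted and true
# position is driven purely by the unmodeled flow component.
mean_err = [0.0] * steps
for _ in range(trials):
    ex = ey = 0.0
    for t in range(steps):
        ex += random.gauss(0.0, sigma) * dt
        ey += random.gauss(0.0, sigma) * dt
        mean_err[t] += math.hypot(ex, ey) / trials

# White (uncorrelated-in-time) flow errors give diffusive growth: mean
# error scales like sqrt(t), so quadrupling the horizon doubles it.
ratio = mean_err[-1] / mean_err[steps // 4 - 1]
print(round(ratio, 2))
```

Correlated flow errors, as modeled in the thesis, would accumulate coherently over their correlation time and so grow faster than this sqrt(t) baseline at short horizons.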
APA, Harvard, Vancouver, ISO, and other styles
35

Moon, Kyoung-Sook. "Convergence rates of adaptive algorithms for deterministic and stochastic differential equations." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Sazak, Hakan Savas. "Estimation And Hypothesis Testing In Stochastic Regression." Phd thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/3/724294/index.pdf.

Full text
Abstract:
Regression analysis is very popular among researchers in various fields, but almost all researchers use the classical methods, which assume that X is nonstochastic and the error is normally distributed. However, in real-life problems, X is generally stochastic and the error can be nonnormal. The maximum likelihood (ML) estimation technique, which is known to have optimal features, is very problematic in situations where the distribution of X (marginal part) or of the error (conditional part) is nonnormal. The modified maximum likelihood (MML) technique, which asymptotically gives estimators equivalent to the ML estimators, gives us the opportunity to conduct the estimation and hypothesis testing procedures under nonnormal marginal and conditional distributions. In this study we show that MML estimators are highly efficient and robust. Moreover, the test statistics based on the MML estimators are much more powerful and robust than the test statistics based on least squares (LS) estimators, which are mostly used in the literature. Theoretically, MML estimators are asymptotically minimum variance bound (MVB) estimators, but simulation results show that they are highly efficient even for small sample sizes. In this thesis, the Weibull and Generalized Logistic distributions are used for illustration and the results given are based on these distributions. As a future study, the MML technique can be utilized for other types of distributions, and the procedures based on bivariate data can be extended to multivariate data.
APA, Harvard, Vancouver, ISO, and other styles
37

Matthews, Charles. "Error in the invariant measure of numerical discretization schemes for canonical sampling of molecular dynamics." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8949.

Full text
Abstract:
Molecular dynamics (MD) computations aim to simulate materials at the atomic level by approximating molecular interactions classically, relying on the Born-Oppenheimer approximation and semi-empirical potential energy functions as an alternative to solving the difficult time-dependent Schrodinger equation. An approximate solution is obtained by discretization in time, with an appropriate algorithm used to advance the state of the system between successive timesteps. Modern MD simulations simulate complex systems with as many as a trillion individual atoms in three spatial dimensions. Many applications use MD to compute ensemble averages of molecular systems at constant temperature. Langevin dynamics approximates the effects of weakly coupling an external energy reservoir to a system of interest, by adding the stochastic Ornstein-Uhlenbeck process to the system momenta, where the resulting trajectories are ergodic with respect to the canonical (Boltzmann-Gibbs) distribution. By solving the resulting stochastic differential equations (SDEs), we can compute trajectories that sample the accessible states of a system at a constant temperature by evolving the dynamics in time. The complexity of the classical potential energy function requires the use of efficient discretization schemes to evolve the dynamics. In this thesis we provide a systematic evaluation of splitting-based methods for the integration of Langevin dynamics. We focus on the weak properties of methods for configurational sampling in MD, given as the accuracy of averages computed via numerical discretization. Our emphasis is on the application of discretization algorithms to high performance computing (HPC) simulations of a wide variety of phenomena, where configurational sampling is the goal. 
Our first contribution is to give a framework for the analysis of stochastic splitting methods in the spirit of backward error analysis, which provides, in certain cases, explicit formulae required to correct the errors in observed averages. A second contribution of this thesis is the investigation of the performance of schemes in the overdamped limit of Langevin dynamics (Brownian or Smoluchowski dynamics), showing the inconsistency of some numerical schemes in this limit. A new method is given that is second-order accurate (in law) but requires only one force evaluation per timestep. Finally we compare the performance of our derived schemes against those in common use in MD codes, by comparing the observed errors introduced by each algorithm when sampling a solvated alanine dipeptide molecule, based on our implementation of the schemes in state-of-the-art molecular simulation software. One scheme is found to give exceptional results for the computed averages of functions purely of position.
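One of the splitting schemes analysed in this line of work, the "BAOAB" ordering of kick (B), drift (A) and Ornstein-Uhlenbeck (O) substeps, can be sketched for a one-dimensional harmonic oscillator; the parameters below are illustrative only:

```python
import math
import random

random.seed(3)

# Illustrative parameters: unit mass, harmonic potential U(x) = x^2 / 2.
kT, gamma, dt = 1.0, 1.0, 0.2
c1 = math.exp(-gamma * dt)
c2 = math.sqrt(kT * (1.0 - c1 * c1))   # exact OU update for unit mass

def force(x):
    return -x   # -dU/dx

x, p = 0.0, 0.0
samples = []
for step in range(250000):
    p += 0.5 * dt * force(x)                   # B: half kick
    x += 0.5 * dt * p                          # A: half drift
    p = c1 * p + c2 * random.gauss(0.0, 1.0)   # O: Ornstein-Uhlenbeck
    x += 0.5 * dt * p                          # A: half drift
    p += 0.5 * dt * force(x)                   # B: half kick
    if step >= 5000:                           # discard burn-in
        samples.append(x * x)

avg = sum(samples) / len(samples)
print(round(avg, 2))   # configurational average <x^2> should be near kT = 1
```

The weak, configurational accuracy discussed in the abstract is exactly the question of how close long-run averages like this one come to their exact canonical values as a function of the splitting and the timestep.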
APA, Harvard, Vancouver, ISO, and other styles
38

Tithi, Tasnuva Tarannum. "Error-Floors of the 802.3an LDPC Code for Noise Assisted Decoding." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7465.

Full text
Abstract:
In digital communication, information is sent as bits, which are corrupted by the noise present in the wired/wireless medium known as the channel. Low Density Parity Check (LDPC) codes are a family of error correction codes used in communication systems to detect and correct erroneous data at the receiver. Data is encoded with error correction coding at the transmitter and decoded at the receiver. The Noisy Gradient Descent BitFlip (NGDBF) decoding algorithm is a new algorithm with excellent decoding performance and relatively low implementation requirements. This dissertation aims to characterize the performance of the NGDBF algorithm. A simple improvement over NGDBF, called Re-decoded NGDBF (R-NGDBF), is proposed to enhance the performance of the NGDBF decoding algorithm. A general method to estimate the decoding parameters of NGDBF is presented. The estimated parameters are then verified in a hardware implementation of the decoder to validate the accuracy of the estimation technique.
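A toy version of gradient-descent bit-flipping with a noise-perturbed flip rule can convey the idea. The sketch below uses the small (7,4) Hamming code as a stand-in for the 802.3an LDPC code, and the threshold and noise scale are invented, not the tuned parameters of the dissertation:

```python
import random

random.seed(4)

# Parity-check matrix of the (7,4) Hamming code: a tiny stand-in for the
# 802.3an LDPC code (rows are checks, columns are bits).
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def ngdbf_decode(y, theta=-0.5, noise_scale=0.3, max_iters=50):
    """Noisy gradient-descent bit-flip decoding (sketch; parameters invented).

    y holds received channel values; the hard decision is the sign of x.
    A bit is flipped when its inversion metric, perturbed by Gaussian
    noise, falls below the threshold theta.
    """
    x = [1.0 if v >= 0 else -1.0 for v in y]
    for _ in range(max_iters):
        # Bipolar syndromes: +1 means the check is satisfied.
        s = []
        for row in H:
            prod = 1.0
            for k, h in enumerate(row):
                if h:
                    prod *= x[k]
            s.append(prod)
        if all(v > 0 for v in s):
            break
        for k in range(len(x)):
            e = x[k] * y[k] + sum(s[i] for i, row in enumerate(H) if row[k])
            if e + random.gauss(0.0, noise_scale) < theta:
                x[k] = -x[k]
    return [0 if v > 0 else 1 for v in x]

# BPSK over a noisy channel (bit 0 -> +1), with one bit driven into error.
y = [1.0 + random.gauss(0.0, 0.4) for _ in range(7)]
y[2] -= 2.2
print(ngdbf_decode(y))
```

The added noise is the distinguishing feature of NGDBF: it randomizes the flip decisions enough to escape the trapping behaviour that causes error floors in deterministic bit-flip decoders.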
APA, Harvard, Vancouver, ISO, and other styles
39

CASSADER, Marco. "Benchmark Tracking Portfolio Problems with Stochastic Ordering Constraints." Doctoral thesis, Università degli studi di Bergamo, 2015. http://hdl.handle.net/10446/62374.

Full text
Abstract:
This work discusses several approaches to solving benchmark tracking problems and introduces different orders of stochastic dominance constraints into the decision process. Portfolio managers usually face the problem of comparing their performance with a given benchmark. In this work, we propose different solutions for index tracking, enhanced indexation and active management strategies. Firstly, we introduce a linear measure to deal with the passive strategy problem, analyzing its impact on the index tracking formulation. This measure turns out not only to be theoretically suitable but also to improve the results empirically. Then, proposing realistic enhanced indexation strategies, we show how to solve this problem by minimizing a linear dispersion measure. Secondly, we generalize the idea of considering a functional in the tracking error problem, considering the class of dilation measures, expected bounded risk measures and the LP compound metric. We formulate different metrics for the benchmark tracking problem and we introduce linear formulation constraints to construct portfolios which maximize the preference of non-satiable, risk-averse investors with positive skewness, developing the concept of a stochastic investment chain. Thirdly, active strategies are proposed to maximize the performance of portfolio managers according to different investors' preferences. Thus, we introduce linear programming portfolio selection models maximizing four performance measures and evaluate the impact of the stochastic dominance constraints on the ex-post final wealth.
APA, Harvard, Vancouver, ISO, and other styles
40

Thompson, Mery Helena. "Optimum experimental designs for models with a skewed error distribution : with an application to stochastic frontier models." Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/236/.

Full text
Abstract:
In this thesis, optimum experimental designs for a statistical model possessing a skewed error distribution are considered, with particular interest in investigating possible parameter dependence of the optimum designs. The skewness in the distribution of the error arises from its assumed structure. The error consists of two components (i) random error, say V, which is symmetrically distributed with zero expectation, and (ii) some type of systematic error, say U, which is asymmetrically distributed with nonzero expectation. Error of this type is sometimes called 'composed' error. A stochastic frontier model is an example of a model that possesses such an error structure. The systematic error, U, in a stochastic frontier model represents the economic efficiency of an organisation. Three methods for approximating information matrices are presented. An approximation is required since the information matrix contains complicated expressions, which are difficult to evaluate. However, only one method, 'Method 1', is recommended because it guarantees nonnegative definiteness of the information matrix. It is suggested that the optimum design is likely to be sensitive to the approximation. For models that are linearly dependent on the model parameters, the information matrix is independent of the model parameters but depends on the variance parameters of the random and systematic error components. Consequently, the optimum design is independent of the model parameters but may depend on the variance parameters. Thus, designs for linear models with skewed error may be parameter dependent. For nonlinear models, the optimum design may be parameter dependent in respect of both the variance and model parameters. The information matrix is rank deficient. As a result, only subsets or linear combinations of the parameters are estimable. 
The rank of the partitioned information matrix is such that designs are only admissible for optimal estimation of the model parameters, excluding any intercept term, plus one linear combination of the variance parameters and the intercept. The linear model is shown to be equivalent to the usual linear regression model, but with a shifted intercept. This suggests that the admissible designs should be optimal for estimation of the slope parameters plus the shifted intercept. The shifted intercept can be viewed as a transformation of the intercept in the usual linear regression model. Since D_A-optimum designs are invariant to linear transformations of the parameters, the D_A-optimum design for the asymmetrically distributed linear model is just the linear, parameter independent, D_A-optimum design for the usual linear regression model with nonzero intercept. C-optimum designs are not invariant to linear transformations. However, if interest is in optimally estimating the slope parameters, the linear transformation of the intercept to the shifted intercept is no longer a consideration and the C-optimum design is just the linear, parameter independent, C-optimum design for the usual linear regression model with nonzero intercept. If interest is in estimating the slope parameters, and the shifted intercept, the C-optimum design will depend on (i) the design region; (ii) the distributional assumption on U; (iii) the matrix used to define admissible linear combinations of parameters; (iv) the variance parameters of U and V; (v) the method used to approximate the information matrix. Some numerical examples of designs for a cross-sectional log-linear Cobb-Douglas stochastic production frontier model are presented to demonstrate the nonlinearity of designs for models with a skewed error distribution. Torsney's (1977) multiplicative algorithm was implemented in finding the optimum designs.
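Torsney's multiplicative algorithm mentioned at the end of this abstract is easy to sketch. Here it is applied to ordinary quadratic regression on [-1, 1] rather than to the stochastic frontier model of the thesis; for this classical example the D-optimum design is known to put weight 1/3 at each of -1, 0 and 1:

```python
# Candidate design points and a 3-parameter quadratic model (illustrative
# example; the thesis applies the algorithm to a stochastic frontier model).
def regressors(x):
    return [1.0, x, x * x]

def info_matrix(xs, w):
    m = [[0.0] * 3 for _ in range(3)]
    for x, wi in zip(xs, w):
        f = regressors(x)
        for i in range(3):
            for j in range(3):
                m[i][j] += wi * f[i] * f[j]
    return m

def solve3(m, rhs):
    # Gauss-Jordan solve of the 3x3 system m a = rhs, with partial pivoting.
    a = [row[:] + [r] for row, r in zip(m, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        d = a[col][col]
        a[col] = [v / d for v in a[col]]
        for r in range(3):
            if r != col:
                fac = a[r][col]
                a[r] = [v - fac * u for v, u in zip(a[r], a[col])]
    return [a[r][3] for r in range(3)]

xs = [-1.0 + 0.25 * k for k in range(9)]   # grid of candidate points on [-1, 1]
w = [1.0 / len(xs)] * len(xs)              # start from uniform design weights

for _ in range(300):
    m = info_matrix(xs, w)
    # Variance function d(x, w) = f(x)^T M(w)^{-1} f(x) at each candidate.
    d = []
    for x in xs:
        f = regressors(x)
        sol = solve3(m, f)
        d.append(sum(fi * si for fi, si in zip(f, sol)))
    # Multiplicative update w_i <- w_i * d_i / p, with p = 3 parameters.
    w = [wi * di / 3.0 for wi, di in zip(w, d)]

support = {x: round(wi, 3) for x, wi in zip(xs, w) if wi > 1e-3}
print(support)   # D-optimum design: weight about 1/3 at each of -1, 0, 1
```

The update multiplies each weight by its standardized variance function and re-scales, so mass flows toward the points where the design is currently least informative; for the skewed-error models of the thesis the same iteration is run on the approximated information matrix.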
APA, Harvard, Vancouver, ISO, and other styles
41

Guan, Peng. "Stochastic Geometry Analysis of LTE-A Cellular Networks." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS252/document.

Full text
Abstract:
L’objectif principal de cette thèse est l’analyse des performances des réseaux LTE-A (Long Term Evolution - Advanced) au travers de la géométrie stochastique. L’analyse mathématique des réseaux cellulaires est un problème difficile, pour lequel il existe déjà un certain nombre de résultats mais qui demande encore des efforts et des contributions sur le long terme. L’utilisation de la géométrie aléatoire et des processus ponctuels de Poisson (PPP) s’est avérée être une approche permettant une modélisation pertinente des réseaux cellulaires et d’une complexité faible (tractable). Dans cette thèse, nous nous intéressons tout particulièrement à des modèles s’appuyant sur ces processus de Poisson : PPP-based abstraction. Nous développons un cadre mathématique qui permet le calcul de quantités reflétant les performances des réseaux LTE-A, telles que la probabilité d’erreur, la probabilité et le taux de couverture, pour plusieurs scénarios couvrant entre autres le sens montant et descendant. Nous considérons également des transmissions multi-antennes, des déploiements hétérogènes, et des systèmes de commande de puissance de la liaison montante. L’ensemble de ces propositions a été validé par un grand nombre de simulations. Le cadre mathématique développé dans cette thèse se veut général, et doit pouvoir s’appliquer à un nombre d’autres scénarios importants. L’intérêt de l’approche proposée est de permettre une évaluation des performances au travers de l’évaluation des formules, et permet en conséquence d’éviter des simulations qui peuvent prendre énormément de temps en termes de développement ou d’exécution.

The main focus of this thesis is on performance analysis and system optimization of Long Term Evolution - Advanced (LTE-A) cellular networks by using stochastic geometry. Mathematical analysis of cellular networks is a long-standing difficult problem.
Modeling the network elements as points of a Poisson Point Process (PPP) has been proven to be a tractable yet accurate approach to performance analysis in cellular networks, by leveraging powerful mathematical tools such as stochastic geometry. In particular, relying on the PPP-based abstraction model, this thesis develops mathematical frameworks for the computation of important performance measures such as error probability, coverage probability and average rate in several application scenarios in both the uplink and downlink of LTE-A cellular networks, for example multi-antenna transmissions, heterogeneous deployments, and uplink power control schemes. The mathematical frameworks developed in this thesis are general, and their accuracy has been validated against extensive Monte Carlo simulations. Insight into performance trends and system optimization can be obtained by directly evaluating the formulas, thus avoiding time-consuming numerical simulations.
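The flavour of this PPP-based analysis can be illustrated with a small experiment. For the baseline downlink model (nearest-BS association, Rayleigh fading, path-loss exponent 4, interference-limited), the coverage probability has a well-known closed form, 1/(1 + ρ(T, α)) with ρ(T, 4) = √T (π/2 − arctan(1/√T)), which a Monte Carlo simulation of the PPP should reproduce. The density, window radius and trial count below are illustrative choices, not parameters taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_prob_mc(lam=1.0, alpha=4.0, T=1.0, radius=20.0, trials=4000):
    """Monte Carlo estimate of the downlink SIR coverage probability for a
    user at the origin: BSs form a PPP, nearest-BS association, Rayleigh
    fading (exponential power gains), interference-limited (no noise)."""
    covered = 0
    area = np.pi * radius**2
    for _ in range(trials):
        n = rng.poisson(lam * area)
        if n < 2:
            continue
        r = radius * np.sqrt(rng.random(n))       # BS distances, uniform in disk
        h = rng.exponential(size=n)               # Rayleigh fading -> exp power
        p = h * r**(-alpha)                       # received powers
        k = np.argmin(r)                          # serving (nearest) BS
        sir = p[k] / (p.sum() - p[k])
        covered += sir > T
    return covered / trials

# Closed form for alpha = 4 (no-noise case), with T = 1: P = 1/(1 + pi/4).
T = 1.0
rho = np.sqrt(T) * (np.pi / 2 - np.arctan(1 / np.sqrt(T)))
p_closed = 1 / (1 + rho)
p_mc = coverage_prob_mc(T=T)
```

For α = 4 and T = 1 the closed form gives roughly 0.56, which the simulation matches up to Monte Carlo noise and the (negligible) truncation of far interferers.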
42

Fiorilli, Luca. "Identificazione strutturale mediante l’algoritmo "Stochastic Subspace Identification - SSI"." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
This thesis addresses the problem of structural damage detection within Structural Health Monitoring (SHM) through the identification of the dynamic characteristics of structures, using ambient vibrations only. This procedure belongs to Operational Modal Analysis (OMA), the field of engineering that, relying only on the response of the system without knowing its input, captures signals in the time or frequency domain and identifies the modal parameters of the system: natural frequencies, damping ratios and mode shapes. The success of any OMA method depends on the characteristics of the acquired signals, such as the duration of their recording or the sampling frequency, and on the type of sensing system, for example a Wireless Sensor Network (WSN), which is cheaper and easier to deploy than a wired network but suffers from a Time Synchronization Error (TSE) between the clocks of its sensor nodes. Among OMA methods, Stochastic Subspace Identification (SSI) techniques are considered among the most powerful and reliable in the time domain. The core of this work is the presentation of the two SSI approaches, covariance-driven and data-driven, and the comparison of their performance, in order to investigate their advantages and disadvantages also through practical application. In particular, the influence of the duration of the acquired signal and of the synchronization error on the accuracy of the dynamic properties estimated with the SSI algorithms is studied; these estimates are then compared with those obtained with the Frequency Domain Decomposition (FDD) technique. The numerical results show that a good estimate of the modal parameters can be obtained even from a short-duration signal, and that the TSE has a negative impact on the computation of the mode shapes.
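To make the covariance-driven SSI approach concrete, the following sketch identifies one lightly damped mode from synthetic ambient-response data: output covariances are stacked into a Toeplitz matrix, an SVD yields an observability matrix, and the shift invariance of that matrix gives the system matrix and hence frequency and damping. All numerical values (mode, noise level, block size) are illustrative choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ambient response: one lightly damped 2 Hz mode sampled at
# 100 Hz, driven by white noise, plus measurement noise on the output.
f_n, zeta, dt = 2.0, 0.02, 0.01
wn = 2 * np.pi * f_n
lam = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)
mu_d = np.exp(lam * dt)                      # discrete-time pole
A = np.array([[mu_d.real, mu_d.imag], [-mu_d.imag, mu_d.real]])
x = np.zeros(2)
y = np.empty(200_000)
for k in range(y.size):
    x = A @ x + rng.normal(size=2)
    y[k] = x[0] + 0.1 * rng.normal()

# Covariance-driven SSI: output covariances -> Toeplitz matrix -> SVD ->
# observability matrix -> system matrix (shift invariance) -> modes.
i, order = 50, 2
m = y.size - 2 * i
R = np.array([y[l:l + m] @ y[:m] for l in range(1, 2 * i + 1)]) / m  # lags 1..2i
T1 = np.array([[R[i + p - q - 1] for q in range(i)] for p in range(i)])
U, s, _ = np.linalg.svd(T1)
O = U[:, :order] * np.sqrt(s[:order])        # estimated observability matrix
A_id = np.linalg.pinv(O[:-1]) @ O[1:]        # solves O[:-1] @ A = O[1:]
lam_id = np.log(np.linalg.eigvals(A_id)) / dt
f_id = abs(lam_id[0].imag) / (2 * np.pi)     # identified frequency [Hz]
zeta_id = -lam_id[0].real / abs(lam_id[0])   # identified damping ratio
```

With shorter records the damping estimate degrades noticeably while the frequency remains accurate, mirroring the signal-duration study in the thesis.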
43

Joseph, Binoy. "Clustering For Designing Error Correcting Codes." Thesis, Indian Institute of Science, 1994. https://etd.iisc.ac.in/handle/2005/3915.

Full text
Abstract:
In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes. Designing a block code over any finite dimensional space may be thought of as forming the corresponding number of clusters over the particular dimensional space. In the literature a number of algorithms are available for clustering. We have examined the performance of a number of such algorithms, such as Linde-Buzo-Gray, Simulated Annealing, Simulated Annealing with Linde-Buzo-Gray, and Deterministic Annealing, for the design of codes. But all these algorithms make use of the Euclidean squared error distance measure for clustering. This distance measure does not match the distance measure of interest in the error-correcting scenario, namely, Hamming distance. Consequently we have developed an algorithm that can be used for clustering with Hamming distance as the distance measure. Also, it has been observed that stochastic algorithms, such as Simulated Annealing, fail to produce optimum codes due to very slow convergence near the end. As a remedy, we have proposed a modification of such algorithms for code design, based on the code structure, which makes it possible to converge to the optimum codes.
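The clusters-as-codewords idea can be sketched with a Lloyd/LBG-style loop in which the distance is Hamming distance and the centroid update is a bitwise majority vote (the natural Hamming analogue of the Euclidean mean). This toy example is illustrative only and is not the algorithm proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def hamming_lbg(data, k, n_iter=20):
    """Lloyd/LBG-style clustering of binary words under Hamming distance:
    assign each word to its nearest codeword, then update each codeword
    by a bitwise majority vote over its cluster."""
    # Deterministic, spread-out initialization over the data set.
    codes = data[np.linspace(0, len(data) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        dist = (data[:, None, :] != codes[None, :, :]).sum(axis=2)
        assign = dist.argmin(axis=1)
        for j in range(k):
            members = data[assign == j]
            if len(members):
                codes[j] = (members.mean(axis=0) >= 0.5).astype(int)
    return codes, assign

# Toy example: noisy copies of two 8-bit words; the majority-vote update
# recovers the two underlying words as the cluster centres (codewords).
words = np.array([[0] * 8, [1] * 8])
data = np.repeat(words, 50, axis=0) ^ (rng.random((100, 8)) < 0.1)
codes, assign = hamming_lbg(data, k=2)
```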
44

Joseph, Binoy. "Clustering For Designing Error Correcting Codes." Thesis, Indian Institute of Science, 1994. http://hdl.handle.net/2005/66.

Full text
Abstract:
In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes. Designing a block code over any finite dimensional space may be thought of as forming the corresponding number of clusters over the particular dimensional space. In the literature a number of algorithms are available for clustering. We have examined the performance of a number of such algorithms, such as Linde-Buzo-Gray, Simulated Annealing, Simulated Annealing with Linde-Buzo-Gray, and Deterministic Annealing, for the design of codes. But all these algorithms make use of the Euclidean squared error distance measure for clustering. This distance measure does not match the distance measure of interest in the error-correcting scenario, namely, Hamming distance. Consequently we have developed an algorithm that can be used for clustering with Hamming distance as the distance measure. Also, it has been observed that stochastic algorithms, such as Simulated Annealing, fail to produce optimum codes due to very slow convergence near the end. As a remedy, we have proposed a modification of such algorithms for code design, based on the code structure, which makes it possible to converge to the optimum codes.
45

Kahaei, Mohammad Hossein. "Performance analysis of adaptive lattice filters for FM signals and alpha-stable processes." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36044/7/36044_Digitised_Thesis.pdf.

Full text
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. Among these advantages are its modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. Then the optimal lattice filter is derived for frequency modulated signals. This is performed by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property of adaptive lattice filters: the polynomial-order reducing property. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which performs desirably for finite variance input signals (like frequency modulated signals in noise), does not achieve fast convergence for infinite variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
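The basic object in this study, a gradient adaptive lattice stage, can be sketched in a few lines. Here it is driven by a simple AR(1) input rather than the FM or alpha-stable signals of the thesis, and the step size is an illustrative choice; for this input the optimal first reflection coefficient is minus the normalized lag-1 autocorrelation, i.e. -a.

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) input x(n) = a x(n-1) + w(n); the first-stage optimal reflection
# coefficient is k* = -r(1)/r(0) = -a.
a, mu = 0.8, 0.002
x = np.empty(50_000)
x[0] = rng.normal()
for n in range(1, x.size):
    x[n] = a * x[n - 1] + rng.normal()

k, b_prev = 0.0, 0.0
ks = np.empty(x.size)
for n in range(x.size):
    f1 = x[n] + k * b_prev               # forward prediction error
    b1 = b_prev + k * x[n]               # backward prediction error
    k -= mu * (f1 * b_prev + b1 * x[n])  # gradient of (f1^2 + b1^2)/2 w.r.t. k
    b_prev = x[n]                        # zeroth-order backward error, delayed
    ks[n] = k
k_hat = ks[-10_000:].mean()              # time-average of the adapted coefficient
```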
46

Scotti, Simone. "Applications of the error theory using Dirichlet forms." Phd thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00349241.

Full text
Abstract:
This thesis is devoted to the study of the applications of error theory using Dirichlet forms. Our work is split into three parts. The first one deals with models described by stochastic differential equations. After a short technical chapter, an innovative model for order books is proposed. We assume that the bid-ask spread is not an imperfection, but an intrinsic property of exchange markets. The uncertainty is carried by the Brownian motion guiding the asset. We find that spread evolutions can be evaluated using closed formulae, and we estimate the impact of the underlying uncertainty on the related contingent claims. Afterwards, we deal with the PBS model, a new model for pricing European options. The seminal idea is to distinguish the market volatility from the parameter used by traders for hedging. We assume the former is constant, while the latter is an erroneous subjective estimation of the former. We prove that this model predicts a bid-ask spread and a smiled implied volatility curve. Major properties of this model are the existence of closed formulae for prices, the impact of the underlying drift, and an efficient calibration strategy. The second part deals with models described by partial differential equations. Linear and non-linear PDEs are examined separately. In the first case, we show some interesting relations between error theory and wavelet theory. For non-linear PDEs, we study the sensitivity of the solution using error theory. Except when an exact solution exists, two possible approaches are detailed: first, we analyze the sensitivity obtained by taking "derivatives" of the discrete governing equations; then, we study the PDEs solved by the sensitivity of the theoretical solutions. In both cases, we show that the sharp and the bias solve linear PDEs depending on the solution of the original PDE itself, and we suggest algorithms to evaluate the sensitivities numerically.
Finally, the third part is devoted to stochastic partial differential equations. Our analysis is split into two chapters. First, we study the transmission of an uncertainty, present in the initial conditions, to the solution of the SPDE. Then, we analyze the impact of a perturbation of the functional terms of the SPDE and of the coefficient of the related Green function. In both cases, we show that the sharp and the bias satisfy linear SPDEs depending on the solution of the original SPDE itself.
47

Thomas, Nicolas. "Stochastic numerical methods for Piecewise Deterministic Markov Processes : applications in Neuroscience." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS385.

Full text
Abstract:
Dans cette thèse, motivés par des applications en Neuroscience, nous étudions des méthodes efficaces de type Monte Carlo (MC) et Multilevel Monte Carlo (MLMC) basées sur le thinning pour des processus (de Markov) déterministes par morceaux (PDMP ou PDP) que l'on applique à des modèles à conductance. D'une part, lorsque la trajectoire déterministe du PDMP est connue explicitement, nous aboutissons à une simulation exacte. D'autre part, lorsque la trajectoire déterministe du PDMP n'est pas explicite, nous établissons des estimées d'erreur forte et un développement de l'erreur faible pour le schéma numérique que nous introduisons. La méthode de thinning est fondamentale dans cette thèse. Outre le fait que cette méthode est intuitive, nous l'utilisons à la fois numériquement (pour simuler les trajectoires de PDMP/PDP) et théoriquement (pour construire les instants de saut et établir des estimées d'erreur pour les PDMP/PDP).

In this thesis, motivated by applications in Neuroscience, we study efficient Monte Carlo (MC) and Multilevel Monte Carlo (MLMC) methods based on thinning for piecewise deterministic (Markov) processes (PDMP or PDP), which we apply to stochastic conductance-based models. On the one hand, when the deterministic motion of the PDMP is explicitly known, we end up with an exact simulation. On the other hand, when the deterministic motion is not explicit, we establish strong error estimates and a weak error expansion for the numerical scheme that we introduce. The thinning method is fundamental in this thesis. Besides being intuitive, we use it both numerically (to simulate trajectories of PDMP/PDP) and theoretically (to construct the jump times and establish error estimates for PDMP/PDP).
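The thinning method itself is easy to sketch for an inhomogeneous Poisson process, which is the building block for simulating PDMP jump times: propose candidate points at a constant majorizing rate and accept each one with probability equal to the true-to-bound rate ratio. The rate function and bound below are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def thinning(rate, rate_bound, t_end):
    """Exact jump times of an inhomogeneous Poisson process on [0, t_end]:
    propose candidates at the constant rate `rate_bound` (>= rate(t) for
    all t) and accept the candidate at time t with probability
    rate(t) / rate_bound."""
    t, jumps = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_bound)   # next candidate point
        if t > t_end:
            return np.array(jumps)
        if rng.random() < rate(t) / rate_bound:  # accept with prob rate/bound
            jumps.append(t)

# Sanity check: with rate(t) = 2 + sin(t) on [0, 2*pi], the expected
# number of jumps equals the integral of the rate, i.e. 4*pi.
lam = lambda t: 2.0 + np.sin(t)
counts = [thinning(lam, 3.0, 2 * np.pi).size for _ in range(2000)]
mean_count = np.mean(counts)
```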
48

Tempone, Olariaga Raul. "Numerical Complexity Analysis of Weak Approximation of Stochastic Differential Equations." Doctoral thesis, KTH, Numerisk analys och datalogi, NADA, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3413.

Full text
Abstract:
The thesis consists of four papers on numerical complexity analysis of weak approximation of ordinary and partial stochastic differential equations, including illustrative numerical examples. Here by numerical complexity we mean the computational work needed by a numerical method to solve a problem with a given accuracy. This notion offers a way to understand the efficiency of different numerical methods. The first paper develops new expansions of the weak computational error for Itô stochastic differential equations using Malliavin calculus. These expansions have a computable leading order term in a posteriori form, and are based on stochastic flows and discrete dual backward problems. Besides this, these expansions lead to efficient and accurate computation of error estimates and give the basis for adaptive algorithms with either deterministic or stochastic time steps. The second paper proves convergence rates of adaptive algorithms for Itô stochastic differential equations. Two algorithms based either on stochastic or deterministic time steps are studied. The analysis of their numerical complexity combines the error expansions from the first paper and an extension of the convergence results for adaptive algorithms approximating deterministic ordinary differential equations. Both adaptive algorithms are proven to stop with an optimal number of time steps up to a problem independent factor defined in the algorithm. The third paper extends the techniques to the framework of Itô stochastic differential equations in infinite dimensional spaces, arising in the Heath-Jarrow-Morton term structure model for financial applications in bond markets. Error expansions are derived to identify different error contributions arising from time and maturity discretization, as well as the classical statistical error due to finite sampling. The last paper studies the approximation of linear elliptic stochastic partial differential equations, describing and analyzing two numerical methods.
The first method generates iid Monte Carlo approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The second method is based on a finite dimensional Karhunen-Loève approximation of the stochastic coefficients, turning the original stochastic problem into a high dimensional deterministic parametric elliptic problem. Then, a deterministic Galerkin finite element method, of either h or p version, approximates the stochastic partial differential equation. The paper concludes by comparing the numerical complexity of the Monte Carlo method with the parametric finite element method, suggesting intuitive conditions for an optimal selection of these methods. 2000 Mathematics Subject Classification. Primary 65C05, 60H10, 60H35, 65C30, 65C20; Secondary 91B28, 91B70.

QC 20100825
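The notion of weak approximation can be illustrated on a toy problem where the weak quantity has a closed form: for geometric Brownian motion, E[X_T] = x0 exp(mu T), so the bias of an Euler-Maruyama scheme can be separated from the statistical (Monte Carlo) error. The scheme parameters below are illustrative and unrelated to the adaptive algorithms of the papers.

```python
import numpy as np

rng = np.random.default_rng(5)

def euler_weak_mean(mu, sigma, x0, T, n_steps, n_paths):
    """Euler-Maruyama / Monte Carlo estimate of the weak quantity E[X_T]
    for geometric Brownian motion dX = mu X dt + sigma X dW."""
    h = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        # One Euler step on all paths at once; the RHS uses the current x.
        x += mu * x * h + sigma * x * np.sqrt(h) * rng.normal(size=n_paths)
    return x.mean()

# Exact weak quantity for comparison: E[X_T] = x0 * exp(mu * T).
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
exact = x0 * np.exp(mu * T)
est = euler_weak_mean(mu, sigma, x0, T, n_steps=50, n_paths=200_000)
```

Halving the step size roughly halves the time-discretization bias (weak order one), while the statistical error decays only with the square root of the number of paths; balancing the two is exactly the complexity question the thesis studies.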
49

Zavar, Moosavi Azam Sadat. "Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82491.

Full text
Abstract:
Simulations and modeling of large-scale systems are vital to understanding real world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty, including the parameters of the model, input data, or some missing physics that is not included in the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve model accuracy by decreasing the overall uncertainty in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. This study proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein are inherently stochastic, and numerical approximations suffer from stability and accuracy issues. The second class of models are partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality, and have uncertainties due to missing dynamics not captured by the models. The third class are low-fidelity models, which are fast approximations of very expensive high-fidelity models. The reduced-order models have uncertainty due to the loss of information in the dimension reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble-based methods where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes.

Ph. D.
50

Angoshtari, Bahman. "Stochastic modeling and methods for portfolio management in cointegrated markets." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:1ae9236c-4bf0-4d9b-a694-f08e1b8713c0.

Full text
Abstract:
In this thesis we study the utility maximization problem for assets whose prices are cointegrated, which arises from the investment practice of convergence trading and its special forms, pairs trading and spread trading. The major theme in the first two chapters of the thesis is to investigate the assumption of market-neutrality of the optimal convergence trading strategies, a ubiquitous assumption made by practitioners and academics alike. This assumption lacks a theoretical justification and, to the best of our knowledge, the only relevant study is Liu and Timmermann (2013), which implies that the optimal convergence strategies are, in general, not market-neutral. We start by considering a minimalistic pairs-trading scenario with two cointegrated stocks and solve the Merton investment problem with power and logarithmic utilities. We pay special attention to when/if the stochastic control problem is well-posed, a question overlooked in the study by Liu and Timmermann (2013). In particular, we show that the problem is ill-posed if and only if the agent's risk-aversion is less than a constant which is an explicit function of the market parameters. This condition, in turn, yields the necessary and sufficient condition for well-posedness of the Merton problem for all possible values of the agent's risk-aversion. The resulting well-posedness condition is surprisingly strict and, in particular, is equivalent to assuming the optimal investment strategy in the stocks to be market-neutral. Furthermore, it is shown that the well-posedness condition is equivalent to applying Novikov's condition to the market price of risk, a ubiquitous sufficient condition for absence of arbitrage. To the best of our knowledge, these are the only theoretical results supporting the assumption of market-neutrality of convergence trading strategies.
We then generalise the results to the more realistic setting of multiple cointegrated assets, assuming risk factors that affect the asset returns, and general utility functions for the investor's preference. In the process of generalising the bivariate results, we also obtain some well-posedness conditions for matrix Riccati differential equations which are, to the best of our knowledge, new. In the last chapter, we set up and justify a Merton problem that is related to spread trading with two futures assets and proportional transaction costs. The model possesses three characteristics whose combination distinguishes it from the existing literature on proportional transaction costs: 1) a finite time horizon, 2) multiple risky assets, and 3) a stochastic opportunity set. We introduce the HJB equation and provide rigorous arguments showing that the corresponding value function is the viscosity solution of the HJB equation. We end the chapter by devising a numerical scheme, based on the penalty method of Forsyth and Vetzal (2002), to approximate the viscosity solution of the HJB equation.
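A toy version of the cointegration setting can be sketched by modelling the spread between the paired assets as an Ornstein-Uhlenbeck process and recovering its mean-reversion speed from simulated data; the stationarity of the spread is precisely what convergence trading exploits. All parameter values below are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy convergence-trading setting: the spread Z between two cointegrated
# (log-)prices follows an Ornstein-Uhlenbeck process
#     dZ = -kappa * Z dt + eta dW,
# simulated here by Euler-Maruyama at a daily step over 40 years.
kappa, eta, dt = 2.0, 0.3, 1 / 252
n = 252 * 40
z = np.zeros(n)
for k in range(1, n):
    z[k] = z[k - 1] - kappa * z[k - 1] * dt + eta * np.sqrt(dt) * rng.normal()

# The mean-reversion speed can be recovered by regressing increments on
# levels, since E[dZ | Z] = -kappa * Z * dt.
slope = np.polyfit(z[:-1], np.diff(z), 1)[0]
kappa_hat = -slope / dt
```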
