Dissertations on the topic "Multivariate time series forecasting"

To see other types of publications on this topic, follow the link: Multivariate time series forecasting.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "Multivariate time series forecasting".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse dissertations across a variety of disciplines and compile your bibliography correctly.

1

Qiang, Fu. "Bayesian multivariate time series models for forecasting European macroeconomic series." Thesis, University of Hull, 2000. http://hydra.hull.ac.uk/resources/hull:8068.

Abstract:
Research on and debate about 'wise use' of explicitly Bayesian forecasting procedures has been widespread and often heated. This situation has come about partly in response to the dissatisfaction with the poor forecasting performance of conventional methods and partly in view of the development of computational capacity and macro-data availability. Experience with Bayesian econometric forecasting schemes is still rather limited, but it seems to be an attractive alternative to subjectively adjusted statistical models [see, for example, Phillips (1995a), Todd (1984) and West & Harrison (1989)]. It provides effective standards of forecasting performance and has demonstrated success in forecasting macroeconomic variables. Therefore, there would seem to be a case for seeking some additional insights into the important role of such methods in achieving objectives within the macroeconomics profession. The primary concerns of this study, motivated by the apparent deterioration of mainstream macroeconometric forecasts of the world economy in recent years [Wallis (1989), pp.34-43], are threefold. The first is to formalize a thorough, yet simple, methodological framework for empirical macroeconometric modelling in a Bayesian spirit. The second is to investigate whether improved forecasting accuracy is feasible within a European-based multicountry context. This is conducted with particular emphasis on the construction and implementation of Bayesian vector autoregressive (BVAR) models that incorporate both a priori and cointegration restrictions. The third is to extend the approach and apply it to the joint-modelling of system-wide interactions amongst national economies. The intention is to attempt to generate more accurate answers to a variety of practical questions about the future path towards a united Europe. The use of BVARs has advanced considerably. In particular, the value of joint-modelling with time-varying parameters and much more sophisticated prior distributions has been stressed in the econometric methodology literature. See e.g. Doan et al. (1984), Kadiyala and Karlsson (1993, 1997), Litterman (1986a), and Phillips (1995a, 1995b). Although trade-linked multicountry macroeconomic models may not be able to clarify all the structural and finer economic characteristics of each economy, they do provide a flexible and adaptable framework for analysis of global economic issues. In this thesis, the forecasting record for the main European countries is examined using the 'post mortem' of IMF, OECD and EEC sources. The formulation, estimation and selection of BVAR forecasting models, carried out using Microfit, MicroTSP, PcGive and RATS packages, are reported. Practical applications of BVAR models especially address the issues of whether combinations of forecasts explicitly outperform the forecasts of a single model, and whether the recent failures of multicountry forecasts can be attributed to an increase in the 'internal volatility' of the world economic environment. See Artis and Holly (1992), and Barrell and Pain (1992, p.3). The research undertaken consolidates existing empirical and theoretical knowledge of BVAR modelling. It provides a unified coverage of economic forecasting applications and develops a common, effective and progressive methodology for the European economies.
The empirical results show that, in simulated 'out-of-sample' forecasting exercises, the gains in forecast accuracy from imposing prior and long-run constraints are statistically significant, especially for small estimation sample sizes and long forecast horizons.
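As a purely illustrative aside (not the thesis's specification, which combines Minnesota-type priors with cointegration restrictions), the sketch below shows in numpy how a Bayesian shrinkage VAR can be estimated and iterated forward; the toy data and the shrinkage strength lam are hypothetical.

```python
import numpy as np

def bvar_posterior_mean(Y, p=2, lam=5.0):
    """Posterior-mean VAR(p) coefficients under a Gaussian prior centred on a
    random walk (a crude stand-in for Minnesota-style BVAR priors).
    Y is a (T, k) array; returns B of shape (1 + k*p, k), intercept row first."""
    T, k = Y.shape
    Yt = Y[p:]                                                  # regressands
    X = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - j:T - j] for j in range(1, p + 1)])    # intercept + lagged regressors
    m = np.zeros((1 + k * p, k))
    m[1:1 + k] = np.eye(k)                                      # prior mean: own first lag = 1
    P = lam * np.eye(1 + k * p)                                 # prior precision; larger lam = tighter shrinkage
    return np.linalg.solve(X.T @ X + P, X.T @ Yt + P @ m)

def forecast(Y, B, p, steps=4):
    """Iterate the fitted VAR forward to produce multi-step forecasts."""
    hist = list(Y[-p:])
    out = []
    for _ in range(steps):
        x = np.concatenate([[1.0]] + [hist[-j] for j in range(1, p + 1)])
        y_next = x @ B
        out.append(y_next)
        hist.append(y_next)
    return np.array(out)

# toy example with simulated macro-like data
rng = np.random.default_rng(0)
Y = np.cumsum(rng.normal(size=(120, 3)), axis=0)
B = bvar_posterior_mean(Y, p=2, lam=5.0)
print(forecast(Y, B, p=2, steps=4))
```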
2

Katardjiev, Nikola. "High-variance multivariate time series forecasting using machine learning." Thesis, Uppsala universitet, Institutionen för informatik och media, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353827.

Abstract:
There are several tools and models found in machine learning that can be used to forecast a certain time series; however, it is not always clear which model is appropriate for selection, as different models are suited for different types of data, and domain-specific transformations and considerations are usually required. This research aims to examine the issue by modeling four types of machine- and deep learning algorithms - support vector machine, random forest, feed-forward neural network, and an LSTM neural network - on a high-variance, multivariate time series to forecast trend changes one time step in the future, accounting for lag. The models were trained on clinical trial data of patients in an alcohol addiction treatment plan provided by an Uppsala-based company. The results showed moderate performance differences, with a concern that the models were performing a random walk or naive forecast. Further analysis was able to prove that at least one model, the feed-forward neural network, was not doing so and was able to make meaningful forecasts one time step into the future. In addition, the research also examined the effect of optimization processes by comparing a grid search, a random search, and a Bayesian optimization process. In all cases, the grid search found the lowest minima, though its slow runtimes were consistently beaten by Bayesian optimization, which achieved only slightly lower performance than the grid search.
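For readers unfamiliar with the hyperparameter-search comparison described above, here is a minimal scikit-learn sketch of grid versus random search on a one-step-ahead lag-feature formulation; the data are simulated and the Bayesian-optimization branch is omitted. This is not the thesis's code or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, TimeSeriesSplit

# toy multivariate series: predict the next value of the first variable from lagged values
rng = np.random.default_rng(1)
Z = rng.normal(size=(300, 4)).cumsum(axis=0)
lag = 3
X = np.hstack([Z[j:len(Z) - lag + j] for j in range(lag)])   # lags t-3..t-1, stacked columnwise
y = Z[lag:, 0]                                               # one-step-ahead target, first variable

param_grid = {"n_estimators": [100, 300], "max_depth": [3, 6, None]}
cv = TimeSeriesSplit(n_splits=5)                             # respects temporal ordering

grid = GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                    cv=cv, scoring="neg_mean_absolute_error").fit(X, y)
rand = RandomizedSearchCV(RandomForestRegressor(random_state=0), param_grid,
                          n_iter=4, cv=cv, scoring="neg_mean_absolute_error",
                          random_state=0).fit(X, y)
print("grid  :", grid.best_params_, grid.best_score_)
print("random:", rand.best_params_, rand.best_score_)
```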
3

Lima, Diego Duarte. "A study of demand forecasting cashew trade in Ceará through multivariate time series." Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=12185.

Abstract:
The application of time series in various areas such as engineering, logistics, operations research and economics aims to provide knowledge of the dependency between observations, trends, seasonality and forecasts. Considering the lack of effective methods supporting logistics planning in the area of foreign trade, the following multivariate time series models were presented and used in this work: vector autoregression (VAR), vector autoregression moving-average (VARMA) and the state-space (SS) model. These models were used for demand forecast analysis of the bivariate series of value and volume of cashew nut exports from Ceará from 1996 to 2012. The results showed that the state-space model was more successful in predicting the value and volume variables over the period from January to March 2013, when compared to the other models by the root mean squared error criterion, obtaining the lowest values for that criterion.
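A minimal statsmodels sketch of the kind of VAR-based forecast comparison described above, using simulated data in place of the export series and RMSE as the accuracy criterion; the lag order is fixed for brevity and the VARMA and state-space variants are not shown.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# toy bivariate series standing in for export value and volume (the thesis's data is not reproduced here)
rng = np.random.default_rng(2)
data = np.cumsum(rng.normal(size=(200, 2)), axis=0) + 50
train, test = data[:-3], data[-3:]

res = VAR(train).fit(2)                          # fixed lag order for this sketch
fc = res.forecast(train[-res.k_ar:], steps=3)    # three-step-ahead forecast
rmse = np.sqrt(((fc - test) ** 2).mean(axis=0))  # one RMSE per variable
print(rmse)
```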
4

Larsson, Klara, and Freja Ling. "Time Series forecasting of the SP Global Clean Energy Index using a Multivariate LSTM." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301904.

Abstract:
Clean energy and machine learning are subjects that play significant roles in shaping our future. The current climate crisis has forced the world to take action towards more sustainable solutions. Arrangements such as the UN’s Sustainable Development Goals and the Paris Agreement are causing an increased interest in renewable energy solutions. Further, the EU Taxonomy Regulation, applied in 2020, aims to scale up sustainable investments and to direct cash flows toward sustainable projects and activities. These measures create interest in investing in renewable energy alternatives and predicting future movements of stocks related to these businesses. Machine learning models have previously been used to predict time series with promising results. However, predicting time series in the form of stock price indices has, throughout previous attempts, proved to be a difficult task due to the complexity of the variables that play a role in the indices’ movements. This paper uses the machine learning algorithm long short-term memory (LSTM) to predict the S&P Global Clean Energy Index. The research question revolves around how well the LSTM model performs on this specific index and how the result is affected when past returns from correlating variables are added to the model. The researched variables are crude oil price, gold price, and interest. A model for each correlating variable was created, as well as one with all three, and one standard model which used only historical data from the index. The study found that while the model with the variable which had the strongest correlation performed best among the multivariate models, the standard model using only the target variable gave the most accurate result of any of the LSTM models.
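A minimal Keras sketch of a multivariate LSTM trained on sliding windows with exogenous columns, in the spirit of the setup described above; the data, window length and network size are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(data, target_col=0, window=20):
    """Slice a (T, n_features) array into (samples, window, n_features) inputs
    and next-step targets for the chosen column."""
    X, y = [], []
    for t in range(window, len(data)):
        X.append(data[t - window:t])
        y.append(data[t, target_col])
    return np.array(X), np.array(y)

# toy stand-in: column 0 plays the index return, the others play oil, gold and rates
rng = np.random.default_rng(3)
data = rng.normal(scale=0.01, size=(1000, 4))
X, y = make_windows(data, target_col=0, window=20)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-100], y[:-100], validation_data=(X[-100:], y[-100:]),
          epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X[-100:], y[-100:], verbose=0))
```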
5

Saluja, Rohit. "Interpreting Multivariate Time Series for an Organization Health Platform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289465.

Abstract:
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems just making predictions using machine learning algorithms only solves the problem partially. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability in time series is still relatively unexplored as compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to be able to quantify the quality of explanations produced after interpreting machine learning models. Due to this reason, evaluation of interpretability is extremely important. The evaluation of interpretability for models built on time series seems completely unexplored in research circles. This thesis work focused on achieving and evaluating model agnostic interpretability in a time series forecasting problem.  The use case discussed in this thesis work focused on finding a solution to a problem faced by a digital consultancy company. The digital consultancy wants to take a data-driven approach to understand the effect of various sales related activities in the company on the sales deals closed by the company. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. The interpretability was achieved using two novel model agnostic interpretability techniques, Local interpretable model- agnostic explanations (LIME) and Shapley additive explanations (SHAP). The explanations produced after achieving interpretability were evaluated using human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay humans in understanding the predictions made by the machine learning model. The human evaluation study results also indicated that LIME and SHAP explanations were almost equally understandable with LIME performing better but with a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, this work can offer a very good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem.
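A small sketch of how SHAP values can be computed for a lag-feature forecasting model, as one concrete way to obtain the kind of post-hoc explanations discussed above; the feature matrix and model are hypothetical stand-ins, not the consultancy's data or the thesis's pipeline.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# hypothetical lag-feature matrix: each column is a lagged sales-activity count,
# the target is next period's closed deals (names are illustrative only)
rng = np.random.default_rng(4)
X = rng.poisson(5, size=(200, 6)).astype(float)
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=200)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)       # exact explainer for tree ensembles
shap_values = explainer.shap_values(X)      # (200, 6): per-feature contribution to each forecast
print(np.abs(shap_values).mean(axis=0))     # a simple global importance ranking
# lime.lime_tabular.LimeTabularExplainer provides analogous local explanations per instance.
```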
6

Bäärnhielm, Arvid. "Multiple time-series forecasting on mobile network data using an RNN-RBM model." Thesis, Uppsala universitet, Datalogi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-315782.

Abstract:
The purpose of this project is to evaluate the performance of a forecasting model based on a multivariate dataset consisting of time series of traffic characteristic performance data from a mobile network. The forecasting is made using machine learning with a deep neural network. The first part of the project involves the adaptation of the model design to fit the dataset and is followed by a number of simulations where the aim is to tune the parameters of the model to give the best performance. The simulations show that with well-tuned parameters, the neural network performs better than the baseline model, even when using only a univariate dataset. If a multivariate dataset is used, the neural network outperforms the baseline model even when the dataset is small.
7

Noureldin, Diaa. "Essays on multivariate volatility and dependence models for financial time series." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:fdf82d35-a5e7-4295-b7bf-c7009cad7b56.

Abstract:
This thesis investigates the modelling and forecasting of multivariate volatility and dependence in financial time series. The first paper proposes a new model for forecasting changes in the term structure (TS) of interest rates. Using the level, slope and curvature factors of the dynamic Nelson-Siegel model, we build a time-varying copula model for the factor dynamics allowing for departure from the normality assumption typically adopted in TS models. To induce relative immunity to structural breaks, we model and forecast the factor changes and not the factor levels. Using US Treasury yields for the period 1986:3-2010:12, our in-sample analysis indicates model stability and we show statistically significant gains due to allowing for a time-varying dependence structure which permits joint extreme factor movements. Our out-of-sample analysis indicates the model's superior ability to forecast the conditional mean in terms of root mean square error reductions and directional forecast accuracy. The forecast gains are stronger during the recent financial crisis. We also conduct out-of-sample model evaluation based on conditional density forecasts. The second paper introduces a new class of multivariate volatility models that utilizes high-frequency data. We discuss the models' dynamics and highlight their differences from multivariate GARCH models. We also discuss their covariance targeting specification and provide closed-form formulas for multi-step forecasts. Estimation and inference strategies are outlined. Empirical results suggest that the HEAVY model outperforms the multivariate GARCH model out-of-sample, with the gains being particularly significant at short forecast horizons. Forecast gains are obtained for both forecast variances and correlations. The third paper introduces a new class of multivariate volatility models which is easy to estimate using covariance targeting. The key idea is to rotate the returns and then fit them using a BEKK model for the conditional covariance with the identity matrix as the covariance target. The extension to DCC type models is given, enriching this class. We focus primarily on diagonal BEKK and DCC models, and a related parameterisation which imposes common persistence on all elements of the conditional covariance matrix. Inference for these models is computationally attractive, and the asymptotics is standard. The techniques are illustrated using recent data on the S&P 500 ETF and some DJIA stocks, including comparisons to the related orthogonal GARCH models.
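To make the covariance-targeting idea concrete, here is a hedged numpy sketch of a scalar-BEKK filter whose intercept is pinned to the sample covariance; the thesis's diagonal BEKK/DCC, rotated-return and HEAVY specifications are richer than this toy version, and the parameters below are arbitrary.

```python
import numpy as np

def scalar_bekk(returns, a=0.05, b=0.93):
    """Conditional covariance filter H_t = (1-a-b)*S + a*r_{t-1}r_{t-1}' + b*H_{t-1},
    where the intercept is pinned to the sample covariance S (covariance targeting)."""
    T, k = returns.shape
    S = np.cov(returns, rowvar=False)
    H = np.empty((T, k, k))
    H[0] = S
    for t in range(1, T):
        r = returns[t - 1][:, None]
        H[t] = (1 - a - b) * S + a * (r @ r.T) + b * H[t - 1]
    return H

rng = np.random.default_rng(5)
r = rng.normal(scale=0.01, size=(500, 3))   # toy daily returns on three assets
H = scalar_bekk(r)
print(H[-1])                                # latest conditional covariance matrix
```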
8

Schwartz, Michael. "Optimized Forecasting of Dominant U.S. Stock Market Equities Using Univariate and Multivariate Time Series Analysis Methods." Chapman University Digital Commons, 2017. http://digitalcommons.chapman.edu/comp_science_theses/3.

Abstract:
This dissertation documents an investigation into forecasting U.S. stock market equities via two very different time series analysis techniques: 1) autoregressive integrated moving average (ARIMA), and 2) singular spectrum analysis (SSA). Approximately 40% of the S&P 500 stocks are analyzed. Forecasts are generated for one and five days ahead using daily closing prices. Univariate and multivariate structures are applied and results are compared. One objective is to explore the hypothesis that a multivariate model produces superior performance over a univariate configuration. Another objective is to compare the forecasting performance of ARIMA to SSA, as SSA is a relatively recent development and has shown much potential. Stochastic characteristics of stock market data are analyzed and found to be definitely not Gaussian, but instead better fit to a generalized t-distribution. Probability distribution models are validated with goodness-of-fit tests. For analysis, stock data is segmented into non-overlapping time “windows” to support unconditional statistical evaluation. Univariate and multivariate ARIMA and SSA time series models are evaluated for independence. ARIMA models are found to be independent, but SSA models are not able to reach independence. Statistics for out-of-sample forecasts are computed for every stock in every window, and multivariate-univariate confidence interval shrinkages are examined. Results are compared for univariate, bivariate, and trivariate combinations of highly-correlated stocks. Effects are found to be mixed. Bivariate modeling and forecasting with three different covariates are investigated. Examination of results with covariates of trading volume, principal component analysis (PCA), and volatility reveal that PCA exhibits the best overall forecasting accuracy in the entire field of investigated elements, including univariate models. Bivariate-PCA structures are applied in a back-testing environment to evaluate economic significance and robustness of the methods. Initial results of back-testing yielded similar results to those from earlier independent testing. Inconsistent performance across test intervals inspired the development of a second technique that yields improved results and positive economic significance. Robustness is validated through back-testing across multiple market trends.
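A short statsmodels sketch of producing the one- and five-day-ahead ARIMA forecasts mentioned above on simulated closing prices; the order (1, 1, 1) is an arbitrary placeholder rather than the dissertation's fitted specification, and the SSA branch is not shown.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
prices = 100 * np.exp(np.cumsum(rng.normal(scale=0.01, size=400)))  # toy daily closing prices
train, test = prices[:-5], prices[-5:]

res = ARIMA(train, order=(1, 1, 1)).fit()   # order fixed for the sketch; in practice it is selected per stock
f1 = res.forecast(steps=1)                  # one-day-ahead forecast
f5 = res.forecast(steps=5)                  # five-day-ahead forecast path
print(float(f1[0]), np.abs(f5 - test).mean())
```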
9

Costantini, Mauro, Cuaresma Jesus Crespo, and Jaroslava Hlouskova. "Can Macroeconomists Get Rich Forecasting Exchange Rates?" WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4181/1/wp176.pdf.

Abstract:
We provide a systematic comparison of the out-of-sample forecasts based on multivariate macroeconomic models and forecast combinations for the euro against the US dollar, the British pound, the Swiss franc and the Japanese yen. We use profit maximization measures based on directional accuracy and trading strategies in addition to standard loss minimization measures. When comparing predictive accuracy and profit measures, data snooping bias free tests are used. The results indicate that forecast combinations help to improve over benchmark trading strategies for the exchange rate against the US dollar and the British pound, although the excess return per unit of deviation is limited. For the euro against the Swiss franc or the Japanese yen, no evidence of generalized improvement in profit measures over the benchmark is found. (authors' abstract)
Series: Department of Economics Working Paper Series
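A small numpy sketch of the directional-accuracy measure and an equal-weight forecast combination of the kind evaluated above; the series and forecasts are simulated, and the profit-based trading measures are not reproduced here.

```python
import numpy as np

def directional_accuracy(y, yhat):
    """Share of periods in which the forecast change and the realised change
    of the exchange rate have the same sign."""
    actual = np.sign(y[1:] - y[:-1])
    predicted = np.sign(yhat[1:] - y[:-1])
    return np.mean(actual == predicted)

# toy: realised rate plus two competing model forecasts and their equal-weight combination
rng = np.random.default_rng(7)
y = np.cumsum(rng.normal(scale=0.005, size=250)) + 1.2
f1 = y + rng.normal(scale=0.01, size=250)
f2 = y + rng.normal(scale=0.02, size=250)
combo = (f1 + f2) / 2.0
for name, f in [("model 1", f1), ("model 2", f2), ("combination", combo)]:
    print(name, round(directional_accuracy(y, f), 3))
```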
10

Oscar, Nordström. "Multivariate Short-term Electricity Load Forecasting with Deep Learning and exogenous covariates." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-183982.

Abstract:
Maintaining the electricity balance between supply and demand is a challenge for electricity suppliers. Under- or overproduction entails financial costs and affects consumers and the climate. To better understand how to maintain the balance, suppliers can use short-term forecasts of electricity load. Hence it is of paramount importance that the forecasts are reliable and of high accuracy. Studies show that time series modeling is moving towards more data-driven methods, such as Artificial Neural Networks, due to their flexibility and ability to extract complex relationships. This study evaluates the performance of a multivariate Deep Autoregressive Neural Network (DeepAR) in a short-term forecasting scenario of electricity load, with forecasted weather parameters as exogenous covariates. This thesis's goal is twofold: to test the performance, in terms of evaluation metrics, of day-ahead forecasts in the presence of exogenous covariates, and to examine the robustness of DeepAR when exposed to deviations in input data. We perform feature selection on the given covariates to identify and extract relevant parameters to facilitate the training process, and implement a feature importance algorithm to examine which parameters the model considers essential. To test the robustness, we simulate two cases. In the first case, we introduce Quarantine periods, which mask data prior to the forecast range, and the second case introduces an artificial outlier. An exploratory analysis displays significant annual characteristic differences between seasons; therefore we use two test sets, one in winter and one in summer. The results show that DeepAR is robust against potential deviations in input data and that DeepAR surpassed both benchmark models in all of the tested scenarios. In the ideal test scenario where weather parameters had the most significant impact (winter), DeepAR achieves a Normalized Deviation (ND) of 2.5%, compared to an ND of 4.4% for the second-best model.
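A minimal sketch of the Normalized Deviation metric quoted above, assuming the usual definition as the sum of absolute errors over the sum of absolute actuals; the load values below are hypothetical.

```python
import numpy as np

def normalized_deviation(y_true, y_pred):
    """ND = sum of absolute errors divided by the sum of absolute actuals,
    a scale-free accuracy measure commonly reported for load forecasts."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.abs(y_true - y_pred).sum() / np.abs(y_true).sum()

load = np.array([420.0, 415.0, 460.0, 510.0, 495.0])       # hypothetical hourly loads (MW)
forecast = np.array([430.0, 405.0, 450.0, 520.0, 500.0])
print(round(100 * normalized_deviation(load, forecast), 2), "%")
```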
11

Zhao, Tao. "A new method for detection and classification of out-of-control signals in autocorrelated multivariate processes." Morgantown, W. Va. : [West Virginia University Libraries], 2008. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5615.

Abstract:
Thesis (M.S.)--West Virginia University, 2008.
Title from document title page. Document formatted into pages; contains x, 111 p. : ill. Includes abstract. Includes bibliographical references (p. 102-106).
12

Costantini, Mauro, Cuaresma Jesus Crespo, and Jaroslava Hlouskova. "Forecasting errors, directional accuracy and profitability of currency trading: The case of EUR/USD exchange rate." Wiley, 2016. http://dx.doi.org/10.1002/for.2398.

Abstract:
We provide a comprehensive study of out-of-sample forecasts for the EUR/USD exchange rate based on multivariate macroeconomic models and forecast combinations. We use profit maximization measures based on directional accuracy and trading strategies in addition to standard loss minimization measures. When comparing predictive accuracy and profit measures, data snooping bias free tests are used. The results indicate that forecast combinations, in particular those based on principal components of forecasts, help to improve over benchmark trading strategies, although the excess return per unit of deviation is limited.
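One common way to build the principal-components forecast combination mentioned above is to project the panel of forecasts onto its first principal component and map it back to the target's scale by regression; the sketch below illustrates this on simulated data and is not necessarily the paper's exact scheme.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# F holds m competing exchange-rate forecasts per period (toy data);
# the combination regresses the realised rate on the first principal component of F
rng = np.random.default_rng(8)
y = np.cumsum(rng.normal(scale=0.005, size=300)) + 1.1
F = np.column_stack([y + rng.normal(scale=s, size=300) for s in (0.01, 0.02, 0.03)])

split = 200
pca = PCA(n_components=1).fit(F[:split])
pc_train, pc_test = pca.transform(F[:split]), pca.transform(F[split:])
reg = LinearRegression().fit(pc_train, y[:split])   # map the factor back to the rate's scale
combined = reg.predict(pc_test)
print(np.mean(np.abs(combined - y[split:])))        # out-of-sample MAE of the combination
```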
13

Backer-Meurke, Henrik, and Marcus Polland. "Predicting Road Rut with a Multi-time-series LSTM Model." Thesis, Högskolan Dalarna, Institutionen för information och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-37599.

Abstract:
Road ruts are depressions or grooves worn into a road. Increases in rut depth are highly undesirable due to the heightened risk of hydroplaning. Accurately predicting increases in road rut depth is important for maintenance planning within the Swedish Transport Administration. At the time of writing this paper, the agency utilizes a linear regression model and is developing a feed-forward neural network for road rut predictions. The aim of the study was to evaluate the possibility of using a Recurrent Neural Network to predict road rut. Through design science research, an artefact in the form of an LSTM model was designed, developed, and evaluated. The dataset consisted of multiple multivariate short time series, an area where prior research was limited. Case studies were conducted which inspired the conceptual design of the model. The baseline LSTM model proposed in this paper utilizes the full dataset in combination with time-series individualization through an added index feature. Additional features thought to correlate with rut depth were also studied through multiple training set variations. The model was evaluated by calculating the Root Mean Squared Error (RMSE) and the Mean Absolute Error (MAE) for each training set variation. The baseline model predicted rut depth with an MAE of 0.8110 mm and an RMSE of 1.124 mm, outperforming a control set without the added index. The feature with the highest correlation to rut depth was curvature, with an MAE of 0.8031 and an RMSE of 1.1093. Initial findings show that there is a possibility of utilizing an LSTM model trained on multiple multivariate time series to predict rut depth. Time-series individualization through an added index feature yielded better results than the control, indicating that it had the desired effect on model performance.
14

Johansson, David. "Automatic Device Segmentation for Conversion Optimization : A Forecasting Approach to Device Clustering Based on Multivariate Time Series Data from the Food and Beverage Industry." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-81476.

Abstract:
This thesis investigates a forecasting approach to clustering device behavior based on multivariate time series data. Identifying an equitable selection to use in conversion optimization testing is a difficult task. As devices are able to collect larger amounts of data about their behavior, it becomes increasingly difficult to utilize manual selection of segments in traditional conversion optimization systems. Forecasting the segments can be done automatically to reduce the time spent on testing while increasing the test accuracy and relevance. The thesis evaluates the results of utilizing multiple forecasting models, clustering models and data pre-processing techniques. Under optimal conditions, the proposed model achieves an average accuracy of 97.7%.
15

Köhler, Steffen [Verfasser], Vasyl [Gutachter] Golosnoy, and Christoph [Gutachter] Hanck. "Modeling, testing and forecasting persistent univariate and multivariate time-series with financial applications / Steffen Köhler ; Gutachter: Vasyl Golosnoy, Christoph Hanck ; Fakultät für Wirtschaftswissenschaft." Bochum : Ruhr-Universität Bochum, 2021. http://d-nb.info/1236813774/34.

16

Moudiki, Thierry. "Interest rates modeling for insurance : interpolation, extrapolation, and forecasting." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1110/document.

Abstract:
The Own Risk Solvency and Assessment (ORSA) is a set of processes defined by the European prudential directive Solvency II that serve for decision-making and strategic analysis. In the context of ORSA, insurance companies are required to assess their solvency needs in a continuous and prospective way. For this purpose, they notably need to forecast their balance sheet (assets and liabilities) over a defined horizon. In this work, we specifically focus on the asset forecasting part. This thesis is about the Yield Curve, Forecasting, and Forecasting the Yield Curve. We present a few novel techniques for the construction and extrapolation of static curves (that is, curves which are constructed at a fixed date), and for forecasting the spot interest rates over time. Throughout the text, when we say "Yield Curve", we actually mean "Discount curve": we ignore the counterparty credit risk and consider that the curves are risk-free. However, the same techniques could be applied to construct and forecast the actual risk-free curves and credit spread curves, and to combine both to obtain pseudo-discount curves incorporating the counterparty credit risk.
17

Yongtao, Yu. "Exchange rate forecasting model comparison: A case study in North Europe." Thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-154948.

Abstract:
In the past, many studies comparing exchange rate forecasting models have been carried out. Most of these studies reach a similar result: the random walk model has the best forecasting performance. In this thesis, I want to find a model that beats the random walk model in forecasting the exchange rate. In my study, the vector autoregressive model (VAR), the restricted vector autoregressive model (RVAR), the vector error correction model (VEC) and the Bayesian vector autoregressive model are employed in the analysis. These multivariate time series models are compared with the random walk model by evaluating the forecasting accuracy of the exchange rate for three North European countries, both in the short term and in the long term. In the short term, it can be concluded that the random walk model has the best forecasting accuracy. In the long term, however, the random walk model is beaten. An equal accuracy test confirms that this difference really exists.
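A hedged sketch of a simple Diebold-Mariano-type equal-accuracy test of the kind referred to above, valid for one-step-ahead errors without autocorrelation correction; the error series are simulated, not the thesis's results.

```python
import numpy as np
from scipy import stats

def dm_test(e1, e2):
    """Diebold-Mariano-type test of equal predictive accuracy on squared errors
    (no autocorrelation correction, so suitable only for one-step-ahead forecasts)."""
    d = e1 ** 2 - e2 ** 2                              # loss differential
    stat = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    pvalue = 2 * (1 - stats.norm.cdf(abs(stat)))
    return stat, pvalue

rng = np.random.default_rng(9)
e_rw = rng.normal(scale=1.0, size=120)    # toy forecast errors of the random walk
e_var = rng.normal(scale=0.9, size=120)   # toy forecast errors of a VAR-type model
print(dm_test(e_rw, e_var))
```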
18

Sävhammar, Simon. "Uniform interval normalization : Data representation of sparse and noisy data sets for machine learning." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19194.

Abstract:
The uniform interval normalization technique is proposed as an approach to handle sparse and noisy data. The technique is evaluated by transforming and normalizing the MoodMapper and Safebase data sets; the predictive capabilities are compared by forecasting the data sets with an LSTM model. The results are compared to both the commonly used MinMax normalization technique and MinMax normalization with a time2vec layer. It was found that uniform interval normalization performed better on both the sparse MoodMapper data set and the denser Safebase data set. Future work consists of studying the performance of uniform interval normalization on other data sets and with other machine learning models.
19

Campos, Celso Vilela Chaves. "Previsão da arrecadação de receitas federais: aplicações de modelos de séries temporais para o estado de São Paulo." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/96/96131/tde-12052009-150243/.

Abstract:
The main objective of this work is to offer alternative methods for federal tax revenue forecasting, based on time series methodologies, including the use of explanatory variables that reflect the influence of the macroeconomic scenario on tax collection, with the purpose of improving forecasting accuracy. To this end, univariate and multivariate dynamic modelling methodologies were applied, namely Transfer Function, Vector Autoregression (VAR), VAR with error correction (VEC), Simultaneous Equations, and Structural Models. The work has a regional scope and is limited to the analysis of three monthly tax collection series, the Import Duty, the Corporate Income Tax and the Contribution for Social Security Financing (Cofins), under the jurisdiction of the state of São Paulo in the period from 2000 to 2007. The forecasts from the models above were compared with each other, with ARIMA modelling and with the indicators method currently used by the Secretaria da Receita Federal do Brasil (RFB) for annual tax collection forecasting, using the root mean squared error (RMSE). The average reduction of RMSE was 42% relative to the error of the indicators method and 35% relative to the ARIMA model, in addition to a drastic reduction in the annual forecast error. The use of time series methodologies to forecast the collection of federal revenues has proved to be a viable alternative to the indicators method, contributing to more accurate predictions and becoming a reliable support tool for managers' decision making.
20

Grubb, Howard John. "Multivariate time series modelling." Thesis, University of Bath, 1990. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280803.

21

Malan, Karien. "Stationary multivariate time series analysis." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-06132008-173800.

22

Ribeiro, Joana Patrícia Bordonhos. "Outlier identification in multivariate time series." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/22200.

Abstract:
Master's in Mathematics and Applications
With technological development, there is an increasing availability of data. Usually representative of day-to-day actions, large amounts of information become valuable when they allow value to be extracted for the market. In addition, it is important to analyze not only the available values but also their association with time. The existence of abnormal values is inevitable. Usually denoted as outliers, these values are commonly searched for in order to exclude them from the study. However, outliers often represent the very goal of a study. For example, in bank fraud detection or disease diagnosis, the central objective is to identify abnormal situations. Throughout this dissertation we present a methodology that allows the detection of outliers in multivariate time series, after application of classification methods. The chosen approach is then applied to a real data set representative of boiler operation. The main goal is to identify faults, with the intention of improving boiler components and, hence, reducing faults. The implemented algorithms allow the identification not only of boiler faults but also of normal operation cycles. The chosen methodologies are also intended to be applied in future devices, improving real-time fault identification.
23

Chen, Chloe Chen. "Graphical modelling of multivariate time series." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/7091.

Abstract:
This thesis mainly works on the parametric graphical modelling of multivariate time series. The idea of graphical model is that each missing edge in the graph corresponds to a zero partial coherence between a pair of component processes. A vector autoregressive process (VAR) together with its associated partial correlation graph defines a graphical interaction (GI) model. The current estimation methodologies are few and lacking of details when fitting GI models. Given a realization of the VAR process, we seek to determine its graph via the GI model; we proceed by assuming each possible graph and a range of possible autoregressive orders, carrying out the estimation, and then using model-selection criteria AIC and/or BIC to select amongst the graphs and orders. We firstly consider a purely time domain approach by maximizing the conditional maximum likelihood function with zero constraints; this non-convex problem is made convex by a ‘relaxation’ step, and solved via convex optimization. The solution is exact with high probability (and would be always exact if a certain covariance matrix was block-Toeplitz). Alternatively we look at an iterative algorithm switching between time and frequency domains. It updates the spectral estimates using equations that incorporate information from the graph, and then solving the multivariate Yule-Walker equations to estimate the VAR process parameters. We show that both methods work very well on simulated data from GI models. The methods are then applied on real EEG data recorded from Schizophrenia patients, who suffer from abnormalities of brain connectivity. Though the pretreatment has been carried out to remove improper information, the raw methods do not provide any interpretive results. Some essential modification is made in the iterative algorithm by spectral up-weighting which solves the instability problem of spectral inversion efficiently. Equivalently in convex optimization method, adding noise seems also to work but interpretation of eigenvalues (small/large) is less clear. Both methods essentially delivered the same results via GI models; encouragingly the results are consistent from a completely different method based on nonparametric/multiple hypothesis testing.
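For context, the multivariate Yule-Walker step mentioned above can be illustrated with the unconstrained VAR(1) case below; the graph-constrained, iterative version used in the thesis is not reproduced, and the data are simulated.

```python
import numpy as np

def var1_yule_walker(X):
    """Estimate A in x_t = A x_{t-1} + e_t from the lag-0 and lag-1 sample covariances,
    i.e. solve the first multivariate Yule-Walker equation G1 = A G0."""
    Xc = X - X.mean(axis=0)
    T = len(Xc)
    G0 = Xc.T @ Xc / T                   # lag-0 covariance
    G1 = Xc[1:].T @ Xc[:-1] / T          # lag-1 cross-covariance E[x_t x_{t-1}']
    return G1 @ np.linalg.inv(G0)

# toy three-channel series with a known VAR(1) structure
rng = np.random.default_rng(10)
A_true = np.array([[0.5, 0.1, 0.0], [0.0, 0.4, 0.2], [0.1, 0.0, 0.3]])
X = np.zeros((2000, 3))
for t in range(1, 2000):
    X[t] = A_true @ X[t - 1] + rng.normal(scale=0.5, size=3)
print(np.round(var1_yule_walker(X), 2))
```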
24

Aranda Cotta, Higor Henrique. "Robust methods in multivariate time series." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC064.

Abstract:
This manuscript proposes new robust estimation methods for the autocovariance and autocorrelation matrices functions of stationary multivariates time series that may have random additives outliers. These functions play an important role in the identification and estimation of time series model parameters. We first propose new estimators of the autocovariance and of autocorrelation matrices functions constructed using a spectral approach considering the periodogram matrix periodogram which is the natural estimator of the spectral density matrix. As in the case of the classic autocovariance and autocorrelation matrices functions estimators, these estimators are affected by aberrant observations. Thus, any identification or estimation procedure using them is directly affected, which leads to erroneous conclusions. To mitigate this problem, we propose the use of robust statistical techniques to create estimators resistant to aberrant random observations.As a first step, we propose new estimators of autocovariance and autocorrelation functions of univariate time series. The time and frequency domains are linked by the relationship between the autocovariance function and the spectral density. As the periodogram is sensitive to aberrant data, we get a robust estimator by replacing it with the $M$-periodogram. The $M$-periodogram is obtained by replacing the Fourier coefficients related to periodogram calculated by the standard least squares regression with the ones calculated by the $M$-robust regression. The asymptotic properties of estimators are established. Their performances are studied by means of numerical simulations for different sample sizes and different scenarios of contamination. The empirical results indicate that the proposed methods provide close values of those obtained by the classical autocorrelation function when the data is not contaminated and it is resistant to different contamination scenarios. Thus, the estimators proposed in this thesis are alternative methods that can be used for time series with or without outliers.The estimators obtained for univariate time series are then extended to the case of multivariate series. This extension is simplified by the fact that the calculation of the cross-periodogram only involves the Fourier coefficients of each component from the univariate series. Thus, the $M$-periodogram matrix is a robust periodogram matrix alternative to build robust estimators of the autocovariance and autocorrelation matrices functions. The asymptotic properties are studied and numerical experiments are performed. As an example of an application with real data, we use the proposed functions to adjust an autoregressive model by the Yule-Walker method to Pollution data collected in the Vitória region Brazil.Finally, the robust estimation of the number of factors in large factorial models is considered in order to reduce the dimensionality. It is well known that the values random additive outliers affect the covariance and correlation matrices and the techniques that depend on the calculation of their eigenvalues and eigenvectors, such as the analysis principal components and the factor analysis, are affected. Thus, in the presence of outliers, the information criteria proposed by Bai & Ng (2002) tend to overestimate the number of factors. To alleviate this problem, we propose to replace the standard covariance matrix with the robust covariance matrix proposed in this manuscript. 
Our Monte Carlo simulations show that, in the absence of contamination, the standard and robust methods are equivalent. In the presence of outliers, the number of estimated factors increases with the non-robust methods, while it remains the same with the robust methods. As an application with real data, we study concentrations of the pollutant PM$_{10}$ measured in the Île-de-France region of France.
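A minimal univariate sketch of the spectral route described above may help fix ideas: Fourier coefficients are obtained either by ordinary least squares or by a Huber-type M-regression (a hand-rolled IRLS loop standing in for the thesis's M-estimation; all function names, tuning constants and the contaminated AR(1) example below are illustrative assumptions, not the authors' code), and the resulting (M-)periodogram ordinates are then inverted to approximate the autocovariances.

import numpy as np

def huber_fit(X, y, c=1.345, n_iter=30):
    # Iteratively reweighted least squares for a Huber-type M-regression (illustrative).
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12          # MAD scale estimate
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

def periodogram_autocov(x, max_lag=10, robust=False):
    # Fourier coefficients at each Fourier frequency by (robust) harmonic regression,
    # then an approximate inversion of the (M-)periodogram to autocovariances.
    x = np.asarray(x, float)
    x = x - (np.median(x) if robust else np.mean(x))
    n, t = len(x), np.arange(1, len(x) + 1)
    freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    power = np.empty(len(freqs))
    for j, w in enumerate(freqs):
        D = np.column_stack([np.cos(w * t), np.sin(w * t)])
        a, b = (huber_fit(D, x) if robust else np.linalg.lstsq(D, x, rcond=None)[0])
        power[j] = 0.5 * (a * a + b * b)
    return np.array([np.sum(power * np.cos(h * freqs)) for h in range(max_lag + 1)])

# AR(1) series with a few large additive outliers injected.
rng = np.random.default_rng(0)
n = 400
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
x[rng.choice(n, 8, replace=False)] += 12.0

print(np.round(periodogram_autocov(x, 5, robust=False), 2))   # distorted by the outliers
print(np.round(periodogram_autocov(x, 5, robust=True), 2))    # closer to the clean AR(1) values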
25

Martinho, Carla Alexandra Lopes. "Modelos vectoriais ARMA : estudo e potencialidades." Master's thesis, Instituto Superior de Economia e Gestão, 1997. http://hdl.handle.net/10400.5/21745.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Master's degree in Mathematics Applied to Economics and Management
The aim of this work is to present the methodology of vectorial ARMA models applied to real time series. These models generalise the univariate ARMA models and the multivariate transfer function models; the advantage of vectorial ARMA modelling is that it allows the joint analysis of time series which exhibit feedback effects. It is our intention to show that this joint modelling improves both the descriptive and the forecasting capacity. The application is based on two real examples, comparing the results from the vectorial ARMA, the univariate ARMA and the transfer function modelling.
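For readers who want to try the approach, a rough sketch of fitting a vector ARMA(1,1) with statsmodels' VARMAX is given below; the two simulated series with feedback are invented for the example and do not correspond to the series analysed in the dissertation.

import numpy as np
from statsmodels.tsa.statespace.varmax import VARMAX

# Two simulated series with mutual feedback (hypothetical data for the example).
rng = np.random.default_rng(0)
n = 300
y = np.zeros((n, 2))
for t in range(1, n):
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.3 * y[t - 1, 1] + rng.normal()
    y[t, 1] = 0.2 * y[t - 1, 0] + 0.4 * y[t - 1, 1] + rng.normal()

# Vector ARMA(1,1); statsmodels warns that unrestricted VARMA models are not
# generically identified, which is acceptable for a demonstration.
res = VARMAX(y, order=(1, 1)).fit(disp=False)
print(res.aic)
print(res.forecast(steps=12))      # joint 12-step-ahead forecasts for both series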
26

Kajitani, Yoshio. "Forecasting time series with neural nets." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ39836.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Policarpi, Andrea. "Transformers architectures for time series forecasting." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25005/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Time series forecasting is an important task related to countless applications, spanning from anomaly detection to healthcare problems. The ability to predict future values of a given time series is a non-trivial operation, whose complexity heavily depends on the number and the quality of the data available. Historically, the problem has been addressed by statistical models and simple deep learning architectures such as CNNs and RNNs; recently, many Transformer-based models have also been used, with excellent results. This thesis evaluates the performance of two Transformer-based models, namely a TransformerT2V and an Informer, when applied to time series forecasting problems, and compares them with non-Transformer architectures. A second contribution resides in the exploration of the Informer's ProbSparse attention mechanism and the suggestion of improvements to increase the model's performance.
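As a hedged illustration of the "T2V" ingredient, the sketch below implements a Time2Vec embedding feeding a standard PyTorch Transformer encoder; the layer sizes, the choice of k = 7 periodic terms and the dummy time index are arbitrary assumptions and do not reproduce the thesis architecture.

import torch
import torch.nn as nn

class Time2Vec(nn.Module):
    # Time2Vec embedding: one linear term plus k periodic (sine) terms per time step.
    def __init__(self, k: int):
        super().__init__()
        self.w0 = nn.Parameter(torch.randn(1))
        self.b0 = nn.Parameter(torch.randn(1))
        self.w = nn.Parameter(torch.randn(k))
        self.b = nn.Parameter(torch.randn(k))

    def forward(self, t):                          # t: (batch, seq_len, 1)
        linear = self.w0 * t + self.b0             # (batch, seq_len, 1)
        periodic = torch.sin(t * self.w + self.b)  # (batch, seq_len, k)
        return torch.cat([linear, periodic], dim=-1)

embed = Time2Vec(k=7)                              # embedding dimension 1 + 7 = 8
t = torch.arange(48, dtype=torch.float32).reshape(1, 48, 1)
tokens = embed(t)                                  # (1, 48, 8), ready for attention
layer = nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
print(encoder(tokens).shape)                       # torch.Size([1, 48, 8])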
28

Bodwick, M. K. "Multivariate time series : The search for structure." Thesis, Lancaster University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233978.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Batres-Estrada, Bilberto. "Deep learning for multivariate financial time series." Thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168751.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Deep learning is a framework for training and modelling neural networks which have recently surpassed conventional methods in many learning tasks, most prominently image and speech recognition. This thesis uses deep learning algorithms to forecast financial data. The deep learning framework is used to train a neural network: a Deep Belief Network (DBN) coupled to a Multilayer Perceptron (MLP). It is used to choose stocks to form portfolios, and the resulting portfolios have better returns than the median of the stocks forming the list. The stocks forming the S&P 500 are included in the study. The results obtained from the deep neural network are compared to benchmarks from a logistic regression network, a multilayer perceptron and a naive benchmark. The results obtained from the deep neural network are better and more stable than the benchmarks. The findings support the view that deep learning methods will find their way into finance due to their reliability and good performance.
30

Cheung, Chung-pak, and 張松柏. "Multivariate time series analysis on airport transportation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31976499.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Ghalwash, Mohamed. "Interpretable Early Classification of Multivariate Time Series." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/239730.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Computer and Information Science
Ph.D.
Recent advances in technology have led to an explosion in data collection over time rather than in a single snapshot. For example, microarray technology allows us to measure gene expression levels in different conditions over time. Such temporal data gives data miners the opportunity to develop algorithms for domain-related problems; for example, a collection of time series of several different classes can be created by observing various patient attributes over time, and the task is to classify an unseen patient based on his or her temporal observations. In time-sensitive applications such as medicine, certain aspects have to be considered besides providing accurate classification. The first aspect is providing early classification. Accurate and timely diagnosis is essential for allowing physicians to design appropriate therapeutic strategies at early stages of diseases, when therapies are usually the most effective and the least costly. We propose a probabilistic hybrid method that allows for early, accurate, and patient-specific classification of multivariate time series and that, although trained on full time series, offers classification at a very early time point during the diagnosis phase, while staying competitive in terms of accuracy with other models that use the full time series in both training and testing. The method has attained very promising results and outperformed the baseline models on a dataset of response to drug therapy in Multiple Sclerosis patients and on a sepsis therapy dataset. Although attaining accurate classification is the primary goal of the data mining task, the second aspect that matters in medical applications is interpretability: decisions should not only be accurate and obtained early, but should also be easy to interpret, since physicians tend to prefer interpretable methods over black-box methods. For that purpose, we propose interpretable methods for early classification that extract interpretable patterns from the raw time series, to help physicians provide early diagnoses and to give insight into, and confidence in, the classification results. The proposed methods have been shown to be more accurate and to provide classifications earlier than three alternative state-of-the-art methods when evaluated on human viral infection datasets and a larger myocardial infarction dataset. The third aspect that has to be considered in medical applications is the need for predictions to be accompanied by a measure which allows physicians to judge the uncertainty of, or belief in, the prediction. Knowing the uncertainty associated with a given prediction is especially important in clinical diagnosis, where data mining methods assist clinical experts in making decisions and optimizing therapy. We propose an effective method to provide uncertainty estimates for the proposed interpretable early classification methods. The method was evaluated on four challenging medical applications by characterizing the decrease in the uncertainty of predictions, and we showed that it meets the requirements of an uncertainty estimate (the proposed measure takes values in the range [0,1] and propagates over time). We expect this PhD thesis to strengthen the link between the data mining community and medical domain experts, and to give physicians sufficient confidence to put the proposed methods into real practice.
Temple University--Theses
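A toy version of the "decide as early as confidence allows" idea, not the probabilistic hybrid method of the thesis, can be sketched with one classifier per prefix length and a probability threshold; the synthetic data, the threshold and the logistic regression base learner below are assumptions made for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic two-class data whose classes separate more clearly as time goes on.
rng = np.random.default_rng(1)
n, T = 200, 50
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, T)) + y[:, None] * np.linspace(0.0, 1.5, T)

# One classifier per prefix length (every 5 time points).
prefix_models = {t: LogisticRegression(max_iter=1000).fit(X[:, :t], y)
                 for t in range(5, T + 1, 5)}

def early_classify(series, threshold=0.9):
    # Emit a label as soon as the predicted class probability clears the threshold.
    for t, model in prefix_models.items():
        proba = model.predict_proba(series[:t][None, :])[0]
        if proba.max() >= threshold:
            return int(proba.argmax()), t
    return int(proba.argmax()), T                  # fall back to the full-length model

label, t_used = early_classify(X[0])
print(f"class {label} decided after {t_used} of {T} time points")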
32

Thu, Huong Nguyen. "Goodness-of-fit in Multivariate Time Series." Doctoral thesis, Universidad Carlos III de Madrid, Department of Statistics, Facultad de Ciencias Sociales y Jurídicas, Campus de Getafe, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-18770.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Goodness-of-fit is an important task in time series analysis. In this thesis, we propose a new family of statistics and a new goodness-of-fit process for the well-known multivariate autoregressive moving average VARMA(p,q) model. Some preliminary results are studied first for an initial goodness-of-fit method. Since the residuals of the fit play an important role in identification and diagnostic checking, relations between least squares residuals and true errors are studied. An explicit representation of the information matrix as a limit is also obtained. Second, we generalize a univariate goodness-of-fit process studied in Ubierna and Velilla (2007). An explicit form of the limit covariance function is presented, as well as a characterization of its limit properties in terms of a parametric Gaussian process. This motivates the introduction of a new goodness-of-fit process based on a transformed correlation matrix sequence. The construction and properties of the associated transformation matrices are investigated. We also prove the convergence of this new process to the Brownian bridge. Thus, statistics defined as functionals of our process use a null distribution that is free of unknown parameters. Finally, simulations, comparisons, and examples of application are presented to illustrate our theoretical findings and contributions. Our proposed goodness-of-fit statistics are shown to be quite sensitive for detecting lack of fit. They also seem to be relatively independent of the choice of a particular lag.
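A readily available analogue of such residual-based diagnostic checking is the multivariate portmanteau (whiteness) test for fitted VAR models in statsmodels; the sketch below, on simulated bivariate VAR(1) data with arbitrary coefficients, only illustrates that workflow and not the new statistics proposed in the thesis.

import numpy as np
from statsmodels.tsa.api import VAR

# Simulated bivariate VAR(1) data with arbitrary coefficients.
rng = np.random.default_rng(0)
n = 400
A = np.array([[0.5, 0.1], [0.2, 0.3]])
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + rng.normal(size=2)

res = VAR(y).fit(maxlags=1)
# Multivariate portmanteau test on the residual autocorrelations up to lag 12.
print(res.test_whiteness(nlags=12).summary())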
33

Sánchez, Enríquez Heider Ysaías. "Anomaly detection in streaming multivariate time series." Tesis, Universidad de Chile, 2017. http://repositorio.uchile.cl/handle/2250/149078.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Doctor of Sciences, Computer Science specialization
This thesis presents solutions to the problem of anomaly detection in multivariate data streams. Given a time series subsequence (a small part of the original series) as input, one wants to know whether it corresponds to a normal observation or is an anomaly with respect to the historical information. Difficulties can arise mainly because the types of anomaly are unknown. Moreover, detection becomes a costly task due to the large amount of data and the existence of variables from heterogeneous domains. In this context, an anomaly detection approach based on discord discovery is proposed, which associates the anomaly with the most unusual subsequence according to similarity measures. Typically, dimensionality reduction and indexing methods are designed to constrain the problem and solve it efficiently. In addition, techniques are proposed to generate representative and concise models from the raw data in order to find the unusual patterns; these techniques also improve search efficiency by reducing dimensionality. Multivariate series are addressed using representation techniques over non-normalized subsequences, and new discord discovery techniques based on metric indexes are proposed. The proposed approach is compared with state-of-the-art techniques. The experimental results show that applying a translation transformation and time series representation can help improve detection effectiveness. Moreover, the metric indexing methods and the discord discovery heuristics can efficiently solve anomaly detection in both offline and online modes on multivariate time series streams.
This work was funded by a CONICYT - CHILE / Doctorado para Extranjeros scholarship and partially supported by FONDEF Project D09I1185 and the NIC Chile scholarship programme.
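The discord-discovery idea, flagging the subsequence farthest from its nearest non-overlapping neighbour, can be shown with a brute-force NumPy sketch; the z-normalisation, window length and toy signal are illustrative assumptions, and the indexing structures developed in the thesis are what make this tractable on real streams.

import numpy as np

def top_discord(x, m):
    # Brute-force discord discovery: the length-m subsequence whose distance to its
    # nearest non-overlapping neighbour is largest (the "most unusual" pattern).
    x = np.asarray(x, float)
    n = len(x) - m + 1
    subs = np.array([x[i:i + m] for i in range(n)])
    # z-normalise each subsequence so that shape, not level, drives the comparison.
    subs = (subs - subs.mean(axis=1, keepdims=True)) / (subs.std(axis=1, keepdims=True) + 1e-12)
    best_i, best_d = -1, -np.inf
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - m + 1):i + m] = np.inf        # exclude trivial (overlapping) matches
        if d.min() > best_d:
            best_i, best_d = i, d.min()
    return best_i, best_d

t = np.linspace(0, 12 * np.pi, 1200)
x = np.sin(t)
x[600:620] += 1.5                                  # injected anomaly
print(top_discord(x, m=40))                        # the discord overlaps the injected anomaly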
34

Damle, Chaitanya. "Flood forecasting using time series data mining." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001038.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Pisanelli, Gioele. "Time series forecasting for smart waste management." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22432/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This thesis work, carried out during an internship at Sis.Ter Srl, proposes a waste-collection optimisation system applied to the litter bins located in the city of Delft, in the Netherlands. The system builds the best route for emptying the bins that most need it, determined from the current state of the bins (known through sensors) and from their future state (predicted using neural networks).
36

Wohlrabe, Klaus. "Forecasting with mixed-frequency time series models." Diss., lmu, 2009. http://nbn-resolving.de/urn:nbn:de:bvb:19-96817.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Fiorucci, José Augusto. "Time series forecasting : advances on Theta method." Universidade Federal de São Carlos, 2016. https://repositorio.ufscar.br/handle/ufscar/7399.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Accurate and robust forecasting methods for univariate time series are critical, as historical data can be used in the strategic planning of future operations such as buying and selling, to ensure product inventory and meet market demand. In this context, several time series forecasting competitions have been organised, the M3-Competition being the largest. As the winner of the M3-Competition, the Theta method has attracted attention from researchers for its predictive performance and simplicity. The Theta method is a combination of other methods which proposes the decomposition of the deseasonalised time series into two other series called "theta lines". The first completely removes the curvature of the data, thus estimating the long-term trend; the second doubles the curvature to better approximate the short-term behaviour. Several issues have been raised about the Theta method, even by its originators, including the number of theta lines, their parameters, the weights used to combine them, and the construction of prediction intervals. This doctoral thesis resolves part of these issues. We derive optimal weights for combining the theta lines, and this result is used to derive statistical models which generalise or approximate the standard Theta method. The statistical methodology is used for parameter estimation and for computing the prediction intervals. The optimal weights are also used to propose new methods that employ two or more theta lines. Part of the proposed methodology is implemented in a package for the R programming language. In an empirical investigation using the M3-Competition data set with more than 3000 time series, the proposed methods and models demonstrated significant accuracy. The study's primary approach, the Dynamic Optimised Theta Model, outperformed all benchmark methods, constituting, in all likelihood, the highest-performing method for this data set available in the literature.
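A compact sketch of the standard (non-seasonal) Theta method may help fix ideas: the theta = 0 line is the fitted linear trend, the theta = 2 line doubles the curvature around it, and the forecast combines the extrapolated trend with a simple-exponential-smoothing forecast of the theta = 2 line using equal weights. The coarse SES grid search and the toy series are assumptions for illustration; the optimised and dynamic weights are what the thesis adds on top of this baseline.

import numpy as np

def ses_forecast(y, h, alpha=None):
    # Simple exponential smoothing; alpha chosen by a coarse in-sample SSE grid search if not given.
    def run(a):
        level, sse = y[0], 0.0
        for obs in y[1:]:
            sse += (obs - level) ** 2
            level = a * obs + (1 - a) * level
        return level, sse
    if alpha is None:
        alpha = min(np.linspace(0.05, 0.95, 19), key=lambda a: run(a)[1])
    return np.full(h, run(alpha)[0])

def theta_forecast(y, h):
    # Standard Theta: average of the extrapolated theta=0 line (linear trend)
    # and the SES forecast of the theta=2 line (doubled curvature).
    y = np.asarray(y, float)
    n = len(y)
    t = np.arange(1, n + 1)
    slope, intercept = np.polyfit(t, y, 1)
    line0_fc = intercept + slope * np.arange(n + 1, n + h + 1)
    line2 = 2.0 * y - (intercept + slope * t)
    return 0.5 * (line0_fc + ses_forecast(line2, h))

rng = np.random.default_rng(0)
y = 100 + 0.5 * np.arange(120) + rng.normal(scale=3.0, size=120)   # trending toy series
print(np.round(theta_forecast(y, 6), 1))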
38

Billah, Baki 1965. "Model selection for time series forecasting models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8840.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Montagnon, Chris. "Singular value decomposition and time series forecasting." Thesis, Imperial College London, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.535012.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
40

ABELEM, ANTONIO JORGE GOMES. "ARTIFICIAL NEURAL NETWORKS IN TIME SERIES FORECASTING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1994. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8489@1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
This dissertation investigates the use of Artificial Neural Networks (ANNs) in time series forecasting, especially financial time series, which are typically noisy and have no apparent periodicity. The dissertation covers four major parts: the study of Artificial Neural Networks and time series; the design of ANNs applied to time series forecasting; the development of a simulation environment; and a case study. The first part involved the study of Artificial Neural Network and time series theory, resulting in an overview of the use of ANNs in time series forecasting. This overview confirmed the predominance of backpropagation as the training algorithm, as well as the use of statistical models, such as regression and moving averages, for evaluating the neural networks. In the design of the ANNs, three performance measures were considered: convergence, generalization and scalability. To control these factors, the following choices were examined: the activation function (sigmoid or hyperbolic tangent); the cost function (MSE, Mean Square Error, or MAD, Mean Absolute Deviation); the parameters controlling gradient descent and training time (learning rate and momentum term); and the network architecture. The simulation environment was developed in the C language, with 3,600 lines of code distributed over three main modules: the user interface, the simulation and the support functions modules. The user interface module is responsible for network configuration and graphical visualization; the simulation module performs the training and testing of the ANNs; the support functions module takes care of pre- and post-processing, file management and computation of the evaluation metrics. The case study concerned the design of an ANN to forecast the gold price in the international market. Two kinds of prediction were used: univariate (single- and multi-step) and multivariate, the latter using foreign exchange rates as additional inputs. The metrics used to evaluate the ANN performance were: Theil's U coefficient, MSE (Mean Square Error), NRMSE (Normalized Root Mean Square Error), POCID (Percentage Of Change In Direction), scattergrams and graphical comparison. The results were also compared with the Box-Jenkins model, confirming the superiority of ANNs in handling non-linear and highly noisy data.
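The windowing step, lagged values in, next value out, is easy to reproduce; the sketch below uses scikit-learn's MLPRegressor as a generic stand-in for the backpropagation network described above (the synthetic random-walk "gold price", the 12 lags and the layer size are assumptions for illustration) and reports the NRMSE metric cited in the abstract.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, lags):
    # Windowing: turn a univariate series into (lagged inputs, next value) pairs.
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

# Synthetic random walk standing in for the gold-price series of the case study.
rng = np.random.default_rng(0)
price = 380 + np.cumsum(rng.normal(scale=2.0, size=600))

X, y = make_windows(price, lags=12)
split = int(0.8 * len(y))
mlp = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                   max_iter=5000, random_state=0)
mlp.fit(X[:split], y[:split])
pred = mlp.predict(X[split:])
nrmse = np.sqrt(np.mean((pred - y[split:]) ** 2)) / np.std(y[split:])
print(round(nrmse, 3))                             # Normalized Root Mean Square Error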
41

ZANDONADE, ELIANA. "USING NEURAL NETWORK IN TIME SERIES FORECASTING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1993. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8641@1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
This work applies a new information-processing methodology, neural networks, to time series forecasting. We use the backpropagation model, which consists of a multilayer feed-forward network with hidden layers. We apply the backpropagation model to the analysis of four time series: a noisy series, a series with a trend, a seasonal series, and a series of electrical energy consumption for Uruguaiana, RS. The results are compared with Box and Jenkins' ARIMA models and with an intervention model.
42

Qu, Haizhou. "Financial forecasting using time series and news." Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/22508/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This thesis focuses on the field of financial forecasting. Most studies that use financial news as an input in the prediction process take it for granted that news has an effect on financial markets. The starting point for this research is the need to question this assumption and, if it is confirmed, to attempt to quantify it. Therefore, the first study investigates the correlation between news and stock performance based on a dataset covering both trading data and news for 25 companies. We propose a novel framework to quantify the relationship based on two matrices of pairwise distances between companies. The first matrix represents distances between sets of news articles, while the other represents the pairwise distances between the financial performances. The detected correlation varies with time and reaches statistical significance. The next study tests whether news can be used as a proxy for future financial performance in a profitable trading strategy. The strategy proposed here uses our previous findings to select the stocks whose financial performance is most strongly affected by news. The results show that this strategy outperforms competitive baselines. Based on the proposed framework, a textual feature ranking method is proposed. This method assigns weights to textual features, and these weights are optimised to maximise the value of the relation to be quantified; a gradient descent algorithm is applied to obtain the optimal weights. There are two findings: first, words related to named entities are weighted more than other words; second, the optimal weights lead to a significantly better indicator for selecting winning stocks and hence to more profitable strategies. Lastly, the popular convolutional neural network is used to implement a novel financial forecasting approach which uses the stock chart as input. The results show that this approach can provide effective predictions of future stock price movements.
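The core quantification step, correlating two matrices of pairwise distances between companies, is in essence a Mantel-style test; the NumPy sketch below, with synthetic 25-company distance matrices standing in for the news and performance distances, shows one common way to compute the correlation and a permutation p-value, without claiming to reproduce the exact framework of the thesis.

import numpy as np

def mantel_correlation(D1, D2, n_perm=999, seed=0):
    # Correlation between the upper triangles of two pairwise-distance matrices,
    # with a one-sided permutation p-value (Mantel-style test).
    iu = np.triu_indices_from(D1, k=1)
    v1, v2 = D1[iu], D2[iu]
    r_obs = np.corrcoef(v1, v2)[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    n = D1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)
        r_perm = np.corrcoef(D1[np.ix_(p, p)][iu], v2)[0, 1]
        count += r_perm >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)

# Toy input: "news" and "performance" distance matrices for 25 companies.
rng = np.random.default_rng(1)
latent = rng.normal(size=(25, 3))
D_news = np.linalg.norm(latent[:, None] - latent[None, :], axis=-1)
D_perf = D_news + rng.normal(scale=0.2, size=D_news.shape)
D_perf = (D_perf + D_perf.T) / 2
np.fill_diagonal(D_perf, 0.0)
print(mantel_correlation(D_news, D_perf))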
43

Bhurtel, Bidur Prasad. "CNNs VERSUS LSTMs FOR TIME SERIES FORECASTING." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2830.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The goal of this thesis is to compare the performance of long short-term memory (LSTM) recurrent neural networks and feedforward convolutional neural networks (CNNs) in time series forecasting. The forecasting problem focuses on predicting future values of a time series using the current and a set of previous (lagged) values of the time series. LSTMs are used extensively in time series forecasting problems because they are specifically designed to process sequential and temporal data; CNNs, on the other hand, are not designed to process such sequential data. Although CNNs appear to be a poor choice for time series forecasting, it is informative to compare the performance of CNNs and LSTMs using exactly the same set of current and previous values of several combinations of multiple time series. The specific forecasting problem considered involves predicting the outflow for a creek sub-basin using outflow, temperature, and precipitation time series consisting of 185,544 hourly datapoints. The Granger causality (GC) test is used to confirm that temperature and precipitation Granger-cause outflow and should, therefore, be helpful in predicting outflow; the GC test also provides information on the lag required to influence outflow prediction. The forecasting problem is divided into developing three distinct types of models: one-hour forecast models, two-hour forecast models, and one-hour and two-hour extreme-event forecast models. The one-hour forecast models are trained to predict the next-hour outflow using the current and a set of previous values of the time series. The two-hour forecast models are trained to predict the outflow two hours ahead using the current and previous values of the time series; a variation of the two-hour model is trained with the previous and current values together with the next-hour prediction of the one-hour model. The extreme-event forecast models are trained only with segments of the time series containing extreme events. The performance of the LSTM and CNN implementations of the models is compared objectively using the mean square error and subjectively by comparing the predictions visually. The results from various combinations of the creek sub-basin time series show that the CNN models outperform the LSTM models. These results are quite unexpected, given that CNNs appear to be a poor choice for time series forecasting whereas LSTMs are a good choice. The primary reason for the superior performance of the CNN models is that, through the choice of appropriate filters, CNNs are capable of generating complex features which are coupled simultaneously across time series predictor variables and across time lags in the series of convolution layers. However, it cannot be concluded that CNNs will, in general, outperform LSTMs without testing the models on a large ensemble of diverse data sets.
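The Granger-causality screening mentioned in the abstract can be reproduced with statsmodels' grangercausalitytests; the simulated hourly outflow/precipitation pair below is a made-up stand-in for the creek data and the lag range is arbitrary.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-in for the hourly creek data: precipitation drives outflow with a 2-hour lag.
rng = np.random.default_rng(0)
n = 2000
precip = rng.gamma(shape=0.3, scale=2.0, size=n)
outflow = np.zeros(n)
for t in range(2, n):
    outflow[t] = 0.7 * outflow[t - 1] + 0.5 * precip[t - 2] + rng.normal(scale=0.1)

data = pd.DataFrame({"outflow": outflow, "precip": precip})
# Tests whether the second column helps predict the first, for lags 1..6.
results = grangercausalitytests(data[["outflow", "precip"]], maxlag=6)
for lag, (tests, _) in results.items():
    print(lag, "ssr F-test p-value:", round(tests["ssr_ftest"][1], 4))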
44

Stensholt, B. K. "Statistical analysis of multivariate bilinear time series models." Thesis, University of Manchester, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.582853.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In the last thirty years there has been extensive research on the analysis of linear time series models. In analysing univariate and multivariate time series, the assumption of linearity is, in many cases, unrealistic. With this in view, many nonlinear models for the analysis of time series have recently been proposed, mainly for univariate series. One class of proposed models which has received considerable interest is the class of bilinear models. In particular, the theory of univariate bilinear time series has been considered in a number of papers (cf. Granger and Andersen (1978), Subba Rao (1981) and Bhaskara Rao et al. (1983) and references therein); these models are analogues of the bilinear systems proposed and studied previously by control theorists. Recently, several analytic properties of these time series models have been investigated, and their estimation and applications have been reported in Subba Rao and Gabr (1983). But it is important to study the relationship between two or more time series also in the presence of nonlinearity. Therefore, multivariate generalisations of the bilinear models have been considered by Subba Rao (1985) and Stensholt and Tjøstheim (1985, 1987). Here we consider some theoretical aspects of multivariate bilinear time series models (such as strict and second-order stationarity, ergodicity, invertibility and, for special cases, strong consistency of least squares estimates). The theory developed is illustrated with simulation results. Two applications to real bivariate data (mink-muskrat data and "housing starts - houses sold" data) and the FORTRAN programs developed in this project are also included.
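For readers unfamiliar with the model class, a univariate bilinear series is easy to simulate; the sketch below generates X_t = a X_{t-1} + b X_{t-1} e_{t-1} + e_t with arbitrarily chosen coefficients inside a stationarity-friendly range and shows that its lag-1 sample autocorrelation looks unremarkably "linear", which is why dedicated theory and diagnostics are needed.

import numpy as np

def simulate_bilinear(n=500, a=0.4, b=0.3, sigma=1.0, seed=0, burn=200):
    # Simple bilinear model X_t = a X_{t-1} + b X_{t-1} e_{t-1} + e_t, a univariate
    # special case of the multivariate bilinear models studied in the thesis.
    rng = np.random.default_rng(seed)
    e = rng.normal(scale=sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = a * x[t - 1] + b * x[t - 1] * e[t - 1] + e[t]
    return x[burn:]

x = simulate_bilinear()
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(acf1, 3))          # second-order moments alone do not reveal the bilinearity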
45

Marquier, Basile. "Novel Bayesian methods on multivariate cointegrated time series." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19341/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Many economic time series exhibit random walk or trend dynamics and other persistent non-stationary behaviour (e.g. stock prices, exchange rates, the unemployment rate and net trading). If a time series is not stationary, then any shock can be permanent and there is no tendency for its level to return to a constant mean over time; moreover, in the long run, the volatility of the process is expected to grow without bound, and the time series cannot be predicted based on historical observations. Cointegration allows the identification of integrated economic time series that exhibit similar dynamics in the long run, and the estimation of their relationships, by exploiting the stationary linear combinations of these time series. This thesis proposes three Bayesian estimation methods for the well-known Vector Error Correction Model (VECM) of difference-stationary time series in order to extract the long-run equilibrium relationships. Each method is implemented using Markov Chain Monte Carlo (MCMC) and illustrated on synthetic data, and then on real economic data sets. The first method consists of a static model, where we compare comovements between Eurozone economic time series comprising net trading, long-term interest rates and the harmonised unemployment rate. Primiceri (2005) established a time-varying model for the vector autoregressive model. Following Primiceri and the idea of the static model of the first method, we construct a time-varying model for our VECM, from which we extract information about the time-varying cointegration matrix and, more interestingly, about its time-varying rank (i.e. the cointegration rank) and the independent cointegration relationships. These first two methods are based on the singular value decomposition of the cointegration matrix from the error correction model and the so-called irrelevance criterion, a flexible thresholding approach to determine its rank. In these two methods, the joint estimation of the cointegration rank and the cointegration relationships is assessed on synthetic data sets before being applied to real data sets (European economies and major stock market exchange indices). The last main chapter of this thesis covers the use of a singular prior distribution on the long-run relationship matrix of the VECM given the cointegration rank. Based on the definition of the singular matrix normal distribution, we also study the space on which such a distribution is defined and its density. We also recall the singular Inverse-Wishart distribution and, in our discussion, we open up the issues arising in implementing a dynamic model by developing the idea of a singular Inverse-Wishart distribution on the variance-covariance matrix of the transition equation (see Chapter 6).
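For orientation, the classical (non-Bayesian) counterpart of this workflow, testing for cointegration and fitting a VECM, is available off the shelf in statsmodels; the two simulated I(1) series below, sharing one common random-walk trend, are assumptions for illustration, and the Bayesian machinery of the thesis replaces the rank selection and estimation steps shown here.

import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Two I(1) series sharing one common random-walk trend (hypothetical example).
rng = np.random.default_rng(0)
n = 500
trend = np.cumsum(rng.normal(size=n))
Y = np.column_stack([trend + rng.normal(scale=0.5, size=n),
                     0.8 * trend + rng.normal(scale=0.5, size=n)])

jres = coint_johansen(Y, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)        # compare against jres.cvt critical values
print("critical values:\n", jres.cvt)

# Fit a VECM with the selected cointegration rank and forecast 10 steps ahead.
vres = VECM(Y, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vres.beta)                            # estimated cointegrating vector
print(vres.predict(steps=10))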
46

Rawizza, Mark Alan. "Time-series analysis of multivariate manufacturing data sets." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10895.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Shah, Nauman. "Statistical dynamical models of multivariate financial time series." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:428015e6-8a52-404e-9934-0545c80da4e1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The last few years have witnessed an exponential increase in the availability and use of financial market data, which is sampled at increasingly high frequencies. Extracting useful information about the dependency structure of a system from these multivariate data streams has numerous practical applications and can aid in improving our understanding of the driving forces in the global financial markets. These large and noisy data sets are highly non-Gaussian in nature and require the use of efficient and accurate interaction measurement approaches for their analysis in a real-time environment. However, most frequently used measures of interaction have certain limitations to their practical use, such as the assumption of normality or computational complexity. This thesis has two major aims; firstly, to address this lack of availability of suitable methods by presenting a set of approaches to dynamically measure symmetric and asymmetric interactions, i.e. causality, in multivariate non-Gaussian signals in a computationally efficient (online) framework, and secondly, to make use of these approaches to analyse multivariate financial time series in order to extract interesting and practically useful information from financial data. Most of our proposed approaches are primarily based on independent component analysis, a blind source separation method which makes use of higher-order statistics to capture information about the mixing process which gives rise to a set of observed signals. Knowledge about this information allows us to investigate the information coupling dynamics, as well as to study the asymmetric flow of information, in multivariate non-Gaussian data streams. We extend our multivariate interaction models, using a variety of statistical techniques, to study the scale-dependent nature of interactions and to analyse dependencies in high-dimensional systems using complex coupling networks. We carry out a detailed theoretical, analytical and empirical comparison of our proposed approaches with some other frequently used measures of interaction, and demonstrate their comparative utility, efficiency and accuracy using a set of practical financial case studies, focusing primarily on the foreign exchange spot market.
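A minimal example of the ICA building block is sketched below: scikit-learn's FastICA recovers heavy-tailed independent drivers from linearly mixed synthetic "returns" (the mixing matrix and Laplace sources are invented for the illustration); the thesis builds its coupling and causality measures on top of this kind of decomposition.

import numpy as np
from sklearn.decomposition import FastICA

# Three observed return series generated by mixing two independent non-Gaussian sources.
rng = np.random.default_rng(0)
n = 2000
sources = rng.laplace(size=(n, 2))                 # heavy-tailed independent drivers
A = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, -0.4]])
returns = sources @ A.T + 0.05 * rng.normal(size=(n, 3))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(returns)             # estimated independent components
print(ica.mixing_.round(2))                        # estimated mixing matrix (up to scale and order)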
48

Yiu, Fu-keung. "Time series analysis of financial index /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18003047.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Saffell, Matthew John. "Knowledge discovery for time series /." Full text open access at:, 2005. http://content.ohsu.edu/u?/etd,247.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Stagge, Anton. "A time series forecasting approach for queue wait-time prediction." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279291.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Waiting in queues is an unavoidable part of life, and not knowing how long the wait is going to be can be a big source of anxiety. In an attempt to mitigate this, and to be able to manage their queues, companies often try to estimate wait-times. This is especially important in healthcare, since the patients are most likely already under some distress. In this thesis the performance of two different machine learning (ML) approaches and a simulation approach were compared on the wait-time prediction problem in a digital healthcare setting. Additionally, a combination approach was implemented, combining the best ML model with the simulation approach. The ML approaches used historical data of the patient queue in order to produce a model which could predict the wait-time for new patients joining the queue. The simulation algorithm mimics the queue in a virtual environment and simulates time moving forward until the new patient joining the queue is assigned a clinician, thus producing a wait-time estimation. The combination approach used the wait-time estimations produced by the simulation algorithm as an additional input feature for the best ML model. A Temporal Convolutional Network (TCN) model and a Long Short-Term Memory network (LSTM) model were implemented and represented the sequence modeling ML approach. A Random Forest Regressor (RF) model and a Support Vector Regressor (SVR) model were implemented and represented the traditional ML approach. In order to introduce the temporal dimension to the traditional ML approach, the exponential smoothing preprocessing technique was applied. The results indicated that there was a statistically significant difference between all models. The TCN model and the simulation algorithm had the lowest Mean Square Error (MSE) of all individual models. Both sequence modeling models had lower MSE compared to both of the traditional ML models. The combination model had the lowest MSE of all, adopting the best performance traits from both the ML approach and the simulation approach. However, the combination model is the most complex, and thus requires the most maintenance. Due to the limitations in the study, no single approach can be concluded as optimal. However, the results suggest that the sequence modeling approach is a viable option in wait-time prediction, and is recommended for future research or applications.
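As a rough illustration of the preprocessing used for the traditional ML models, the sketch below adds exponentially weighted versions of a queue-length feature before fitting a random forest; the synthetic queue, the smoothing constants and the wait-time formula are assumptions for the example, not the thesis data or models.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Synthetic queue state and wait times (minutes), standing in for the clinic data.
rng = np.random.default_rng(0)
n = 3000
queue_len = np.clip(np.round(10 + np.cumsum(rng.normal(size=n))), 0, None)
clinicians = rng.integers(3, 8, size=n)
wait = 5 * queue_len / clinicians + rng.normal(scale=2.0, size=n)

df = pd.DataFrame({"queue_len": queue_len, "clinicians": clinicians})
for alpha in (0.1, 0.3, 0.7):                       # several smoothing horizons
    df[f"queue_ewm_{alpha}"] = df["queue_len"].ewm(alpha=alpha).mean()

split = int(0.8 * n)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(df[:split], wait[:split])
mse = np.mean((rf.predict(df[split:]) - wait[split:]) ** 2)
print(round(mse, 2))                                # test MSE, the thesis's objective metric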

До бібліографії