Dissertations / Theses on the topic 'Série chronologique'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Série chronologique.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Boughrara, Adel. "Sur la modélisation dynamique retrospective et prospective des séries temporelles : une étude méthodologique." Aix-Marseille 3, 1997. http://www.theses.fr/1997AIX32054.
The past years have witnessed intensive competition among economic and econometric methodologies attempting to explain macroeconomic behaviour. Alternative schools have made claims with respect both to the purity of their methodology and to their ability to explain the facts. This thesis investigates the epistemological foundations of the major competitors, namely the new classical school, with its links to prospective econometric modelling, on the one hand, and retrospective modelling, which is closer to inductive methods, on the other hand. The main conclusion of the thesis is that none of the rival schools has a very tight link with the Popperian epistemology of falsificationism.
Guillemé, Maël. "Extraction de connaissances interprétables dans des séries temporelles." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S102.
Energiency is a company that sells a platform allowing manufacturers to analyze their energy consumption data, represented in the form of time series. This platform integrates machine learning models to meet customer needs. The application of such models to time series encounters two problems: on the one hand, some classical machine learning approaches have been designed for tabular data and must be adapted to time series; on the other hand, the results of some approaches are difficult for end users to understand. In the first part, we adapt a method to search for occurrences of temporal rules in time series from machines and industrial infrastructures. A temporal rule captures successional relationships between behaviors in time series. In industrial series, due to the presence of many external factors, these regular behaviors can be distorted. Current methods for searching for the occurrences of a rule use a distance measure to assess the similarity between sub-series. However, these measures are not suitable for assessing the similarity of distorted series such as those found in industrial settings. The first contribution of this thesis is a method for searching for occurrences of temporal rules that is able to capture this variability in industrial time series. For this purpose, the method integrates elastic distance measures capable of assessing the similarity between slightly deformed time series. The second part of the thesis is devoted to the interpretability of time series classification methods, i.e. the ability of a classifier to return explanations for its results. These explanations must be understandable by a human. Classification is the task of associating a time series with a category. For an end user inclined to make decisions based on a classifier's results, understanding the rationale behind those results is of great importance; otherwise, it amounts to blind confidence in the classifier. The second contribution of this thesis is an interpretable time series classifier that can directly provide explanations for its results. This classifier uses local information on the time series to discriminate between them. The third and last contribution of this thesis is a method to explain a posteriori any result of any classifier. We carried out a user study to evaluate the interpretability of our method.
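The "elastic distance measures" mentioned in this abstract belong to the dynamic-time-warping family. As a rough, hypothetical illustration (not the exact measure developed in the thesis), here is a minimal Python sketch of the classic DTW distance between two slightly deformed series:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series.

    Illustrative sketch only: elastic measures of this family tolerate small
    temporal deformations that a point-wise (Euclidean) distance would punish.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two slightly warped versions of the same pattern remain close under DTW.
t = np.linspace(0, 2 * np.pi, 60)
x = np.sin(t)
y = np.sin(0.9 * t + 0.3)
print(dtw_distance(x, y), dtw_distance(x, -x))
```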
Houfaidi, Souad. "Robustesse et comportement asymptotique d'estimateurs des paramètres d'une série chronologique : (AR(P) et ARMA(P, Q))." Nancy 1, 1986. http://www.theses.fr/1986NAN10065.
Gagnon, Jean-François. "Prévision humaine de séries temporelles." Doctoral thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25243.
Iraqui, Samir. "Détection statique en temps semi réel de valeurs aberrantes dans une série chronologique de données bactériologiques." Rouen, 1986. http://www.theses.fr/1986ROUES042.
Chukunyere, Amenan Christiane. "Les modèles VAR(p)." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/35696.
This thesis aims to study a family of methods to jointly model several time series. We use these methods to predict the behavior of five US time series and to highlight the dynamic links that might exist between them. To do this, we use the order-p vector autoregressive models proposed by Sims (1980), which are a multivariate generalization of the Box and Jenkins models. First, we define a set of concepts and statistical tools that will be useful for understanding the notions used later in this thesis. This is followed by a presentation of the Box and Jenkins models and method, which is applied to each of the five series in order to obtain univariate models. Then, we present the VAR(p) models and test their fit to a vector series whose components are the five aforementioned series. We discuss the added value of the multivariate analysis compared to the five univariate analyses.
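As a concrete, hedged illustration of the VAR(p) workflow summarized above (lag selection, joint estimation, forecasting and causality analysis), the sketch below uses Python and statsmodels on simulated data; the series and their names are placeholders, not the five US series studied in the thesis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated stand-in for a small system of interdependent series.
rng = np.random.default_rng(0)
n, k = 200, 5
A = 0.5 * np.eye(k) + 0.05 * rng.standard_normal((k, k))   # stable VAR(1) dynamics
data = np.zeros((n, k))
for t in range(1, n):
    data[t] = data[t - 1] @ A.T + rng.standard_normal(k)
df = pd.DataFrame(data, columns=[f"y{i+1}" for i in range(k)])

model = VAR(df)
order = model.select_order(maxlags=8)            # AIC/BIC/HQ lag-order selection
res = model.fit(order.aic)                       # estimate VAR(p) with the AIC choice
print(res.summary())
print(res.forecast(df.values[-res.k_ar:], steps=4))          # 4-step joint forecast
# Granger-causality tests can highlight dynamic links between the series:
print(res.test_causality("y1", ["y2"], kind="f").summary())
```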
Desrosiers, Maxime. "Le prix du risque idiosyncrasique : une analyse en séries temporelles et coupe transversale." Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67076.
Benson, Marie Anne. "Pouvoir prédictif des données d'enquête sur la confiance." Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/69497.
Confidence survey data are time series containing the responses to questions aiming to measure the confidence and expectations of economic agents about future economic activity. The richness of these data and their availability in real time attract the interest of many forecasters, who see them as a way to improve their traditional forecasts. In this thesis, I assess the predictive power of survey data for the future evolution of Canadian GDP, while comparing the forecasting performance of the Conference Board of Canada's own confidence indices to the indicators I construct using principal component analysis. Using three simple linear models, I carry out an out-of-sample forecasting experiment with rolling windows over the period 1980 to 2019. The results show that principal component analysis provides better-performing indicators than the indices produced by the Conference Board. However, the study cannot clearly show that confidence unambiguously improves forecasting once the lagged growth rate of GDP is added to the analysis.
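A minimal sketch of the kind of indicator construction and predictive regression described above (first principal component of the survey questions, then a linear forecasting equation) might look as follows in Python; the data and variable names are purely hypothetical placeholders, not the Conference Board series.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical survey balance-of-opinion questions and a GDP growth series.
rng = np.random.default_rng(1)
n_obs, n_questions = 160, 10
survey = pd.DataFrame(rng.standard_normal((n_obs, n_questions)),
                      columns=[f"q{i+1}" for i in range(n_questions)])
gdp_growth = pd.Series(0.4 + 0.3 * survey["q1"] + 0.5 * rng.standard_normal(n_obs),
                       name="growth")

# Confidence indicator = first principal component of the standardized responses.
z = StandardScaler().fit_transform(survey)
indicator = pd.Series(PCA(n_components=1).fit_transform(z).ravel(), name="pc1")

# Simple predictive regression: next-period growth on the indicator and lagged growth.
X = sm.add_constant(pd.concat([indicator, gdp_growth.shift(1).rename("lag_growth")], axis=1))
y = gdp_growth.shift(-1)                     # one-period-ahead target
print(sm.OLS(y, X, missing="drop").fit().summary())
```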
Tatsa, Sylvestre. "Modélisation et prévision de la consommation horaire d'électricité au Québec : comparaison de méthodes de séries temporelles." Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30329/30329.pdf.
This work explores the dynamics of residential electricity consumption in Quebec using hourly data from January 2006 to December 2010. We estimate three standard time series models: Holt-Winters exponential smoothing, the seasonal ARIMA model (SARIMA) and the seasonal ARIMA model with exogenous variables (SARIMAX). For the latter model, we focus on the effect of climate variables (temperature, relative humidity, dew point and cloud cover). Climatic factors have a significant impact on short-term electricity consumption. The in-sample and out-of-sample predictive performance of each model is evaluated with various goodness-of-fit indicators. Three out-of-sample horizons are tested: 24 hours (one day), 72 hours (three days) and 168 hours (one week). The SARIMA model provides the best out-of-sample predictive performance at the 24-hour horizon, while the SARIMAX model performs best at the 72- and 168-hour horizons. Additional research is needed to obtain predictive models that are fully satisfactory from a methodological point of view. Keywords: modeling, electricity, Holt-Winters, SARIMA, SARIMAX.
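For readers unfamiliar with the SARIMAX specification used here, the following Python sketch fits a seasonal ARIMA with an exogenous temperature regressor and produces a 24-hour-ahead forecast. The simulated hourly data and the (1,0,1)(1,1,1)_24 orders are illustrative assumptions, not the thesis's estimates.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder hourly data: the real Québec load and weather series are not reproduced here.
rng = np.random.default_rng(2)
idx = pd.date_range("2010-01-01", periods=24 * 30, freq="h")
temp = pd.Series(5 + 10 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 1, len(idx)),
                 index=idx, name="temperature")
load = 1000 - 15 * temp + 50 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 20, len(idx))

train, test = load[:-24], load[-24:]
exog_train, exog_test = temp[:-24], temp[-24:]

# SARIMA(1,0,1)(1,1,1)_24 with temperature as an exogenous regressor (a SARIMAX model).
model = SARIMAX(train, exog=exog_train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
res = model.fit(disp=False)
forecast = res.forecast(steps=24, exog=exog_test)   # 24-hour-ahead forecast
mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"24-hour MAPE: {mape:.2f}%")
```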
Bougas, Constantinos. "Forecasting Air Passenger Traffic Flows in Canada : An Evaluation of Time Series Models and Combination Methods." Thesis, Université Laval, 2013. http://www.theses.ulaval.ca/2013/30093/30093.pdf.
This master's thesis studies the Canadian air transportation sector, which has experienced significant growth over the past fifteen years. It provides short and medium term forecasts of the number of enplaned/deplaned air passengers in Canada for three geographical subdivisions of the market: domestic, transborder (US) and international flights. It uses various time series forecasting models: harmonic regression, Holt-Winters exponential smoothing, autoregressive-integrated-moving average (ARIMA) and seasonal autoregressive-integrated-moving average (SARIMA) regressions. In addition, it examines whether or not combining forecasts from each single model helps to improve forecasting accuracy. This last part of the study is done by applying two forecasting combination techniques: simple averaging and a variety of variance-covariance methods. Our results indicate that all models provide accurate forecasts, with MAPE and RMSPE scores below 10% on average. All adequately capture the main statistical characteristics of the Canadian air passenger series. Furthermore, combined forecasts from the single models always outperform those obtained from the single worst model. In some instances, they even dominate the forecasts from the single best model. Finally, these results should encourage the Canadian government, air transport authorities, and the airlines operating in Canada to use combination techniques to improve their short and medium term forecasts of passenger flows. Key Words: Air passengers, Forecast combinations, Time Series, ARIMA, SARIMA, Canada.
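The two combination techniques mentioned above can be sketched in a few lines of Python; the forecasts below are made-up numbers, and the variance-covariance weighting shown is the standard Bates-Granger scheme, of which the thesis's "variety of variance-covariance methods" are refinements.

```python
import numpy as np

# Hypothetical one-step-ahead forecasts from two single models (e.g. Holt-Winters
# and SARIMA) over a validation window, plus the realized values.
actual = np.array([100., 104., 103., 108., 112., 115.])
f1 = np.array([ 98., 105., 101., 109., 110., 117.])   # model 1 forecasts
f2 = np.array([102., 102., 105., 106., 113., 113.])   # model 2 forecasts

# Simple averaging.
simple = (f1 + f2) / 2

# Variance-covariance (Bates-Granger) weights: minimize the variance of the
# combined error given the sample covariance of the individual errors.
e = np.column_stack([actual - f1, actual - f2])
sigma = np.cov(e, rowvar=False)
ones = np.ones(2)
w = np.linalg.solve(sigma, ones)
w /= ones @ w                       # weights sum to one
combined = f1 * w[0] + f2 * w[1]

def rmspe(y, yhat):
    return np.sqrt(np.mean(((y - yhat) / y) ** 2)) * 100

for name, fc in [("model 1", f1), ("model 2", f2),
                 ("simple average", simple), ("var-cov weights", combined)]:
    print(f"{name:16s} RMSPE = {rmspe(actual, fc):.2f}%")
```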
Houndetoungan, Elysée Aristide. "Essays on Social Networks and Time Series with Structural Breaks." Doctoral thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/69494.
This dissertation, composed of three separate chapters, develops new econometric models for peer effects analysis and time series modelling. The first chapter (joint work with Professor Vincent Boucher) studies a method for estimating peer effects through social networks when researchers do not observe the network structure. We assume that researchers know (a consistent estimate of) the distribution of the network. We show that this assumption is sufficient for the estimation of peer effects using a linear-in-means model. We propose an instrumental variables estimator and a Bayesian estimator. We present and discuss important examples where our methodology can be applied. We also present an application with the widely used Add Health database, which presents many missing links. We estimate a model of peer effects on students' academic achievement. We show that our Bayesian estimator reconstructs these missing links and leads to a valid estimate of peer effects. In particular, we show that disregarding missing links underestimates the endogenous peer effect on academic achievement. In the second chapter, I present a structural model of peer effects in which the dependent variable is a count (number of cigarettes smoked, frequency of restaurant visits, frequency of participation in activities). The model is based on a static game with incomplete information in which individuals interact through a directed network and are influenced by their beliefs over the choices of their peers. I provide sufficient conditions under which the equilibrium of the game is unique. I show that using the standard linear-in-means spatial autoregressive (SAR) model or the SAR Tobit model to estimate peer effects on count variables generated from the game asymptotically underestimates the peer effects. The estimation bias decreases when the range of the dependent count variable increases. I estimate peer effects on the number of extracurricular activities in which students are enrolled. I find that increasing the number of activities in which a student's friends are enrolled by one implies an increase of 0.295 in the number of activities in which the student is enrolled, controlling for the endogeneity of the network. I also show that the peer effects are underestimated at 0.150 when ignoring the count nature of the dependent variable. The third chapter (joint work with Professor Arnaud Dufays and Professor Alain Coen) presents an approach to time series modelling. Change-point (CP) processes are a flexible approach to model long time series. Considering linear models, we propose a method to relax the assumption that a break triggers a change in all the model parameters. To do so, we first estimate the potential break dates exhibited by the series and then use a penalized likelihood approach to detect which parameters change. Because some segments in the CP regression can be small, we opt for a (nearly) unbiased penalty function, called the seamless-L0 (SELO) penalty function. We prove the consistency of the SELO estimator in detecting which parameters indeed vary over time, and we suggest using a deterministic annealing expectation-maximisation (DAEM) algorithm to deal with the multimodality of the objective function. Since the SELO penalty function depends on two tuning parameters, we use a criterion to choose the best tuning parameters and, as a result, the best model.
This new criterion has a Bayesian interpretation which makes it possible to assess the parameters' uncertainty as well as the model's uncertainty. Monte Carlo simulations highlight that the method works well for many time series models, including heteroskedastic processes. For a sample of 14 hedge fund (HF) strategies, using an asset-based style pricing model, we shed light on the promising ability of our method to detect the time-varying dynamics of risk exposures as well as to forecast HF returns.
Ouldali, Naïm. "Impact à moyen terme de l'implémentation du vaccin conjugué pneumococcique 13 valences en pédiatrie : analyse de séries chronologiques interrompues." Thesis, Université de Paris (2019-....), 2020. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=4437&f=28972.
Background. Due to serotype replacement, the long-term impact of pneumococcal conjugate vaccine (PCV) implementation remains to be evaluated. We aimed to assess, in children, the impact of PCV13 implementation on: (i) pneumococcal meningitis, (ii) community-acquired pneumonia (CAP), and (iii) antibiotic susceptibility of pneumococcal strains in nasopharyngeal carriage. Finally, we conducted a methodological systematic review of the literature on assessing the impact of PCV implementation. Methods. We used the quasi-experimental interrupted time series (ITS) design with data from three French surveillance systems: (i) the national network of pediatric bacterial meningitis (230 centres), (ii) the pediatric CAP network (8 pediatric emergency departments), and (iii) an ambulatory network of pneumococcal carriage (121 pediatricians). A segmented regression model with autoregressive errors was used, taking into account the pre-intervention time trend, seasonality and autocorrelation. The methodological systematic review included all studies assessing the impact of PCV implementation in children and adults, using PubMed, Embase, and the references of selected articles. Results. After a 38% (95% CI [20; 56]) decrease in pneumococcal meningitis incidence following PCV13 implementation in 2010 in France, a rebound was observed from January 2015, mainly linked to the emergence of non-PCV13 serotypes. The CAP rate also decreased significantly following PCV13 implementation (44% decrease, 95% CI [32; 56]), but a slight increase has been observed since June 2014. Regarding pneumococcal susceptibility in carriage, after a significant reduction of penicillin non-susceptibility following PCV13 implementation, a steady increase has been observed since January 2014. Finally, 377 studies from 2001 to 2018 were included in the systematic review. Among them, 296 (78.5%) used a before-after design, and only 69 (18.3%) used the ITS design. Conclusions. After an important impact of PCV13, the consequences of serotype replacement in France may vary between pneumococcal diseases. These findings may still evolve in the coming years, underlining the need for continuous active surveillance of these outcomes. Despite Cochrane recommendations, the use of ITS to assess the impact of PCVs remains largely infrequent worldwide and needs to be promoted to adequately analyze the complex evolution of this pathogen over time.
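A minimal sketch of the segmented-regression ITS design described in the Methods section might look as follows in Python, with simulated monthly incidence data standing in for the French surveillance series; the level-change (`post`) and slope-change (`time_since`) coefficients quantify the post-intervention impact.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated monthly incidence series around a vaccine introduction date.
rng = np.random.default_rng(3)
n = 120
t = np.arange(n)
intervention = 60                       # month of vaccine introduction
post = (t >= intervention).astype(float)
time_since = np.where(post == 1, t - intervention, 0)
month = t % 12
incidence = (10 + 0.02 * t - 3 * post - 0.05 * time_since
             + 1.5 * np.sin(2 * np.pi * month / 12)      # seasonality
             + rng.normal(0, 0.8, n))

# Segmented regression with seasonal terms and AR(1) errors.
X = pd.DataFrame({"trend": t, "post": post, "time_since": time_since,
                  "sin": np.sin(2 * np.pi * month / 12),
                  "cos": np.cos(2 * np.pi * month / 12)})
X = sm.add_constant(X)
model = sm.GLSAR(incidence, X, rho=1)   # AR(1) autocorrelated errors
res = model.iterative_fit(maxiter=10)
print(res.params[["post", "time_since"]])
```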
Himdi, Khalid El. "Séries chronologiques binaires avec récompenses : Applications à la modélisation en climatologie." Grenoble 1, 1986. http://tel.archives-ouvertes.fr/tel-00320012.
Madinier, Hubert. "Estimations conjointes de séries chronologiques interdépendantes." Paris 10, 1985. http://www.theses.fr/1985PA100027.
Picard, Nico. "Exposants de Lyapounov et séries chronologiques." Nancy 1, 1991. http://www.theses.fr/1991NAN10006.
Bahamonde, Natalia. "Estimation de séries chronologiques avec données manquantes." Paris 11, 2007. http://www.theses.fr/2007PA112115.
Francq, Christian. "Identification et minimalité dans les séries chronologiques." Montpellier 2, 1989. http://www.theses.fr/1989MON20210.
Zakoian, Jean-Michel. "Modèles autorégressifs à seuil de séries chronologiques." Paris 9, 1990. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1990PA090014.
El-Taib, El-Rafehi Ahmed. "Estimation des données manquantes dans les séries chronologiques." Montpellier 2, 1992. http://www.theses.fr/1992MON20239.
Soltane, Marius. "Statistique asymptotique de certaines séries chronologiques à mémoire." Thesis, Le Mans, 2020. http://cyberdoc-int.univ-lemans.fr/Theses/2020/2020LEMA1027.pdf.
This thesis is devoted to asymptotic inference for different time series models driven by a noise with memory. In these models, the least squares estimator is not consistent, and we consider other estimators. We begin by studying the almost-sure asymptotic properties of the maximum likelihood estimator of the autoregressive coefficient in an autoregressive process driven by a stationary Gaussian noise. We then present a statistical procedure to detect a change of regime within this model, taking inspiration from the classic case driven by a strong white noise. Then we consider an autoregressive model where the coefficients are random and have short memory. Here again, the least squares estimator is not consistent, and we correct the previous statistic in order to correctly estimate the parameters of the model. Finally, we study a new joint estimator of the Hurst exponent and the variance in a fractional Gaussian noise observed at high frequency, whose qualities are comparable to those of the maximum likelihood estimator.
Alexander, Miranda Abhilash. "Spectral factor model for time series learning." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209812.
Today's computerized processes generate massive amounts of streaming data.
In many applications, data is collected for modeling the processes. The process model is hoped to drive objectives such as decision support, data visualization, business intelligence, automation and control, pattern recognition and classification, etc. However, we face significant challenges in data-driven modeling of processes. Apart from the errors, outliers and noise in the data measurements, the main challenge is due to a large dimensionality, which is the number of variables each data sample measures. The samples often form a long temporal sequence called a multivariate time series where any one sample is influenced by the others.
We wish to build a model that will ensure robust generation, reviewing, and representation of new multivariate time series that are consistent with the underlying process.
In this thesis, we adopt a modeling framework to extract characteristics from multivariate time series that correspond to dynamic variation-covariation common to the measured variables across all the samples. Those characteristics of a multivariate time series are named its 'commonalities' and a suitable measure for them is defined. What makes the multivariate time series model versatile is the assumption regarding the existence of a latent time series of known or presumed characteristics and much lower dimensionality than the measured time series; the result is the well-known 'dynamic factor model'.
Original variants of existing methods for estimating the dynamic factor model are developed: The estimation is performed using the frequency-domain equivalent of the dynamic factor model named the 'spectral factor model'. To estimate the spectral factor model, ideas are sought from the asymptotic theory of spectral estimates. This theory is used to attain a probabilistic formulation, which provides maximum likelihood estimates for the spectral factor model parameters. Then, maximum likelihood parameters are developed with all the analysis entirely in the spectral-domain such that the dynamically transformed latent time series inherits the commonalities maximally.
The main contribution of this thesis is a learning framework using the spectral factor model. We term learning as the ability of a computational model of a process to robustly characterize the data the process generates for purposes of pattern matching, classification and prediction. Hence, the spectral factor model could be claimed to have learned a multivariate time series if the latent time series when dynamically transformed extracts the commonalities reliably and maximally. The spectral factor model will be used for mainly two multivariate time series learning applications: First, real-world streaming datasets obtained from various processes are to be classified; in this exercise, human brain magnetoencephalography signals obtained during various cognitive and physical tasks are classified. Second, the commonalities are put to test by asking for reliable prediction of a multivariate time series given its past evolution; share prices in a portfolio are forecasted as part of this challenge.
For both spectral factor modeling and learning, an analytical solution as well as an iterative solution are developed. While the analytical solution is based on low-rank approximation of the spectral density function, the iterative solution is based on the expectation-maximization algorithm. For the human brain signal classification exercise, a strategy for comparing similarities between the commonalities for various classes of multivariate time series processes is developed. For the share price prediction problem, a vector autoregressive model whose parameters are enriched with the maximum likelihood commonalities is designed. In both these learning problems, the spectral factor model gives commendable performance with respect to competing approaches.
Alj, Abdelkamel. "Contribution to the estimation of VARMA models with time-dependent coefficients." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209651.
In this thesis, we study the estimation of vector autoregressive moving average (VARMA) models with time-dependent coefficients and a time-dependent innovation covariance matrix. These models are called tdVARMA. The elements of the coefficient matrices and of the covariance matrix are deterministic functions of time depending on a small number of parameters. A first part of the thesis is devoted to the asymptotic properties of the Gaussian quasi-maximum likelihood estimator. The almost-sure convergence and the asymptotic normality of this estimator are proved under verifiable assumptions, in the case where the coefficients depend on time t but not on the series length n. Before that, we consider the asymptotic properties of estimators for fairly general non-stationary models and a general penalty function; we then apply these theorems by taking the penalty function to be the Gaussian likelihood function (Chapter 2). The asymptotic behaviour of the estimator when the model coefficients depend on both t and n is the subject of Chapter 3. In that case, we use a weak law of large numbers and a central limit theorem for martingale difference arrays, and we give conditions ensuring weak consistency and asymptotic normality. The main asymptotic results are illustrated by simulation experiments and by examples from the literature. The second part of the thesis is devoted to an algorithm for evaluating the exact likelihood function of a Gaussian tdVARMA(p, q) process. Our algorithm is based on the Cholesky factorization of a partitioned band matrix. The starting point is a multivariate generalization of Mélard (1982) for evaluating the exact likelihood of a univariate ARMA(p, q) model. We also use some results of Jonasson and Ferrando (2008) as well as the Matlab programs of Jonasson (2008) developed for the Gaussian likelihood of constant-coefficient VARMA models. Moreover, we deduce that the number of operations required to evaluate the likelihood, as a function of p, q and n, is approximately twice that of a constant-coefficient VARMA model. The implementation of the algorithm was validated by comparing its results with other well-known programs and software packages. VARMA models with time-dependent coefficients appear particularly well suited to the dynamics of some financial series, highlighting the time dependence of the parameters.
El Ghini, Ahmed. "Contribution à l'identification de modèles de séries temporelles." Lille 3, 2008. http://www.theses.fr/2008LIL30017.
This PhD dissertation consists of two parts dealing with the problems of identification and selection in econometrics. Two main topics are considered: (1) time series model identification using (inverse) autocorrelation and (inverse) partial autocorrelation functions; (2) estimation of the inverse autocorrelation function in the framework of nonlinear time series. The two parts are summarized below. In the first part of this work, we consider time series model identification using (inverse) autocorrelation and (inverse) partial autocorrelation functions. We construct statistical tests based on estimators of these functions and establish their asymptotic distribution. Using the Bahadur and Pitman approaches, we compare the performance of (inverse) autocorrelations and (inverse) partial autocorrelations in detecting the order of moving average and autoregressive models. Next, we study the identification of the inverse process of an ARMA model and its probabilistic properties. Finally, we characterize time reversibility by means of the dual and inverse processes. The second part is devoted to estimation of the inverse autocorrelation function in the framework of nonlinear time series. Under some regularity conditions, we study the asymptotic properties of empirical inverse autocorrelations for stationary and strongly mixing processes. We establish the consistency and the asymptotic normality of the estimators. Next, we consider the case of linear processes with GARCH errors and show by means of some examples that the standard formula can be misleading if the generating process is nonlinear. Finally, we apply our previous results to prove the asymptotic normality of the parameter estimates of weak moving average models. Our results are illustrated by Monte Carlo experiments and real data examples.
Della Penna, Gabriella. "Méthodes perturbatives et numériques pour l'étude des mouvements réguliers et chaotiques avec application à la mécanique céleste." Nice, 2001. http://www.theses.fr/2001NICE5611.
Bahri, El Mostafa. "L'identification automatique des processus ARIMA : une approche par système expert." Aix-Marseille 3, 1991. http://www.theses.fr/1991AIX32043.
The ARIMA approach is an important contribution to forecasting economic time series, but identifying such processes is a crucial task, both manually and automatically. We suggest that the expert system approach is an adequate solution for this problem. We have written a prototype in poss for this purpose, and we propose neural networks as a complementary technique for the automatic identification of series processes.
Gautier, Antony. "Modèles de séries temporelles à coefficients dépendants du temps." Lille 3, 2004. http://www.theses.fr/2004LIL30034.
Guégan, Dominique. "Modèles bilinéaires et polynomiaux de séries chronologiques : étude probabiliste et analyse statistique." Grenoble 1, 1988. http://tel.archives-ouvertes.fr/tel-00330671.
Nouira, Leïla. "Mémoire longue non stationnaire : estimations et applications." Aix-Marseille 2, 2006. http://www.theses.fr/2006AIX24006.
This thesis studies long memory in both the stationary and nonstationary cases, as well as fractional cointegration. After defining the two best-known models of long memory, fractional Gaussian noise and the ARFIMA(p, d, q) model, we examine methods for estimating the long memory parameter. We explore, by simulations, the asymptotic properties (consistency and asymptotic normality) of some estimators and compare the performance of some estimation methods in the stationary case (i.e. -1/2 < d < 1/2). However, many economic time series exhibit nonstationary behaviour (i.e. d ≥ 1/2), which motivated us to study this case. To estimate the long memory parameter there, we used the data tapering procedure suggested by Velasco (1999a). Since the shape of this taper depends on its order p', we studied, by simulations, the effect of this order on the asymptotic properties of some semiparametric estimators. We also showed that the optimal choice is p' = [d + 1/2] + 1. The thesis finally takes up fractional cointegration. An empirical application to the Tunisian economy was carried out in order to find a long-term relation between the real effective exchange rate and its fundamentals.
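As a hedged illustration of semiparametric estimation of the long memory parameter d discussed above, here is a plain GPH (log-periodogram) estimator in Python applied to a simulated ARFIMA(0, d, 0) series; the tapered estimators of Velasco (1999a) studied in the thesis are not implemented here.

```python
import numpy as np

def gph_estimate(x, bandwidth=0.5):
    """Geweke-Porter-Hudak log-periodogram regression for the memory parameter d.

    A standard semiparametric estimator of the kind compared in the thesis;
    no data tapering is applied in this sketch.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    m = int(n ** bandwidth)                      # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(2 * np.sin(freqs / 2))
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return beta[1]                               # slope = estimate of d

# Quick check on a long-memory series obtained by fractional integration (d = 0.3)
# of white noise via a truncated MA(inf) expansion.
rng = np.random.default_rng(4)
d_true, n = 0.3, 4096
eps = rng.standard_normal(n)
weights = np.cumprod(np.concatenate(([1.0], (d_true + np.arange(n - 1)) / np.arange(1, n))))
x = np.convolve(eps, weights)[:n]                # ARFIMA(0, d, 0) approximation
print("estimated d:", round(gph_estimate(x), 3))
```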
Ltaifa, Marwa. "Tests optimaux pour détecter les signaux faibles dans les séries chronologiques." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0189.
This thesis focuses on the construction of locally asymptotically optimal tests to detect breaks in the mean of Conditional Heteroskedastic AutoRegressive Nonlinear (CHARN) models described by the stochastic equation $X_t = T(Z_{t-1}) + \gamma^{\top}\omega(t) + V(Z_{t-1})\varepsilon_t$, $t \in \mathbb{Z}$, where $\gamma = (\gamma_1, \ldots, \gamma_k, \gamma_{k+1})^{\top} \in \mathbb{R}^{k+1}$, the break dates $t_1, \ldots, t_k$ satisfy $1 < t_1 < \cdots < t_k < n$, $V(x) > 0$ for all $x \in \mathbb{R}^p$, and $n$ is the number of observations. This model contains a large class of time series models such as AR, MA, ARMA, ARIMA, ARCH, etc. Attention is paid to small breaks, those which are difficult to observe with the naked eye, unlike those considered in the literature. Such a study does not appear to have already been carried out in the context of time series. The test studied is the likelihood ratio test of $H_0: \gamma = \gamma_0$ against $H^{(n)}_{\beta}: \gamma = \gamma_0 + \beta/\sqrt{n} = \gamma_n$, $n > 1$, for $\gamma_0 \in \mathbb{R}^{k+1}$ and $\beta \in \mathbb{R}^{k+1}$ characterizing respectively the situation where there is no break and that where there is at least one break to be found. The document is organized as follows. Chapter 1 constitutes the general introduction to the thesis, where some useful basic concepts and tools are recalled. Chapter 2 reviews the state of the art on the detection of breaks in time series; it is divided into two parts, the first concerning the estimation of breaks and their locations, the second concerning tests for the existence of break points. Chapter 3 deals with the case where the functions $T$ and $V$ are known, and the case where they are known but depend on unknown parameters. In the latter case, the situation where the parameter $\gamma_0$ is known and the one where it is unknown are both studied; when it is unknown, it is estimated by the maximum likelihood method. The study of the test relies essentially on the local asymptotic normality (LAN) property, as stated for example in Droesbeke (1996). Chapter 4 is a generalization of Chapter 3: the magnitude of the jump is now arbitrary and unknown, so one has to test $H_0: \gamma = \gamma_0$ against $H^{(n)} = \bigcup_{\beta \in \mathbb{R}^{k+1}} H^{(n)}_{\beta}$. A Cramér-von Mises type test is constructed, and techniques from Ngatchou-Wandji (2009) are used to derive the asymptotic distribution of the test under the alternative hypothesis. Chapter 5 presents numerical results obtained with the R software: results on simulated data are presented and commented first, followed by applications to several real datasets. Chapter 6 concludes the thesis and sets out some perspectives.
Cherkaoui, Abdelhai. "Modélisation des séries temporelles par des méthodes de décomposition et applications." Aix-Marseille 3, 1987. http://www.theses.fr/1987AIX24011.
Our work has centered on the study of decomposition methods for time series. This was done in four stages. First, we analyse classical methods of linear regression and of smoothing by moving averages. Secondly, we examine the decomposition method of an ARIMA model. In the third stage, we present a method based on the recursive Kalman algorithm. In the fourth stage, we illustrate our theoretical results and attempt to compare the Box-Jenkins method and the method of smoothing by moving averages.
Kouamo, Olaf. "Analyse des séries chronologiques à mémoire longue dans le domaine des ondelettes." Phd thesis, Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00565656.
Our work focuses on the statistics of long-memory processes, for which we propose and validate statistical tools based on wavelet analysis. In recent years, these methods for estimating the memory parameter have become very popular. However, rigorously validated theoretical results for these estimators in classical semiparametric long-memory models are recent (cf. the articles by E. Moulines, F. Roueff and M. Taqqu since 2007). The results we propose in this thesis are a direct extension of this work. We propose a test procedure for detecting changes in the generalized spectral density; in the wavelet domain, the test becomes a test of change in the variance of the wavelet coefficients. We then develop an algorithm for fast computation of the covariance matrix of the wavelet coefficients. Two applications of this algorithm are proposed: one to estimate d and the other to improve the test proposed in the previous chapter. Finally, we study robust estimators of the memory parameter in the wavelet domain, based on three estimators of the variance of the wavelet coefficients at a given scale. The major contribution of this chapter is the central limit theorem obtained for the three estimators in the context of Gaussian M(d) processes.
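The wavelet-domain approach can be illustrated by the textbook log-scale regression below (Python with PyWavelets): the memory parameter d is read off the slope of the log2 variance of detail coefficients across octaves. This is only a sketch of the general idea, not the robust estimators or the fast covariance algorithm developed in the thesis.

```python
import numpy as np
import pywt

def wavelet_memory_estimate(x, wavelet="db4", levels=(3, 8)):
    """Estimate the memory parameter d by regressing the log2 variance of
    wavelet detail coefficients on the octave index j.

    For a long-memory process, the coefficient variance grows roughly like
    2**(2*j*d) at coarse scales, so d is half the regression slope.
    """
    coeffs = pywt.wavedec(x, wavelet, level=levels[1])
    # coeffs = [approximation, detail_levelmax, ..., detail_level1]
    scales, log_var = [], []
    for j in range(levels[0], levels[1] + 1):
        d_j = coeffs[len(coeffs) - j]            # detail coefficients at octave j
        scales.append(j)
        log_var.append(np.log2(np.mean(d_j ** 2)))
    slope = np.polyfit(scales, log_var, 1)[0]
    return slope / 2

rng = np.random.default_rng(5)
print(wavelet_memory_estimate(rng.standard_normal(2 ** 14)))  # close to 0 for white noise
```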
Kouamo, Olaf. "Analyse des séries chronologiques à mémoire longue dans le domaine des ondelettes." Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00565656.
Guercin, François. "Un langage formel pour le traitement des chroniques." Aix-Marseille 1, 1990. http://www.theses.fr/1990AIX10012.
This thesis presents a formal approach to the study of time series. This approach led us to define time series as a function of the temporal organization of complex activities found in psychology, and then to characterize some descriptive statistical processing in terms of the selection and comparison of sub-series. Five formal operations suffice to account for selection on the set of sub-series, fitted with a Boolean lattice structure. Questions about how temporally coded data might be formatted and compared, and how formal operations might be used as a function of the psychological hypotheses formulated, are illustrated widely. Three processings are examined closely. Each one is studied in its mathematical frame. We then show how it allows the study of strategies understood as sequential combinations of procedures in different fields of psychology. In particular, in the field of text revision, an example is given of how the distance between time series can be used to measure the difference between a subject's production and a simulated production obtained by applying one of six strategies based on a few functional principles. Thus our formalisation has allowed us to develop new tools for descriptive statistical processing. These tools are suited to the formal requirements of the data and to the theoretical requirements of cognitive psychology.
Honobé, Hoang Erik. "Évaluation stratégique d'entreprises par méthodes neuronales." Paris 2, 2000. http://www.theses.fr/2000PA020104.
Assaad, Mohammad. "Un nouvel algorithme de boosting pour les réseaux de neurones récurrents : application au traitement des données sequentielles." Tours, 2006. http://www.theses.fr/2006TOUR4024.
This thesis proposes a new boosting algorithm dedicated to the problem of learning time dependencies for time series prediction, using recurrent neural networks as regressors. The algorithm is based on boosting and concentrates the training on difficult examples. A new parameter is introduced to regulate the influence of boosting. To evaluate the algorithm, systematic experiments were carried out on two types of time series prediction problems: single-step-ahead prediction and multi-step-ahead prediction. The results obtained on several reference series are close to the best results reported in the literature.
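The idea of concentrating training on difficult examples can be sketched with a generic boosting-style weight update; the snippet below is a hypothetical illustration using a feed-forward MLPRegressor in place of the recurrent networks, and its weighting rule is not the thesis's exact algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def boosted_forecaster(X, y, n_rounds=5, k=2.0):
    """Boosting-style ensemble for time series regression.

    Generic sketch: examples with large errors get larger sampling weights
    in the next round, so later models focus on hard examples.
    """
    n = len(y)
    weights = np.full(n, 1.0 / n)
    models, rng = [], np.random.default_rng(0)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, replace=True, p=weights)   # weighted resampling
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                             random_state=0).fit(X[idx], y[idx])
        models.append(model)
        err = np.abs(model.predict(X) - y)
        weights = weights * np.exp(k * err / err.max())         # emphasize hard examples
        weights /= weights.sum()
    return models

def ensemble_predict(models, X):
    return np.mean([m.predict(X) for m in models], axis=0)

# One-step-ahead prediction of a noisy sine from two lagged values (toy data).
t = np.arange(300)
series = np.sin(0.2 * t) + 0.1 * np.random.default_rng(1).standard_normal(300)
X = np.column_stack([series[:-2], series[1:-1]])
y = series[2:]
models = boosted_forecaster(X, y)
print("train RMSE:", np.sqrt(np.mean((ensemble_predict(models, X) - y) ** 2)))
```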
Degerine, Serge. "Fonction d'autocorrélation partielle et estimation autorégressive dans le domaine temporel." Grenoble 1, 1988. http://tel.archives-ouvertes.fr/tel-00243761.
Njimi, Hassane. "Mise en oeuvre de techniques de modélisation récentes pour la prévision statistique et économique." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210441.
Goldfarb, Bernard. "Etude structurelle des séries temporelles : les moyens de l'analyse spectrale." Paris 9, 1997. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1997PA090007.
Renard, Xavier. "Time series representation for classification : a motif-based approach." Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066593.
The research described in this thesis concerns the learning of a motif-based representation from time series to perform automatic classification. Meaningful information in time series can be encoded across time through trends, shapes or subsequences, usually with distortions. Approaches have been developed to overcome these issues, often at the price of high computational complexity. Among these techniques, it is worth pointing out distance measures and time series representations. We focus on the representation of the information contained in the time series. We propose a framework to generate a new time series representation to perform classical feature-based classification, based on the discovery of discriminant sets of time series subsequences (motifs). This framework transforms a set of time series into a feature space, using subsequences enumerated from the time series, distance measures and aggregation functions. One particular instance of this framework is the well-known shapelet approach. The potential drawback of such an approach is the large number of subsequences to enumerate, inducing a very large feature space and a very high computational complexity. We show that most subsequences in a time series dataset are redundant. Therefore, random sampling can be used to generate a very small fraction of the exhaustive set of subsequences, preserving the information necessary for classification and thus generating a much smaller feature space compatible with common machine learning algorithms and tractable computations. We also demonstrate that the number of subsequences to draw is not linked to the number of instances in the training set, which guarantees the scalability of the approach. Combining this with our framework enables us to take advantage of advanced techniques (such as multivariate feature selection) to discover richer motif-based time series representations for classification, for example by taking into account the relationships between the subsequences. These results have been extensively tested on more than one hundred classical benchmarks from the literature, with univariate and multivariate time series. Moreover, since this research was conducted in the context of an industrial research agreement (CIFRE) with ArcelorMittal, our work has been applied to the detection of defective steel products based on production-line sensor measurements.
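A minimal sketch of the random-subsequence (shapelet-like) transform described in this abstract is given below in Python: each feature is the minimum distance between a series and a randomly sampled subsequence, and the resulting table feeds a standard classifier. The sampling scheme, distances and aggregation functions are simplified assumptions relative to the thesis's framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def random_subsequence_features(series_set, n_motifs=50, length=20, seed=0):
    """Feature j = minimum Euclidean distance between a series and the j-th
    randomly sampled subsequence (a shapelet-like transform)."""
    rng = np.random.default_rng(seed)
    motifs = []
    for _ in range(n_motifs):
        s = series_set[rng.integers(len(series_set))]
        start = rng.integers(len(s) - length)
        motifs.append(s[start:start + length])
    feats = np.empty((len(series_set), n_motifs))
    for i, s in enumerate(series_set):
        windows = np.lib.stride_tricks.sliding_window_view(s, length)
        for j, m in enumerate(motifs):
            feats[i, j] = np.min(np.linalg.norm(windows - m, axis=1))
    return feats

# Toy two-class problem: sine-like versus square-like series.
rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 150)
X_series = [np.sin(t) + 0.2 * rng.standard_normal(150) for _ in range(30)] + \
           [np.sign(np.sin(t)) + 0.2 * rng.standard_normal(150) for _ in range(30)]
y = np.array([0] * 30 + [1] * 30)
F = random_subsequence_features(X_series)
clf = LogisticRegression(max_iter=1000).fit(F, y)
print("training accuracy:", clf.score(F, y))
```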
Gueguen, Lionel. "Extraction d'information et compression conjointes des séries temporelles d'images satellitaires." Paris, ENST, 2007. http://www.theses.fr/2007ENST0025.
Nowadays, new data containing interesting information can be produced: Satellite Image Time Series, which are observations of the evolution of the Earth's surface. These series constitute a huge data volume and contain complex types of information. For example, numerous spatio-temporal events, such as harvests or urban area expansion, can be observed in these series and serve for remote surveillance. In this framework, this thesis deals with automatic information extraction from Satellite Image Time Series, in order to help the comprehension of spatio-temporal events, and with their compression, in order to reduce storage space. Thus, this work aims to provide methodologies which jointly extract information from and compress these series. This joint processing provides a compact representation which contains an index of the informational content. First, the concept of joint extraction and compression is described, where information extraction is cast as a lossy compression of the information. Secondly, two methodologies are developed based on this concept. The first one provides an informational content index based on the Information Bottleneck principle. The second one provides a code, or compact representation, which integrates an informational content index. Finally, both methodologies are validated and compared on synthetic data, then successfully put into practice with Satellite Image Time Series.
Duchesne, Pierre. "Quelques contributions en théorie de l'échantillonnage et dans l'analyse des séries chronologiques multidimensionnelles." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ52102.pdf.
Boiron, Marie-Aurélie. "Modélisation phénoménologique de systèmes complexes non-linéaires à partir de séries chronologiques scalaires." Lyon 1, 2005. http://www.theses.fr/2005LYO10044.
Kodia, Banzouzi Bernédy Nel Messie. "Mesures de dépendance pour une modélisation alpha-stable : application aux séries chronologiques stables." Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1468/.
This thesis is a contribution to the study of dependence between heavy-tailed random variables, and especially symmetric α-stable random variables, through the introduction of a new coefficient of dependence: the signed symmetric covariation coefficient. We use this coefficient and the generalized association parameter introduced by Paulauskas (1976), in the context of time series, for the identification of stable MA and AR processes. In the first chapter, we give an overview of α-stable laws. We recall the basic concepts and some representations of the associated random variables in both the univariate and multivariate cases. The spectral measure carries all the information about the dependence structure of an α-stable random vector. Its form is given for two sub-families of laws: sub-Gaussian random vectors and linear combinations of independent random variables. Covariation and codifference are presented. We introduce the signed symmetric covariation coefficient in the second chapter. This coefficient has most of the properties of Pearson's correlation coefficient. In the case of sub-Gaussian random vectors, it coincides with the generalized association parameter. The consistency of the proposed estimators of these quantities is demonstrated. The results of a study of the asymptotic behavior of the estimators are presented. In the third chapter, we introduce the concepts of signed symmetric autocovariation and generalized auto-association for linear stationary processes. We use these coefficients for identifying the order of a stable MA process. We propose a statistic acting as a partial autocorrelation coefficient. We compare this statistic with the asymptotically invariant quadratic rank-based statistics used by Garel and Hallin (1999) for the identification of stable AR processes. A study of the results is carried out using simulations.
Barkaoui, Ahmed. "La désagrégation temporelle des séries d'observations économiques." Paris 10, 1995. http://www.theses.fr/1995PA100055.
This thesis treats the problem encountered in econometric modeling when temporal observations on certain variables are available only in a temporally aggregated form. The practical solution consists of estimating disaggregated series which are consistent with the observed data (temporal disaggregation procedures). This estimation can be done using the aggregated series alone or by using related series, if available, observed at the desired frequency. The analysis of the principal procedures used in statistical agencies or in econometric software has allowed us to propose, for a given problem, a selection criterion based on a priori knowledge of the time series and on statistical tests. The problem of temporal disaggregation is extended to the case of a vector of variables whose sum is observed at the disaggregated level. Two applications were performed: the first concerns the construction of quarterly series of productive investment (FBCF) by branch of activity, the second the estimation of monthly market GDP. If we assume that the disaggregated series can be generated by any completely specified dynamic model, new methods can solve the problem using the state-space representation and Kalman filtering and smoothing techniques.
Bates, Samuel. "Effectivité des canaux de transmission de la politique monétaire." Antilles-Guyane, 2006. http://merlin.u-picardie.fr/login?url=http://u-picardie.cyberlibris.fr/book/10195420.
While the theoretical channels of monetary policy transmission to the real sphere are well defined, empirical doubts concerning their intensity remain. The aim is thus to suggest a new procedure to measure the macroeconomic effectiveness of the transmission channels. It is based on the creation of a causal intensity coefficient that is useful whatever the series of interest. It makes it possible to rank, at short and long term, the intensity of the transmission mechanisms according to a hierarchical system. The identification of the determining factors in the change of the monetary policy effects on the real sphere then follows.
Ferrara, Laurent. "Processus longue mémoire généralisés : estimation, prévision et applications." Paris 13, 2000. http://www.theses.fr/2000PA132033.
Pollet, Arnaud. "Combinaison des techniques de géodésie spatiale : contributions aux réalisations des systèmes de référence et à la détermination de la rotation de la Terre." Observatoire de Paris (1667-....), 2011. https://hal.science/tel-02094987.
This PhD thesis deals with the combination of observations provided by the space geodetic techniques DORIS, GPS, SLR, and VLBI. These combinations are currently under investigation, especially in the framework of the IERS working group COL (Combination at the Observation Level). In order to obtain the best possible results with this approach, a homogeneous combined terrestrial frame is needed. Effort has also been made here to obtain the best possible realization of the terrestrial reference system. To achieve this goal, I have tested several approaches to combination at the observation level. A new combination model is proposed, which allows us to obtain a homogeneous frame. The contribution of the local ties between co-located stations and their impact on the homogeneity of the weekly combined frames are analysed too. To do that, I have adapted a GPS data processing by sub-networks in order to have a dense GPS network and a large number of co-located stations. To strengthen the links between the techniques, the use of common zenithal tropospheric delays and of spatial links via multi-technique satellites is studied, and I have proved their relevance. Finally, a combination at the observation level is performed for the year 2005. This work has also made it possible to obtain EOP and station position time series which take advantage of the processing consistency and of the best qualities, regarding temporal resolution and accuracy, of each technique used in the combination.
Bac, Catherine. "Saisonnalité et non stationnarité : une analyse en termes de séries temporelles avec applications à la boucle prix salaire et à la dynamique des stocks." Paris 1, 1994. http://www.theses.fr/1994PA010039.
This study concerns the analysis of seasonal problems with regard to the statistical analysis of time series and to economic as well as econometric modelling. Many economic time series are characterised by large seasonal variations. In the first part, unit root tests are reported. These tests are performed on American wage and price series. This application highlights some drawbacks of seasonal filtering. Lee's test is then extended for the application of seasonal cointegration to the monthly data case. The application to stocks and production variables for American industry made it possible to point out long-term equilibrium relationships for the textile industry. In the second part, we study periodic models. A non-stationarity test for a univariate series with periodic structure is studied. Estimation problems for a periodic structure in a multivariate model are also examined. The application to inventory series exhibits a seasonal behavior. The production smoothing hypothesis is reexamined in a third part, taking seasonal variations into account. The model is estimated with a maximum likelihood criterion, using the Kalman filter. However, the data do not validate this hypothesis. Finally, the various aspects examined in this study have contributed to outlining the information carried by seasonal movements.
Boné, Romuald. "Réseaux de neurones récurrents pour la prévision de séries temporelles." Tours, 2000. http://www.theses.fr/2000TOUR4003.
Ben Taieb, Souhaib. "Machine learning strategies for multi-step-ahead time series forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209234.
Historically, time series forecasting has been mainly studied in econometrics and statistics. In the last two decades, machine learning, a field that is concerned with the development of algorithms that can automatically learn from data, has become one of the most active areas of predictive modeling research. This success is largely due to the superior performance of machine learning prediction algorithms in many different applications as diverse as natural language processing, speech recognition and spam detection. However, there has been very little research at the intersection of time series forecasting and machine learning.
The goal of this dissertation is to narrow this gap by addressing the problem of multi-step-ahead time series forecasting from the perspective of machine learning. To that end, we propose a series of forecasting strategies based on machine learning algorithms.
Multi-step-ahead forecasts can be produced recursively by iterating a one-step-ahead model, or directly using a specific model for each horizon. As a first contribution, we conduct an in-depth study to compare recursive and direct forecasts generated with different learning algorithms for different data generating processes. More precisely, we decompose the multi-step mean squared forecast errors into the bias and variance components, and analyze their behavior over the forecast horizon for different time series lengths. The results and observations made in this study then guide us for the development of new forecasting strategies.
In particular, we find that choosing between recursive and direct forecasts is not an easy task since it involves a trade-off between bias and estimation variance that depends on many interacting factors, including the learning model, the underlying data generating process, the time series length and the forecast horizon. As a second contribution, we develop multi-stage forecasting strategies that do not treat the recursive and direct strategies as competitors, but seek to combine their best properties. More precisely, the multi-stage strategies generate recursive linear forecasts, and then adjust these forecasts by modeling the multi-step forecast residuals with direct nonlinear models at each horizon, called rectification models. We propose a first multi-stage strategy, that we called the rectify strategy, which estimates the rectification models using the nearest neighbors model. However, because recursive linear forecasts often need small adjustments with real-world time series, we also consider a second multi-stage strategy, called the boost strategy, that estimates the rectification models using gradient boosting algorithms that use so-called weak learners.
Generating multi-step forecasts using a different model at each horizon provides a large modeling flexibility. However, selecting these models independently can lead to irregularities in the forecasts that can contribute to increase the forecast variance. The problem is exacerbated with nonlinear machine learning models estimated from short time series. To address this issue, and as a third contribution, we introduce and analyze multi-horizon forecasting strategies that exploit the information contained in other horizons when learning the model for each horizon. In particular, to select the lag order and the hyperparameters of each model, multi-horizon strategies minimize forecast errors over multiple horizons rather than just the horizon of interest.
We compare all the proposed strategies with both the recursive and direct strategies. We first carry out a bias and variance study, then we evaluate the different strategies using real-world time series from two past forecasting competitions. For the rectify strategy, in addition to avoiding the choice between recursive and direct forecasts, the results demonstrate that it performs better than, or at least close to, the best of the recursive and direct forecasts in different settings. For the multi-horizon strategies, the results emphasize the decrease in variance compared to single-horizon strategies, especially with linear or weakly nonlinear data generating processes. Overall, we found that the accuracy of multi-step-ahead forecasts based on machine learning algorithms can be significantly improved if an appropriate forecasting strategy is used to select the model parameters and to generate the forecasts.
Lastly, as a fourth contribution, we have participated in the Load Forecasting track of the Global Energy Forecasting Competition 2012. The competition involved a hierarchical load forecasting problem where we were required to backcast and forecast hourly loads for a US utility with twenty geographical zones. Our team, TinTin, ranked fifth out of 105 participating teams, and we have been awarded an IEEE Power & Energy Society award.
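To make the recursive/direct distinction discussed in this abstract concrete, here is a minimal Python sketch of the two baseline strategies with linear models on toy data; the rectify, boost and multi-horizon strategies developed in the dissertation are not implemented here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def make_lagged(series, n_lags):
    # X[t] = (y_t, ..., y_{t+p-1}), target = y_{t+p}
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

def recursive_forecast(series, n_lags, horizon):
    """One-step model iterated: predictions are fed back as inputs."""
    X, y = make_lagged(series, n_lags)
    model = LinearRegression().fit(X, y)
    window, preds = list(series[-n_lags:]), []
    for _ in range(horizon):
        yhat = model.predict(np.array(window[-n_lags:]).reshape(1, -1))[0]
        preds.append(yhat)
        window.append(yhat)
    return np.array(preds)

def direct_forecast(series, n_lags, horizon):
    """A separate model per horizon h, trained to predict y_{t+h} from the lags."""
    preds = []
    for h in range(1, horizon + 1):
        X = np.column_stack([series[i:len(series) - n_lags - h + 1 + i]
                             for i in range(n_lags)])
        y = series[n_lags + h - 1:]
        model = LinearRegression().fit(X, y)
        preds.append(model.predict(series[-n_lags:].reshape(1, -1))[0])
    return np.array(preds)

rng = np.random.default_rng(2)
t = np.arange(400)
series = np.sin(0.1 * t) + 0.2 * rng.standard_normal(400)
print("recursive:", recursive_forecast(series, n_lags=5, horizon=4).round(3))
print("direct   :", direct_forecast(series, n_lags=5, horizon=4).round(3))
```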