Academic literature on the topic 'Horizons de prédiction variable'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Horizons de prédiction variable.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Horizons de prédiction variable"

1

Le Losq, Charles, and Matthieu Micoulaut. "Simuler le verre." Reflets de la physique, no. 74 (December 2022): 34–38. http://dx.doi.org/10.1051/refdp/202274034.

Full text
Abstract:
Simulating the properties of glasses and glass-forming melts is a fundamental need for solving various scientific and industrial problems, but also for better describing the glass transition, a phenomenon whose complete understanding still eludes us. Among the methods for predicting material properties, molecular dynamics simulations (classical or ab initio) provide a large body of knowledge and allow a better understanding of the formation and properties of glasses. Machine learning can now support these simulations and also put to use the many existing experimental measurements. It thus opens new horizons for the understanding and use of glass in many fields, from industry to volcanology.
APA, Harvard, Vancouver, ISO, and other styles
2

Abdelkader, Bougara, Ezziane Karim, and Kadri Abdelkader. "Prédiction des résistances du ciment au laitier durcissant sous une temperature variable." Canadian Journal of Civil Engineering 28, no. 4 (August 1, 2001): 555–61. http://dx.doi.org/10.1139/l01-017.

Full text
Abstract:
The prediction of concrete strength has become a major concern that is forcing the construction industry to look closely at determining the appropriate time to strip the formwork or to apply prestress forces to new concrete. Normal concrete has different constitutions and can be subjected to different curing methods depending on the means available. Its characteristics are defined by the presence of mineral additives used to improve its efficiency. This has led us to establish a work plan to predict the strength of slag concrete, tested at different temperatures, from the data obtained from some control specimens of normal concrete, made only from clinker and subjected to a constant temperature of 20°C. The slag material was obtained from El-Hadjar (Algeria). Key words: slag, activation, temperature, fineness, thermal treatment, prediction, equivalent time, mortar, compression, cement, additives.
APA, Harvard, Vancouver, ISO, and other styles
3

Chachuat, B., N. Roche, and M. A. Latifi. "Réduction du modèle ASM 1 pour la commande optimale des petites stations d'épuration à boues activées." Revue des sciences de l'eau 16, no. 1 (April 12, 2005): 5–26. http://dx.doi.org/10.7202/705496ar.

Full text
Abstract:
The adoption of more stringent discharge standards by the European Union calls for better management of wastewater treatment plants. Using dynamic simulation models within closed-loop control schemes is an attractive way to address this problem. Starting from the ASM 1 model, a reduced model is derived here for the alternating-aeration activated sludge process, with a view to the optimal control of the aeration system. Two types of simplification are considered: (i) the slow dynamics of the system are identified by means of a homotopy method and then eliminated from the model; (ii) more heuristic simplifications, consisting of considering a single organic compound and eliminating the concentration of nitrogenous organic compounds, are then applied. They lead to a simplified model with 5 state variables. A parameter identification procedure then demonstrates that the dynamic behavior of the simplified model agrees well with that of the ASM 1 model over a prediction horizon of several hours, even when the influent concentrations are unknown. It is also verified that the proposed model is observable and structurally identifiable, under aerobic and anoxic conditions, from online measurements of the dissolved oxygen, ammonia, and nitrate concentrations. The simplified model thus has all the properties required for future use within closed-loop control schemes for the optimal control of small activated sludge wastewater treatment plants.
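The identification step described above can be sketched in a few lines: fit the parameters of a reduced model by output-error minimization against a measured trajectory. The toy below uses a one-state dissolved-oxygen balance, not the paper's 5-variable model; all names and values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

S_SAT = 9.0  # dissolved-oxygen saturation (mg/L), assumed known here

def simulate(params, t_eval, s0=2.0):
    """Toy reduced model: dS/dt = kLa*(S_sat - S) - OUR."""
    kla, our = params
    sol = solve_ivp(lambda t, s: kla * (S_SAT - s) - our,
                    (t_eval[0], t_eval[-1]), [s0], t_eval=t_eval)
    return sol.y[0]

t = np.linspace(0.0, 2.0, 50)                 # hours
rng = np.random.default_rng(2)
y_meas = simulate((4.0, 20.0), t) + rng.normal(0, 0.05, t.size)  # noisy DO data

# output-error fit of (kLa, OUR) to the measured trajectory
fit = least_squares(lambda p: simulate(p, t) - y_meas,
                    x0=(2.0, 10.0), bounds=([0.1, 0.0], [20.0, 50.0]))
print("identified (kLa, OUR):", np.round(fit.x, 2))
```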
APA, Harvard, Vancouver, ISO, and other styles
4

Finkelhor, David, Anne Shattuck, Heather Turner, and Sherry Hamby. "La polyvictimisation comme facteur de risque de revictimisation sexuelle." Criminologie 47, no. 1 (March 25, 2014): 41–58. http://dx.doi.org/10.7202/1024006ar.

Full text
Abstract:
The objective was to test the hypothesis that general exposure to victimization, or multiple victimization, explains a frequent research finding: sexual victimization increases the risk of subsequent sexual victimization. The study uses data from two waves of the National Survey of Children's Exposure to Violence (NatSCEV), conducted in 2008 and 2010. The NatSCEV is a telephone survey of a representative sample of children in the United States whose households were selected by random-digit dialing. The present analysis covers the 1,186 children who participated in both waves and who were aged 10 to 17 at Wave 1. The total number of Wave 1 victimizations was the best predictor of sexual victimization at Wave 2. Wave 1 sexual victimization made no independent contribution once other, non-sexual victimizations were included in the prediction. Future research on the prediction of sexual victimization and on repeat sexual victimization should therefore also include and control for a wide range of other, non-sexual victimizations.
APA, Harvard, Vancouver, ISO, and other styles
5

Carter, R. E., and L. E. Lowe. "Lateral variability of forest floor properties under second-growth Douglas-fir stands and the usefulness of composite sampling techniques." Canadian Journal of Forest Research 16, no. 5 (October 1, 1986): 1128–32. http://dx.doi.org/10.1139/x86-197.

Full text
Abstract:
Lateral variability of forest floor physical and chemical properties is examined in LF and H horizons under six naturally regenerated, second-growth Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) stands in coastal southwestern British Columbia. The number of samples required to predict a mean value at two confidence levels (P = 0.01 and 0.05) and two allowable errors (10 and 20%) are given for each variable. Total C, N, P, S, Zn, pH, and lipids were the least variable, requiring 2–13 samples to estimate a plot mean with a 10% allowable error at the 95% confidence level in LF horizons and 3–51 samples per plot in H horizons. Total K, Cu, and Mn were found to have moderately high lateral variability, while total Ca, Mn, Al, and Fe all required large numbers of samples to estimate the plot mean. In the second part of the paper, composite samples weighted by field depth and bulk density are compared with the depth and bulk density–weighted arithmetic mean of subsamples analyzed individually. Values from analysis of composite samples were within one standard deviation of the mean, with the exception of P and Cu in the LF horizons and lipids in both horizons. Composite values and mean values were significantly correlated across the six sites for all variables except lipids in LF horizons and total C and Mn in both horizons. Composite samples are suggested to provide an adequate estimate of the mean value of subsamples analyzed individually for most purposes and, for some variables (i.e., Ca, Fe, Al, and Mn), the only feasible method of obtaining an estimate of the mean.
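The sample numbers quoted above follow from the standard sample-size formula n = (t * CV / E)^2, iterated because the t value depends on n through the degrees of freedom. A minimal sketch with illustrative inputs, not the paper's data:

```python
import numpy as np
from scipy.stats import t

def samples_needed(cv, allowable_error, conf=0.95, n0=30, iters=20):
    """Iterate n = (t * CV / E)^2, where t depends on n via df = n - 1.

    cv: coefficient of variation (s / mean); allowable_error: e.g. 0.10 for 10%.
    """
    n = n0
    for _ in range(iters):
        t_crit = t.ppf(1 - (1 - conf) / 2, df=max(n - 1, 1))
        n = (t_crit * cv / allowable_error) ** 2
    return int(np.ceil(n))

# e.g. a property with 15% lateral variability, 10% allowable error, 95% confidence
print(samples_needed(cv=0.15, allowable_error=0.10))
```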
APA, Harvard, Vancouver, ISO, and other styles
6

Camus, F., M. Gabsi, and B. Multon. "Prédiction des vibrations du stator d'une machine à réluctance variable en fonction du courant absorbé." Journal de Physique III 7, no. 2 (February 1997): 387–404. http://dx.doi.org/10.1051/jp3:1997129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Acevedo-Sandoval, Otilio A., Francisco Prieto-García, Judith Prieto-Méndez, Yolanda Marmolejo-Santillán, and Claudia Romo-Gómez. "Chemical weathering in hardened volcanic horizons (tepetates) of the State of Mexico." Revista Mexicana de Ciencias Geológicas 39, no. 2 (July 26, 2022): 116–27. http://dx.doi.org/10.22201/cgeo.20072902e.2022.2.1644.

Full text
Abstract:
Weathering is one of the most important phenomena that affect the balance dynamics of the Earth's crust. The chemical composition of soil samples and hardened horizons from seven profiles (P1 to P7) of the State of Mexico was compared to detect the degree of alteration by means of weathering indices. In hardened horizons, the dominant elements are SiO2, Al2O3, Na2O, K2O and TiO2, corresponding to 86.8 % of the total oxides. The weathering indices based on the mobility and immobilization of alkaline and alkaline earth elements reveal that the B horizons and the hardened horizons of profiles P4, P5, P6 and P7 are generally more altered than the surface horizons, therefore they have a greater pedogenic development. Profiles P1, P2 and P3 show incipient weathering. The geochemical indices and chemical relationships used in this study to evaluate weathering and associated basic alteration processes showed consistent results. These coincide in indicating incipient to moderate weathering acting in the seven profiles, with variable intensity in all the hardened horizons. The intensity variation defines a sequence of chemical weathering for the hardened horizons: P6 > P4 > P5 > P7 > P2 > P3 > P1, where P6 has the highest degree of weathering according to the indices CIA, CPA, CIA-K, CIW, PIA, IPark, and V.
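As an illustration of one of the indices named above, the Chemical Index of Alteration is computed from molar oxide proportions as CIA = 100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O), with CaO* the silicate-bound calcium. A minimal sketch; the sample composition below is made up, not the paper's:

```python
# molar masses of the relevant oxides (g/mol)
M = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt):
    """Chemical Index of Alteration from oxide wt%; wt['CaO'] must already be
    the silicate-bound fraction (CaO*)."""
    mol = {ox: wt[ox] / M[ox] for ox in M}
    return 100.0 * mol["Al2O3"] / (mol["Al2O3"] + mol["CaO"] + mol["Na2O"] + mol["K2O"])

# made-up hardened-horizon composition (wt%)
print(round(cia({"Al2O3": 16.2, "CaO": 2.1, "Na2O": 3.4, "K2O": 2.8}), 1))
```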
APA, Harvard, Vancouver, ISO, and other styles
8

Goldstein, Benjamin A., Michael J. Pencina, Maria E. Montez-Rath, and Wolfgang C. Winkelmayer. "Predicting mortality over different time horizons: which data elements are needed?" Journal of the American Medical Informatics Association 24, no. 1 (June 29, 2016): 176–81. http://dx.doi.org/10.1093/jamia/ocw057.

Full text
Abstract:
Objective: Electronic health records (EHRs) are a resource for “big data” analytics, containing a variety of data elements. We investigate how different categories of information contribute to prediction of mortality over different time horizons among patients undergoing hemodialysis treatment. Material and Methods: We derived prediction models for mortality over 7 time horizons using EHR data on older patients from a national chain of dialysis clinics linked with administrative data using LASSO (least absolute shrinkage and selection operator) regression. We assessed how different categories of information relate to risk assessment and compared discrete models to time-to-event models. Results: The best predictors used all the available data (c-statistic ranged from 0.72–0.76), with stronger models in the near term. While different variable groups showed different utility, exclusion of any particular group did not lead to a meaningfully different risk assessment. Discrete time models performed better than time-to-event models. Conclusions: Different variable groups were predictive over different time horizons, with vital signs most predictive for near-term mortality and demographic and comorbidities more important in long-term mortality.
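A hedged sketch of the study's general setup, not its data or code: one L1-penalized (LASSO-style) logistic model is fitted per discrete prediction horizon, and discrimination is compared via the c-statistic (AUC). The synthetic data and all parameter choices are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 5000, 60
X = rng.normal(size=(n, p))                  # stand-in for EHR data elements
risk = X[:, 0] + 0.5 * X[:, 1]               # latent mortality risk

for horizon_days in (30, 180, 365):          # one discrete model per horizon
    # toy assumption: the signal is noisier at longer horizons
    scale = 1.0 + horizon_days / 365.0
    y = (risk + rng.logistic(scale=scale, size=n) > 2.0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"horizon {horizon_days:>3} d: c-statistic = {auc:.3f}")
```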
APA, Harvard, Vancouver, ISO, and other styles
9

Jong, E. de, D. F. Acton, and H. B. Stonehouse. "Estimating the Atterberg Limits of Southern Saskatchewan Soils from Texture and Carbon Contents." Canadian Journal of Soil Science 70, no. 4 (November 1, 1990): 543–54. http://dx.doi.org/10.4141/cjss90-057.

Full text
Abstract:
The soil water contents at the liquid and plastic limits (the Atterberg limits) are widely used in the classification of soils for engineering purposes. Approximately 500 soil samples (129 Ap horizons and 417 B and C horizons) collected over several years as part of the ongoing soil survey program in Saskatchewan were analyzed for texture and Atterberg limits. On about half of the samples water retention (−33 kPa and −1500 kPa matric potential and air dryness), and organic and inorganic C were also determined. The relationship between the Atterberg limits and soil properties was explored through correlation and regression analysis. Clay and organic matter content explained most of the observed variation in the Atterberg limits of the Ap horizons. Clay was the most important independent variable in the B and C horizons, while inorganic C had only a relatively small impact. Key words: Atterberg limits, texture, organic and inorganic C
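A minimal sketch of the kind of regression reported above: the liquid limit modeled from clay and organic-carbon content by ordinary least squares. Coefficients and data are synthetic placeholders, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
clay = rng.uniform(5, 60, n)                 # % clay
org_c = rng.uniform(0.2, 4.0, n)             # % organic C
liquid_limit = 12.0 + 0.55 * clay + 3.0 * org_c + rng.normal(0, 2.5, n)

# design matrix [1, clay, org_c]; fit LL = b0 + b1*clay + b2*orgC
A = np.column_stack([np.ones(n), clay, org_c])
coef, *_ = np.linalg.lstsq(A, liquid_limit, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((liquid_limit - pred) ** 2) / np.sum((liquid_limit - liquid_limit.mean()) ** 2)
print("b0, b1, b2 =", np.round(coef, 2), " R^2 =", round(r2, 3))
```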
APA, Harvard, Vancouver, ISO, and other styles
10

Orlova, L. A., and V. A. Panychev. "The Reliability of Radiocarbon Dating Buried Soils." Radiocarbon 35, no. 3 (1993): 369–77. http://dx.doi.org/10.1017/s0033822200060379.

Full text
Abstract:
Variable 14C ages of paleosol organic matter (OM) cause difficulties in interpreting 14C data. We attempt to determine the reliability of OM 14C dates by examining different carbon-containing materials from soil horizons and paleosol fractions.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Horizons de prédiction variable"

1

Amor, Yasmine. "Intelligent approach for traffic congestion prediction." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMR129.

Full text
Abstract:
Traffic congestion presents a critical challenge to urban areas, as the volume of vehicles continues to grow faster than the road network's overall capacity. This growth impacts economic activity, environmental sustainability, and overall quality of life. Although strategies for mitigating traffic congestion have improved over the past few decades, many cities still struggle to manage it effectively. While various models have been developed to tackle this issue, existing approaches often fall short in providing real-time, localized predictions that can adapt to complex and dynamic traffic conditions. Most rely on fixed prediction horizons and lack the intelligent infrastructure needed for flexibility. This thesis addresses these gaps by proposing an intelligent, decentralized, infrastructure-based approach for traffic congestion estimation and prediction. We start by studying traffic estimation: we examine the possible congestion measures and the data sources required for the different contexts that may be studied, and we establish a three-dimensional relationship between these axes. A rule-based system is developed to recommend to researchers and traffic operators the most appropriate congestion measures for the specific context under study. We then proceed to traffic prediction, introducing our DECentralized COngestion esTimation and pRediction model using Intelligent Variable Message Signs (DECOTRIVMS). This infrastructure-based model employs intelligent variable message signs (VMSs) to collect real-time traffic data and provide short-term congestion predictions with variable prediction horizons. We use Graph Attention Networks (GATs) because of their ability to capture complex relationships and handle graph-structured data, which makes them well suited to modeling the interactions between the road segments studied. In addition to GATs, we employ online learning methods, specifically Stochastic Gradient Descent (SGD) and ADAptive GRAdient Descent (ADAGRAD). While these methods have been used successfully in various other domains, their application to traffic congestion prediction remains under-explored; we aim to bridge that gap by exploring their effectiveness for real-time traffic congestion forecasting. Finally, we validate the model's effectiveness through two case studies conducted in Muscat, Oman, and Rouen, France. A comprehensive comparative analysis evaluates various prediction techniques, including GATs, Graph Convolutional Networks (GCNs), SGD, and ADAGRAD. The results underscore the potential of DECOTRIVMS for accurate and effective traffic congestion prediction across diverse urban contexts.
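To give a flavor of the online-learning side of such a scheme (this is a generic sketch, not DECOTRIVMS): a linear predictor trained by plain stochastic gradient descent that keeps one weight vector per prediction horizon, so the horizon can be varied per query. All names and parameters are illustrative assumptions.

```python
import numpy as np

class OnlineHorizonPredictor:
    """One SGD-trained linear model per prediction horizon (in time steps)."""

    def __init__(self, n_features, horizons=(1, 3, 6), lr=0.01):
        self.lr = lr
        self.w = {h: np.zeros(n_features + 1) for h in horizons}  # +1 bias weight

    def _phi(self, x):
        return np.append(np.asarray(x, dtype=float), 1.0)  # features plus bias

    def predict(self, x, horizon):
        return float(self.w[horizon] @ self._phi(x))

    def update(self, x_past, y_now, horizon):
        """One SGD step, run once the target observed `horizon` steps
        after x_past becomes available."""
        phi = self._phi(x_past)
        err = self.w[horizon] @ phi - y_now
        self.w[horizon] -= self.lr * err * phi  # gradient of 0.5 * err**2

# toy usage: x could hold recent speed/occupancy readings from a VMS segment
rng = np.random.default_rng(0)
model = OnlineHorizonPredictor(n_features=4)
for _ in range(500):
    x = rng.normal(size=4)
    y_future = 2.0 * x[0] - x[2] + rng.normal(scale=0.1)  # synthetic target
    model.update(x, y_future, horizon=3)
print(model.predict(rng.normal(size=4), horizon=3))
```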
APA, Harvard, Vancouver, ISO, and other styles
2

Parrang, Sylvain. "Prédiction du niveau de bruit aéroacoustique d'une machine haute vitesse à reluctance variable." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN044/document.

Full text
Abstract:
Due to its simple construction and robustness, the switched reluctance machine (SRM) is well suited to high rotation rates. SRM applications are however quite rare, mainly because of the high level of noise this machine produces. First, this work characterizes the noise emitted by the studied SRM at high rotation rates. In accordance with the common understanding, it was shown that the noise emitted at high rotation rates is dominated by aeroacoustic noise, i.e., the sound emission arising from the aerodynamic phenomena located in the air gap of the machine. Chapter two is concerned with the implementation of a quantitative estimation method for the aeroacoustic noise level of the studied SRM. Aeroacoustic noise has not yet been studied quantitatively for electrical machines; conversely, the literature on the aeroacoustic noise of rotating machinery (turbojets, fans, ...) is abundant. Consequently, this study draws on rotating machinery to build an aeroacoustic noise estimation method for the SRM. This estimation tool is based on a two-dimensional computational fluid dynamics (CFD) calculation of the turbulent flow in the air gap. Estimated noise levels are then compared with experimental data: the emitted noise level is calculated and measured for two distinct rotor geometries over a wide range of rotation rates. The calculation assumptions are validated by the consistency between the experimental and numerical results; as expected, the 2D CFD simulation overestimates the noise level, highlighting the limits of the two-dimensional calculation. Finally, the fourth chapter uses the aeroacoustic noise estimation tool to study the influence of the geometrical parameters of an SRM on its noise emission level.
APA, Harvard, Vancouver, ISO, and other styles
3

Hmamouche, Youssef. "Prédiction des séries temporelles larges." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0480.

Full text
Abstract:
Nowadays, storage and data processing systems are expected to store and process large time series. As the number of observed variables increases very rapidly, their prediction becomes more and more complicated, and using all the variables poses problems for classical prediction models. Univariate prediction models were among the first prediction models; to improve them, the use of multiple variables has become common, and multivariate models are increasingly used because they take more information into account. With the growth of interrelated data, however, the application of multivariate models also becomes questionable, because using all the available information does not necessarily lead to the best predictions. The challenge in this situation is therefore to find the most relevant factors, among all the available data, with respect to a target variable. In this thesis, we study this problem through a detailed analysis of the approaches proposed in the literature, addressing the problem of prediction and dimensionality reduction for massive data, and we discuss these approaches in the context of Big Data. The proposed approaches show promising and very competitive results compared to well-known algorithms, and improve the accuracy of the predictions on the data used. We then present our contributions and propose a complete methodology for the prediction of wide time series. We also extend this methodology to very large data via distributed computing and parallelism, with an implementation of the proposed prediction process in the Hadoop/Spark environment.
APA, Harvard, Vancouver, ISO, and other styles
4

Chagneau, Pierrette. "Modélisation bayésienne hiérarchique pour la prédiction multivariée de processus spatiaux non gaussiens et processus ponctuels hétérogènes d'intensité liée à une variable prédite : application à la prédiction de la régénération en forêt tropicale humide." Montpellier 2, 2009. http://www.theses.fr/2009MON20157.

Full text
Abstract:
One of the weak points of spatially explicit forest dynamics models is the modeling of recruitment. Classically, ecologists assume that recruitment depends mainly on both the spatial pattern of mature trees and the environment. A detailed inventory of the stand and of the environmental conditions enabled the effects of these two factors on the local density of seedlings to be shown. In practice, however, collecting such data is costly and cannot be done at a large scale: only a part of the seedlings is sampled and the environment is only partially observed. The aim here is to propose an approach for predicting the spatial distribution and genotype of the seedlings on the basis of a reasonable sampling of seedlings, mature trees, and environmental conditions. The spatial pattern of the seedlings is assumed to be a realization of a marked point process, the marks being the genotypes. The intensity of the process reflects the dispersal mechanisms underlying the spatial organization and genetic diversity of the seedlings, and it depends on seed survival, which itself depends on the environmental conditions; the environment must therefore be predicted over the whole study area. The environment, represented by a multivariate random field, is predicted using a spatial hierarchical model capable of handling variables of different natures simultaneously. Unlike existing models, which assume the environmental covariates to be exactly known, the proposed recruitment model takes into account the error related to the prediction of the environment. The method is applied to the prediction of seedling recruitment in tropical rainforest (French Guiana).
APA, Harvard, Vancouver, ISO, and other styles
5

Shimagaki, Kai. "Advanced statistical modeling and variable selection for protein sequences." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS548.

Full text
Abstract:
Over the last few decades, protein sequencing techniques have been developed and experiments have been carried out continuously. Thanks to all of these efforts, we have now obtained more than two hundred million protein sequence records. In order to deal with such a huge amount of biological data, we need theories and technologies to extract information that we can understand and interpret. The key ideas for resolving this problem come from statistical physics and the state of the art of machine learning (ML). Statistical physics is a field of physics that can successfully describe many complex systems by extracting or reducing variables into interpretable variables based on simple principles. ML, on the other hand, can represent data (for example, by reconstructing or classifying it) without assuming how the data were generated, i.e., the physical phenomenon behind the data. In this dissertation, we report studies of generative modeling of protein sequences and of protein-residue contact prediction using statistical-physics-inspired modeling and ML-oriented methods. In the first part, we review the general background of biology and genomics and then discuss statistical modeling of protein sequences. In particular, we review Direct Coupling Analysis (DCA), which is the core technology of our research. We also discuss the effects of higher-order statistics contained in protein sequences and introduce deep-learning-based generative models as models that can go beyond pairwise interactions.
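For orientation, the mean-field flavor of DCA infers couplings as the negative inverse of the (regularized) covariance matrix of the encoded alignment, J = -C^(-1). The sketch below runs this on a drastically simplified binary toy alignment, not the 21-state protein version; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(9)
# toy "alignment": 500 binary sequences of length 12 (1 = mutated site)
msa = rng.integers(0, 2, size=(500, 12)).astype(float)
# make sites 2 and 7 co-vary most of the time, mimicking a structural contact
flip = rng.random(500) < 0.9
msa[flip, 7] = msa[flip, 2]

C = np.cov(msa, rowvar=False) + 1e-2 * np.eye(12)  # regularized covariance
J = -np.linalg.inv(C)                              # mean-field couplings
np.fill_diagonal(J, 0.0)
i, j = np.unravel_index(np.argmax(np.abs(J)), J.shape)
print("strongest inferred coupling:", (i, j))      # expect sites 2 and 7
```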
APA, Harvard, Vancouver, ISO, and other styles
6

Naud, Hélène. "Prédire le comportement suicidaire des détenus avec le Suicide Probability Scale et des variables actuarielles." Thesis, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/2787.

Full text
Abstract:
The problem of suicide in correctional settings is well known and described in several studies. However, suicide-risk screening tools have mostly been developed for at-risk populations outside prisons, and the predictive ability of these scales has only been inferred indirectly. For a little over ten years, the Suicide Probability Scale (Cull and Gill, 1988) has been administered to Quebec inmates beginning a federal sentence; a survey of the literature, however, found no psychometric study of the predictive validity of this questionnaire in a specifically incarcerated population. The objective of the present research was therefore to evaluate the predictive value of the Suicide Probability Scale (SPS) in an incarcerated male population. The research examines whether inmates screened as being at moderate or high risk by the SPS in 1995-1996 actually engaged in suicidal behavior afterwards, while still under the responsibility of correctional services. The results are based on an overall observation period of 11.5 years (1995 to 2006) and confirm that the SPS, in its current form, predicts suicidal behavior. Prediction of suicide risk is improved if the cutoff score is lowered from 50 (the scale authors' current cutoff) to 40 for the specific population of men incarcerated in a penitentiary. SPS scores also identify inmates at risk of hetero-aggressive behavior in prison. The second article evaluated the predictive value of 24 actuarial variables, known at the start of the sentence, in combination with the SPS, in order to increase prediction and make it more specific by reducing the number of false negatives. Predictive ability was analyzed with logistic regression models and the area under the ROC curve. Adding one or two actuarial variables improves the screening of suicidal behaviors over a period of 24 months and even 120 months.
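A hedged sketch of the kind of cutoff analysis described above: given scale scores and an observed outcome, compare sensitivity and specificity at the two candidate cutoffs and compute the area under the ROC curve. The data here are synthetic placeholders, not the study's.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# synthetic SPS-like scores: outcome-positive cases tend to score higher
y = rng.binomial(1, 0.1, size=1000)                      # 1 = later suicidal behavior
scores = np.where(y == 1, rng.normal(55, 12, 1000), rng.normal(38, 12, 1000))

print("AUC:", round(roc_auc_score(y, scores), 3))
for cutoff in (50, 40):
    flagged = scores >= cutoff
    sens = (flagged & (y == 1)).sum() / (y == 1).sum()   # true-positive rate
    spec = (~flagged & (y == 0)).sum() / (y == 0).sum()  # true-negative rate
    print(f"cutoff {cutoff}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```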
APA, Harvard, Vancouver, ISO, and other styles
7

Brunot, Mathieu. "Identification of rigid industrial robots - A system identification perspective." PhD thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/20776/1/BRUNOT_Mathieu_20776.pdf.

Full text
Abstract:
In modern manufacturing, industrial robots are essential components that save cost and increase quality and productivity, for instance. To achieve such goals, high accuracy and high speed are required simultaneously. Designing control laws compliant with such requirements demands high-fidelity mathematical models of those robots; for this purpose, dynamic models are built from experimental data. The main objective of this thesis is thus to provide robotic engineers with automatic tools for identifying the dynamic models of industrial robot arms. To achieve this aim, a comparative analysis of the existing robot identification methods is made, which reveals the advantages and the limitations of each method. From those observations, contributions are presented along three axes. First, the study focuses on the estimation of the joint velocities and accelerations from the measured positions, which is required for model construction. The usual method is based on a home-made prefiltering process that needs reliable knowledge of the system's bandwidths, whereas the system is still unknown. To overcome this dilemma, we propose a method able to estimate the joint derivatives automatically, without any setting from the user. The second axis is dedicated to the identification of the controller, whose knowledge is required by the vast majority of methods but, for copyright reasons, is not always available to the user. To deal with this issue, two methods are suggested; their basic philosophy is to identify the control law in a first step, before identifying the dynamic model of the robot in a second one. The first method identifies the control law parametrically, whereas the second relies on a non-parametric identification. Finally, the third axis deals with the home-made tuning of the decimation filter. The identification of the noise filter is introduced, similarly to methods developed in the system identification community. This allows the dynamic parameters to be estimated automatically with low covariance, and it brings some information about how the noise circulates through the closed-loop system. All the proposed methodologies are validated on an industrial robot with 6 degrees of freedom. Perspectives are outlined for future developments on the identification of robotic systems and other complex problems.
APA, Harvard, Vancouver, ISO, and other styles
8

Youssfi, Younès. "Exploring Risk Factors and Prediction Models for Sudden Cardiac Death with Machine Learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG006.

Full text
Abstract:
Sudden cardiac death (SCD) is defined as a sudden natural death presumed to be of cardiac cause, heralded by abrupt loss of consciousness in the presence of a witness, or, in the absence of a witness, occurring within an hour after the onset of symptoms. Despite progress in clinical profiling and interventions, it remains a major public health problem, accounting for 10 to 20% of deaths in industrialized countries, with survival after SCD below 10%. The annual incidence is estimated at 350,000 in Europe and 300,000 in the United States. Efficient treatments for SCD management are available; one of the most effective options is the implantable cardioverter defibrillator (ICD). However, identifying the best candidates for ICD implantation remains a difficult challenge, with disappointing results so far. This thesis aims to address this problem and to provide a better understanding of SCD in the general population using statistical modeling. We analyze data from the Paris Sudden Death Expertise Center and the French National Healthcare System Database to develop three main works. The first part of the thesis aims to identify new subgroups of SCD to improve current stratification guidelines, which are mainly based on cardiovascular variables; to this end, we use natural language processing methods and clustering analysis to build a meaningful representation of patients' medical histories. The second part aims to build a prediction model of SCD that proposes a personalized and explainable risk score for each patient and accurately identifies very-high-risk subjects in the general population; to this end, we train a supervised classification algorithm, combined with the SHapley Additive exPlanations method, to analyze all medical events that occurred up to 5 years prior to the event. The last part of the thesis aims to identify the most relevant information to select from patients' large medical histories; we propose a bi-level variable selection algorithm for generalized linear models, identifying both individual and group effects among predictors. Our algorithm is based on a Bayesian approach and uses a sequential Monte Carlo method to estimate the posterior distribution of variable inclusion.
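A minimal sketch of the "supervised classifier plus SHAP" pattern mentioned above, using a gradient-boosted classifier on synthetic data; it assumes the third-party `shap` package is installed and is not the thesis's actual pipeline.

```python
import numpy as np
import shap  # third-party package, assumed installed
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# synthetic stand-in for per-patient medical-history features
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# per-patient additive attributions: shap_values[i, j] is feature j's
# contribution to patient i's risk score
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
print("top features for patient 0:", np.argsort(-np.abs(shap_values[0]))[:3])
```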
APA, Harvard, Vancouver, ISO, and other styles
9

Soret, Perrine. "Régression pénalisée de type Lasso pour l’analyse de données biologiques de grande dimension : application à la charge virale du VIH censurée par une limite de quantification et aux données compositionnelles du microbiote." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0254.

Full text
Abstract:
In clinical studies, thanks to technological progress, the amount of information collected on a given patient keeps growing, leading to situations where the number of explanatory variables is greater than the number of individuals. The Lasso method has proved appropriate for circumventing overfitting problems in such high-dimensional settings. This thesis is devoted to the application and development of Lasso-penalized regression for clinical data presenting particular structures. First, in patients with the human immunodeficiency virus, mutations in the virus's genetic structure may be related to the development of drug resistance, and predicting the viral load from the (potentially many) mutations helps guide treatment choice. Below a threshold the viral load is undetectable, so the data are left-censored. We propose two new Lasso approaches based on the Buckley-James algorithm, which imputes censored values by a conditional expectation. By reversing the response, we obtain a right-censored problem, for which non-parametric estimates of the conditional expectation have been proposed in survival analysis. Finally, we propose a parametric estimation based on a Gaussian hypothesis. Secondly, we are interested in the role of the microbiota in the deterioration of respiratory health. Microbiota data come as relative abundances (the proportion of each species per individual, known as compositional data) and have a phylogenetic structure. We established a state of the art of the statistical methods for analyzing microbiota data. Because of their novelty, few recommendations exist on the applicability and effectiveness of the proposed methods; a simulation study allowed us to compare the selection capacity of the penalization methods proposed specifically for this type of data. We then apply this research to the analysis of the association between bacteria/fungi and the decline of pulmonary function in patients with cystic fibrosis from the MucoFong project.
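A hedged sketch of the parametric (Gaussian) variant described above: left-censored responses are replaced by their conditional expectation under a normal model, E[Y | Y < c] = mu - sigma * phi(a) / Phi(a) with a = (c - mu) / sigma, and a Lasso is then fitted on the completed data. This is a one-pass illustration on synthetic data, not the thesis's iterative algorithm.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p, c = 300, 40, -1.0                      # c: limit of quantification (log scale)
X = rng.normal(size=(n, p))
y_true = X[:, 0] - 0.7 * X[:, 5] + rng.normal(scale=0.5, size=n)
censored = y_true < c                        # below the threshold, only "< c" is known
y_obs = np.where(censored, c, y_true)

# Gaussian imputation of the left-censored values: E[Y | Y < c]
mu, sigma = y_obs.mean(), y_obs.std()        # crude moment estimates (assumption)
a = (c - mu) / sigma
y_imputed = y_obs.copy()
y_imputed[censored] = mu - sigma * norm.pdf(a) / norm.cdf(a)

model = Lasso(alpha=0.05).fit(X, y_imputed)
print("nonzero coefficients:", np.nonzero(model.coef_)[0])
```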
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Horizons de prédiction variable"

1

Lionheart, Jeffery. Horizons: Variable. Independently Published, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

New Horizons in Time Domain Astronomy: Proceedings of the 285th Symposium of the International Astronomical Union, Held in Oxford, United Kingdom, September 19–23, 2011. Cambridge University Press, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Horizons de prédiction variable"

1

Maddox, R. Neil. "Behavioral Intentions as an Intervening Variable in Housing Decisions: A Longitudinal Study." In Marketing Horizons: A 1980's Perspective, 28–32. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10966-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Auer, Emma. "Empathy: Is it the Missing Independent Dispositional Variable in the Study of Innovative Behavior?" In Marketing Horizons: A 1980's Perspective, 64–67. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10966-4_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

MacPherson, Ronnie, Amy Jersild, Dennis Bours, and Caroline Holo. "Assessing the Evaluability of Adaptation-Focused Interventions: Lessons from the Adaptation Fund." In Transformational Change for People and the Planet, 173–86. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-78853-7_12.

Full text
Abstract:
Evaluability assessments (EAs) have differing definitions, focus on various aspects of evaluation, and have been implemented inconsistently in the last several decades. Climate change adaptation (CCA) programming presents particular challenges for evaluation given shifting baselines, variable time horizons, adaptation as a moving target, and uncertainty inherent to climate change and its extreme and varied effects. The Adaptation Fund Technical Evaluation Reference Group (AF-TERG) developed a framework to assess the extent to which the Fund's portfolio of projects has in place structures, processes, and resources capable of supporting credible and useful monitoring, evaluation, and learning (MEL). The framework was applied to the entire project portfolio to determine the level of evaluability and make recommendations for improvement. This chapter explores the assessment's findings on designing programs and projects to help minimize the essential challenges in the field. It discusses how the process of EA can help identify opportunities for strengthening both evaluability and a project's MEL more broadly. A key conclusion was that the strength and quality of a project's overall approach to MEL is a major determinant of a project's evaluability. Although the framework was used retroactively, EAs could also be used prospectively as quality assurance tools at the pre-implementation stage.
APA, Harvard, Vancouver, ISO, and other styles
4

Danka, J. "Probability of failure calculation of dikes based on Monte Carlo simulation." In Geotechnical Engineering: New Horizons. IOS Press, 2011. https://doi.org/10.3233/978-1-60750-808-3-181.

Full text
Abstract:
The floods of the first half of 2010 in our country have again revealed the importance of probability-of-failure calculations for soil dikes. In this case we need an effective routine such as Monte Carlo analysis, which was developed for modeling multiple problems using random variables. For soil dikes, the random variables are the soil parameters, the geometrical parameters, and the loads. Each variable can be defined by its statistical parameters and distribution, so we must first determine the factors that we would like to build into the calculation, and then define them with the aid of statistical analysis. The practice of probability-of-failure calculation in geotechnical engineering is not widespread, so it is hard to find any commercial software that can solve the global problem. The article shows an example of the probability-of-failure calculation for a homogeneous dike section.
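A minimal Monte Carlo sketch of the approach the chapter describes: draw the random inputs from assumed distributions, evaluate a limit-state function g (failure when g < 0), and estimate the probability of failure as the failure fraction. The limit-state function and the distribution parameters below are illustrative assumptions, not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1_000_000

# assumed input distributions (illustrative): friction angle, cohesion, flood level
phi = rng.normal(30.0, 3.0, N)                # friction angle, degrees
c = rng.lognormal(np.log(10.0), 0.3, N)       # cohesion, kPa
h = rng.gumbel(2.0, 0.4, N)                   # flood level, m

# toy limit-state function: resistance minus load effect; failure when g < 0
g = 0.8 * c + 2.5 * np.tan(np.radians(phi)) * 10.0 - 5.0 * h

pf = np.mean(g < 0.0)
se = np.sqrt(pf * (1.0 - pf) / N)             # Monte Carlo standard error
print(f"P_f = {pf:.2e} (+/- {se:.1e})")
```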
APA, Harvard, Vancouver, ISO, and other styles
5

Emerson, Robert M., and Blair Paley. "Organizational Horizons and Complaint-Filing." In The Uses Of Discretion, 231–48. Oxford University PressOxford, 1993. http://dx.doi.org/10.1093/oso/9780198257622.003.0007.

Full text
Abstract:
Many socio-legal analyses conceptualize discretion as decisions made relatively unfettered by rules, and advocate ‘confining, structuring and checking’ decision-making as an antidote to the resulting ‘problems and abuses’. For example, in his classic analysis of discretion, Davis argues that a ‘public officer has discretion whenever the effective limits on his power leave him free to make a choice among possible courses of action or inaction’ (1969: 4). In this view discretion involves decisions that are unconstrained by rules, whether because rules do not apply, are vague, or multiple. Nondiscretionary decisions, in contrast, are those that accord with (and presumably are determined and constrained by) rules. Such an approach therefore dichotomizes ‘rules’ and ‘discretion’: the more of the former the less of the latter, and vice versa. It is now well established that a sharp rules-discretion dichotomy presumes and perpetuates overly narrow and mechanistic treatments of discretionary decision-making. This dichotomy obscures the contingent and variable processes of rule interpretation inevitably unspecified by the rules themselves. It specifically neglects the use of discretion to determine the relevant ‘facts’ or ‘situation’ to which specific rules apply, since, in the words of H. L. A. Hart, no rule can ‘itself step forward to claim its own instances’ (1961: 123). (See also Heritage 1984: 120-9.)
APA, Harvard, Vancouver, ISO, and other styles
6

Racinais, J., and C. Plomteux. "Design of slab-on-grades supported with soil reinforced by rigid inclusions." In Geotechnical Engineering: New Horizons. IOS Press, 2011. https://doi.org/10.3233/978-1-60750-808-3-105.

Full text
Abstract:
The design of slabs-on-grade for industrial and logistics buildings is a complex exercise. The design needs to consider the different loading types and configurations (uniform or alternated loading, racks, live loadings…) together with the positions of the hinged construction joints relative to the loads, whose position and intensity can vary during the life of the structure. The non-uniform distribution of the soil reaction in ground reinforced with rigid inclusions creates an additional stress in the slab with a different pattern from those of the loads and of the joints. The optimization of the slab design thus becomes a complex problem with three intertwined patterns (loading, joints, and rigid inclusions) that can move relative to one another, usually with no typical symmetry conditions. Existing codes of practice dedicated to slabs-on-grade can only consider uniform soil conditions, and the typical size of these structures precludes modelling the full extent of the slab. Through the decomposition of this complex problem into the sum of three unit variable-separated problems, this paper presents a simple and comprehensive method that takes into account all the parameters of the equation, as sketched below. The method is a powerful solution that is easy to use while allowing precise optimization of the design of slabs-on-grade. The approach has been validated and calibrated against an extensive number of finite element calculations and has been integrated in the French ASIRI national research programme.
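To make the decomposition idea concrete, here is a minimal numerical sketch of superposing three separately computed unit responses (loading, joints, rigid-inclusion grid) on a common slab grid. The fields and coefficients are entirely invented; the paper's actual unit problems and their calibration are not reproduced here.

```python
# Hypothetical sketch: the slab response approximated as the sum of three
# separately computed unit fields (loading, joints, rigid inclusions).
# All fields and coefficients below are invented placeholders.
import numpy as np

nx = ny = 50
x, y = np.meshgrid(np.linspace(0, 10, nx), np.linspace(0, 10, ny))

# Unit problem 1: bending moment from the applied loading pattern alone.
m_load = 5.0 * np.exp(-((x - 3.0) ** 2 + (y - 5.0) ** 2))
# Unit problem 2: moment induced by the construction joint layout alone.
m_joint = -2.0 * np.exp(-((x - 6.0) ** 2) / 0.5)
# Unit problem 3: moment from the non-uniform reaction over the inclusion grid.
m_incl = 1.5 * np.cos(2 * np.pi * x / 2.0) * np.cos(2 * np.pi * y / 2.0)

# Superposed field; in design, the worst combination over all relative
# positions of the three patterns would govern.
m_total = m_load + m_joint + m_incl
print("max moment in superposed field:", round(float(m_total.max()), 2))
```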
APA, Harvard, Vancouver, ISO, and other styles
7

Stock, James H., and Mark W. Watson. "Variable Trends in Economic Time Series." In Long-Run Economic Relationships, 17–50. Oxford: Oxford University Press, 1991. http://dx.doi.org/10.1093/oso/9780198283393.003.0002.

Full text
Abstract:
The two most striking historical features of aggregate output are its sustained long-run growth and its recurrent fluctuations around this growth path. Real per capita GNP, consumption and investment in the United States during the postwar era are plotted in Figure 1. Both growth and deviations from the growth trend, often referred to as ‘business cycles’, are apparent in each series. Over horizons of a few years, these shorter cyclical swings can be pronounced; for example, the 1953, 1957, and 1974 recessions are evident as substantial temporary declines in aggregate activity. These cyclical fluctuations are, however, dwarfed in magnitude by the secular expansion of output. But just as there are cyclical swings in output, so too are there variations in the growth trend: growth in GNP in the 1960s was much stronger than it was in the 1950s. Thus, changes in long-run patterns of growth are an important feature of postwar aggregate economic activity.
APA, Harvard, Vancouver, ISO, and other styles
8

Lee, Mo Yee, Siu-Man Ng, Pamela Pui Yu Leung, and Cecilia Lai Wan Chan. "Spiritual Growth and Transformation: Expanding Life’s Horizons." In Integrative Body–Mind–Spirit Social Work, 171–96. New York, NY: Oxford University Press, 2009. http://dx.doi.org/10.1093/oso/9780195301021.003.0008.

Full text
Abstract:
Despite the fact that the social work profession was developed by pioneers who had strong spiritual motivations for service, the profession continues to downplay spirituality as an integral part of social work practice. As a profession, social work chose to build its professional base upon scientific and humanistic traditions, divorced from spirituality. However, spirituality is a necessary component of an ordinary, even of a secular, life. Problems like domestic violence, burnout in the workplace, gambling addiction, substance abuse, and so on are related to a sense of inner emptiness, meaninglessness, and the absence of compassion and love in individuals and families. These are more or less spiritual problems. In fact, alcoholism has been described as a futile search for spirituality. In a review by Hodge (2001a), spirituality was found to be a significant variable in recovery from divorce, homelessness, sexual assault, and substance abuse. Studies in a healthcare setting also found that spiritual resources foster meaning and a sense of life affirmation and personal growth in cancer patients at different stages (Gall & Cornblat, 2002; Parry, 2003) and that daily spiritual experiences may mitigate physical, cognitive, and emotional burnout (Holland & Neimeyer, 2005).
APA, Harvard, Vancouver, ISO, and other styles
9

Yu, T. R. "Introduction." In Chemistry of Variable Charge Soils. Oxford University Press, 1997. http://dx.doi.org/10.1093/oso/9780195097450.003.0004.

Full text
Abstract:
The constitution and properties of soils have their macroscopic and microscopic aspects. Macroscopically, the profile of a soil consists of several horizons, each containing numerous aggregates and blocks of soil particles of different sizes. These structures are visible to the naked eye. Microscopically, a soil is composed of many kinds of minerals and organic matter interlinked in a complex manner. In addition, a soil is always inhabited by numerous microorganisms which can be observed by modern scientific instruments. To study these various aspects, several branches of soil science, such as soil geography, soil mineralogy, and soil microbiology, have been developed. If examined on a more minute scale, it can be found that most of the chemical reactions in a soil occur at the interface between soil colloidal surface and solution or in the solution adjacent to this interface. This is because these colloidal surfaces carry negative as well as positive charges, thus reacting with ions, protons, and electrons of the solution. The presence of surface charge is the basic cause of the fertility of a soil and is also the principal criterion that distinguishes soil from pure sand. The chief objective of soil chemical research is to deal with the interactions among charged particles (colloids, ions, protons, electrons) and their chemical consequences in soils. As depicted in Fig. 1.1, these charged particles are closely interrelated. The surface charge of soil colloids is the basic reason that a soil possesses a series of chemical properties. At present, considerable knowledge has been accumulated about the permanent charge of soils. On the other hand, our understanding is still at an early stage about the mechanisms and the affecting factors of variable charge. The quantity of surface charge determines the amount of ions that a soil can adsorb, whereas the surface charge density is the determining factor of adsorbing strength for these ions. Because of the complexities in the composition of soils, the distribution of positive and negative charges is uneven on the surface of soil colloidal particles. Insight into the origin and the distribution of these charges should contribute to a sound foundation of the surface chemistry of soils.
APA, Harvard, Vancouver, ISO, and other styles
10

Spera, S. J., and J. R. Kyle. "Preliminary Petrographic and Isotopic Investigation of the Main Pass 299 Cap Rock-Hosted Sulfur Deposit." In Selected Mineral Deposits of the Gulf Coast and Southeastern United States, 85–96. Society of Economic Geologists, 1995. http://dx.doi.org/10.5382/gb.24.05.

Full text
Abstract:
Selected samples from a representative core (Hole SW-1-27A) through the Main Pass 299 cap rock-hosted sulfur deposit were studied petrographically, and complementary geochemical and stable isotope analyses were conducted. The intervals selected include core from the oil-rich, sulfur-barren cap rock above the oil-water contact through the sulfur-bearing zone to the anhydrite cap rock zone (from 1,670 ft to 1,961 ft subsea). Descriptions of texture and morphology and whole-rock composition, including any hydrocarbon and sulfur occurrence, are provided. The various lithologies examined in this petrographic analysis included cores and thin sections taken from the sulfur-barren, oil-rich cap rock, the oil/water contact interval, and the sulfur-rich and sulfur-barren anhydrite horizons. The core in this well is consistent in composition and texture with a bacterially produced salt dome calcite cap rock. Much of the upper, sulfur-barren cap rock is composed of numerous micro-faulted and calcite-cemented imbricate structures and breccias. Most of the rock in these zones consists of a micritic matrix with variable amounts of rhombic dolomite (probably representing a residual concentration from the diapiric salt); variable amounts of secondary calcite and celestite occur in late pores and fractures. The sulfur-bearing horizon typically occurs in areas of relatively high porosity (10-30%) and is associated with sparry calcite, in addition to micritic limestone. Hydrocarbon (oil) horizons typically occur in the uppermost part of the well, from ∼1,670 ft to ∼1,722 ft subsea. Hydrocarbon stringers, from late-stage migrations, were also observed in a deeper zone, from ∼1,879 ft to 1,922 ft subsea.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Horizons de prédiction variable"

1

Zidan, A., and E. F. El-Saadany. "Network reconfiguration in balanced distribution systems with variable load demand and variable renewable resources generation." In 2012 IEEE Power & Energy Society General Meeting. New Energy Horizons - Opportunities and Challenges. IEEE, 2012. http://dx.doi.org/10.1109/pesgm.2012.6345160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Crowley, Daniel, Bradford Robertson, Rebecca Douglas, Dimitri Mavris, and Barry Hellman. "Aerodynamic Surrogate Modeling of Variable Geometry." In 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2012. http://dx.doi.org/10.2514/6.2012-268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pavleski, Aleksandar. "RE-THINKING SECURITY THROUGH THE PRISM OF THE SECURITY STUDIES APPROACH." In SECURITY HORIZONS. Faculty of Security- Skopje, 2022. http://dx.doi.org/10.20544/icp.3.6.22.p14.

Full text
Abstract:
Security is a variable category. What makes people and states safe, or feel safe, depends upon two basic phenomena. The most obvious source of challenge to our security is a factor that threatens the values of the people and the values of the state; the first and most objective of these are physical threats, such as military threats. The second phenomenon relates to the interpretation of the impact of the environment on the meaning of security, or how the environment influences perceptions and interpretations of security and insecurity. The paper focuses on exploring different interpretations of security, specifically through the prism of the four generations of security studies, which provide the necessary theoretical framework for a comprehensive analysis in this regard. Hence, the paper analyzes the evolutionary nature of the understanding of security across a variety of historical and environmental contexts. Keywords: security, insecurity, security studies, securitization, Copenhagen School of security studies
APA, Harvard, Vancouver, ISO, and other styles
4

Ross, Michael, Greg Meess, and Ephrahim Garcia. "Dynamically Variable Blade Geometry for Wind Energy." In 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2011. http://dx.doi.org/10.2514/6.2011-835.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tong, Xiaoling, and E. Luke. "An Error Transport Equation in Primitive Variable Formulation." In 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2011. http://dx.doi.org/10.2514/6.2011-1295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Whitley, Ryan, and Cesar Ocampo. "Direct Multiple Shooting Optimization with Variable Problem Parameters." In 47th AIAA Aerospace Sciences Meeting including The New Horizons Forum and Aerospace Exposition. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2009. http://dx.doi.org/10.2514/6.2009-803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Antunes, Eduardo, Andre Silva, and Jorge Barata. "Evaluation of Numerical Variable Density Approach to Cryogenic Jets." In 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2012. http://dx.doi.org/10.2514/6.2012-1282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Diao, Ruisheng, N. Samaan, Y. Makarov, R. Hafen, and Jian Ma. "Planning for variable generation integration through balancing authorities consolidation." In 2012 IEEE Power & Energy Society General Meeting. New Energy Horizons - Opportunities and Challenges. IEEE, 2012. http://dx.doi.org/10.1109/pesgm.2012.6345108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ellis, A., R. Nelson, E. Von Engeln, J. MacDowell, L. Casey, E. Seymour, W. Peter, and J. R. Williams. "Review of existing reactive power requirements for variable generation." In 2012 IEEE Power & Energy Society General Meeting. New Energy Horizons - Opportunities and Challenges. IEEE, 2012. http://dx.doi.org/10.1109/pesgm.2012.6345555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yamazaki, Wataru, and Dimitri Mavriplis. "Derivative-Enhanced Variable Fidelity Surrogate Modeling for Aerodynamic Functions." In 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2011. http://dx.doi.org/10.2514/6.2011-1172.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Horizons de prédiction variable"

1

Djamai, N., R. A. Fernandes, L. Sun, F. Canisius, and G. Hong. Python version of Simplified Level 2 Prototype Processor for retrieving canopy biophysical variables from Sentinel-2 multispectral data. Natural Resources Canada/CMSS/Information Management, 2024. http://dx.doi.org/10.4095/p8stuehwyc.

Full text
Abstract:
The Copernicus Sentinel-2 mission is designed to provide data that can be used to map vegetation biophysical variables at a global scale. Estimates of vegetation biophysical variables are not yet produced operationally by the Sentinel-2 ground segment. Instead, a prediction algorithm, called the Simplified Level 2 Prototype Processor (SL2P), has been defined by the European Space Agency. SL2P uses two backpropagation neural networks, one to estimate the vegetation biophysical variable and the other to quantify the uncertainty of the estimate, using a database of globally representative canopy conditions populated with canopy radiative transfer model simulations. SL2P has been implemented in the Canada Centre for Remote Sensing's LEAF toolbox, which builds on Google Earth Engine. This document describes a Python implementation of SL2P (SL2P-PYTHON) that provides estimates identical to those obtained with LEAF when using the same input Sentinel-2 image.
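The two-network pattern described above can be sketched generically as follows. This is not the SL2P-PYTHON code: the training data are synthetic stand-ins for radiative-transfer simulations, and the network sizes and the residual-based uncertainty target are illustrative assumptions.

```python
# Hypothetical sketch of the two-network pattern: one regressor for the
# variable, a second one for its uncertainty. Data, sizes, and the residual
# target are invented; this is not the actual SL2P-PYTHON code.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in training set: 8 "band reflectances" -> a leaf area index (LAI)
# target, in place of canopy radiative-transfer simulations.
X = rng.uniform(0.0, 0.6, size=(5000, 8))
lai = 6.0 * X[:, 7] / (X[:, 3] + X[:, 7] + 1e-6)        # toy relation
lai_obs = lai + rng.normal(0.0, 0.2, size=lai.shape)    # noisy target

# Network 1: estimate the biophysical variable itself.
net_var = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net_var.fit(X, lai_obs)

# Network 2: estimate the uncertainty, here trained on absolute residuals.
resid = np.abs(lai_obs - net_var.predict(X))
net_unc = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net_unc.fit(X, resid)

x_new = rng.uniform(0.0, 0.6, size=(1, 8))
print(f"estimate: {net_var.predict(x_new)[0]:.2f} "
      f"+/- {net_unc.predict(x_new)[0]:.2f}")
```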
APA, Harvard, Vancouver, ISO, and other styles
2

Broto, Carmen, and Olivier Hubert. Desertification in Spain: Is there any impact on credit to firms? Madrid: Banco de España, February 2025. https://doi.org/10.53479/39119.

Full text
Abstract:
We study whether the process of desertification in Spain has an impact on the volume of credit granted to Spanish non-financial corporations (NFCs). To this end, we use a panel data model at the municipal level from 1984 to 2019 for bank loans obtained from the Banco de España's central credit register, where the main explanatory variable is the aridity index. Given that aridity is a long-term climatic phenomenon, we also estimate the model with local projections (Jordà, 2005) to disentangle the impact of aridity on credit to NFCs over longer horizons. Consistent with the literature, we find that higher aridity leads to lower credit to firms at both short- and long-term horizons. We also show that the effect of aridity on credit is sector-specific and depends on the climate zone. Credit to the agricultural sector is most negatively affected by this climatic hazard, while this phenomenon leads to more credit to the tourism sector in the most humid regions.
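A stylized, single-series version of the local-projection estimator (Jordà, 2005) mentioned in this abstract is sketched below: for each horizon h, the outcome at t+h is regressed on the explanatory variable at t, and the sequence of slope coefficients traces the dynamic response. The data are simulated and the specification (one lagged control, no fixed effects) is far simpler than the paper's municipal panel.

```python
# Hypothetical sketch of local projections: regress the outcome h periods
# ahead on today's "shock" variable, one regression per horizon.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 400
aridity = rng.normal(size=T)
# Synthetic credit growth that responds negatively to aridity, with decay.
credit = np.zeros(T)
for t in range(1, T):
    credit[t] = 0.5 * credit[t - 1] - 0.3 * aridity[t] + rng.normal(scale=0.5)

for h in range(9):
    y = credit[h + 1:]  # outcome h periods after t, for t = 1 .. T-h-1
    X = sm.add_constant(np.column_stack([
        aridity[1:T - h],    # aridity at time t (the "shock")
        credit[:T - h - 1],  # lagged credit as a control
    ]))
    beta_h = sm.OLS(y, X).fit().params[1]
    print(f"h={h}: estimated response of credit to aridity = {beta_h:+.3f}")
```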
APA, Harvard, Vancouver, ISO, and other styles
3

Cárdenas-Cárdenas, Julián Alonso, Deicy J. Cristiano-Botia, and Nicolás Martínez-Cortés. Colombian inflation forecast using Long Short-Term Memory approach. Banco de la República, June 2023. http://dx.doi.org/10.32468/be.1241.

Full text
Abstract:
We use Long Short-Term Memory (LSTM) neural networks, a deep learning technique, to forecast Colombian headline inflation one year ahead through two approaches. The first uses only information from the target variable, while the second incorporates additional information from some relevant variables. We apply rolling samples in the traditional neural network construction process, selecting the hyperparameters with criteria that minimize the forecast error. Our results show a better forecasting capacity for the network with information from additional variables, surpassing both the other LSTM application and ARIMA models optimized for forecasting (with and without explanatory variables). This improvement in forecasting accuracy is most pronounced over longer time horizons, specifically from the seventh month onwards.
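A minimal sketch of the univariate approach is shown below: an LSTM trained on rolling windows of past inflation to predict the value twelve months ahead. The series, window length, architecture, and training settings are illustrative assumptions, not the paper's choices.

```python
# Hypothetical sketch: rolling windows of past inflation feed an LSTM that
# predicts inflation 12 months ahead. Series and settings are invented.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)

# Synthetic monthly "inflation" series (percent), 30 years long.
t = np.arange(360)
infl = 4.0 + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0.0, 0.3, t.size)

window, horizon = 24, 12  # 24 months of history -> value 12 months ahead
X, y = [], []
for i in range(len(infl) - window - horizon):
    X.append(infl[i : i + window])
    y.append(infl[i + window + horizon - 1])
X = np.asarray(X)[..., None]  # shape: (samples, window, 1 feature)
y = np.asarray(y)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

last_window = infl[-window:][None, :, None]
print("12-month-ahead forecast:", model.predict(last_window, verbose=0)[0, 0])
```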
APA, Harvard, Vancouver, ISO, and other styles
4

Hunter, Fraser, and Martin Carruthers. Iron Age Scotland. Society of Antiquaries of Scotland, September 2012. http://dx.doi.org/10.9750/scarf.09.2012.193.

Full text
Abstract:
The main recommendations of the panel report can be summarised under five key headings:

- Building blocks: The ultimate aim should be to build rich, detailed and testable narratives situated within a European context, and addressing phenomena from the longue durée to the short-term over international to local scales. Chronological control is essential to this, and effective dating strategies are required to enable generation-level analysis. The 'serendipity factor' of archaeological work must be enhanced by recognising and getting the most out of information-rich sites as they appear.
  - There is a pressing need to revisit the archives of excavated sites to extract more information from existing resources, notably through dating programmes targeted at regional sequences; the Western Isles Atlantic roundhouse sequence is an obvious target.
  - Many areas still lack anything beyond the baldest of settlement sequences, with little understanding of the relations between key site types. There is a need to get at least basic sequences from many more areas, either from sustained regional programmes or targeted sampling exercises.
  - Much of the methodologically innovative work and new insights have come from long-running research excavations. Such large-scale research projects are an important element in developing new approaches to the Iron Age.
- Daily life and practice: There remains great potential to improve the understanding of people's lives in the Iron Age through fresh approaches to, and integration of, existing and newly-excavated data.
  - House use. Rigorous analysis and innovative approaches, including experimental archaeology, should be employed to get the most out of the understanding of daily life through the strengths of the Scottish record, such as deposits within buildings, organic preservation and waterlogging.
  - Material culture. Artefact studies have the potential to be far more integral to understandings of Iron Age societies, both from the rich assemblages of the Atlantic area and less-rich lowland finds. Key areas of concern are basic studies of material groups (including the function of everyday items such as stone and bone tools, and the nature of craft processes: iron, copper alloy, bone/antler and shale offer particularly good evidence). Other key topics are the role of 'art' and other forms of decoration, and comparative approaches to assemblages to obtain synthetic views of the uses of material culture.
  - Field to feast. Subsistence practices are a core area of research essential to understanding past society, but different strands of evidence need to be more fully integrated, with a 'field to feast' approach, from production to consumption. The working of agricultural systems is poorly understood, from agricultural processes to cooking practices and cuisine: integrated work between different specialisms would assist greatly. There is a need for conceptual as well as practical perspectives, e.g. how were wild resources conceived?
  - Ritual practice. There has been valuable work in identifying depositional practices, such as deposition of animals or querns, which are thought to relate to house-based ritual practices, but there is great potential for further pattern-spotting, synthesis and interpretation.
- Landscapes and regions:
  - Concepts of 'region' or 'province', and how they changed over time, need to be critically explored, because they are contentious, poorly defined and highly variable.
  - What did Iron Age people see as their geographical horizons, and how did this change?
  - Attempts to understand the Iron Age landscape require improved, integrated survey methodologies, as existing approaches are inevitably partial.
  - Aspects of the landscape's physical form and cover should be investigated more fully, in terms of vegetation (known only in outline over most of the country) and sea level change in key areas such as the firths of Moray and Forth.
  - Landscapes beyond settlement merit further work, e.g. the use of the landscape for deposition of objects or people, and what this tells us of contemporary perceptions and beliefs.
  - Concepts of inherited landscapes (how Iron Age communities saw and used this long-lived land) and social resilience to issues such as climate change should be explored more fully.
- Reconstructing Iron Age societies: The changing structure of society over space and time in this period remains poorly understood. Researchers should interrogate the data for better and more explicitly-expressed understandings of social structures and relations between people.
- The wider context: Researchers need to engage with the big questions of change on a European level (and beyond). Relationships with neighbouring areas (e.g. England, Ireland) and analogies from other areas (e.g. Scandinavia and the Low Countries) can help inform Scottish studies. Key big topics are:
  - The nature and effect of the introduction of iron.
  - The social processes lying behind evidence for movement and contact.
  - Parallels and differences in social processes and developments.
  - The changing nature of houses and households over this period, including the role of 'substantial houses', from crannogs to brochs, the development and role of complex architecture, and the shift away from roundhouses.
  - The chronology, nature and meaning of hillforts and other enclosed settlements.
  - Relationships with the Roman world.
APA, Harvard, Vancouver, ISO, and other styles