Journal articles on the topic 'Ensembles somme'

Consult the top 50 journal articles for your research on the topic 'Ensembles somme.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Richarté-Manfredi, Catherine. "Céramiques glaçurées et à décor vert et brun des épaves islamiques de Provence (fin IXe-début Xe siècle)." Arqueología y Territorio Medieval 27 (December 22, 2020): 63–77. http://dx.doi.org/10.17561/aytm.v27.5433.

Full text
Abstract:
Glazed ceramic wares make up an admittedly rather modest share of the cargoes studied from the Islamic shipwrecks of Provence, yet addressing them is of crucial importance for understanding the diffusion of techniques and know-how in the early medieval Mediterranean. The goods carried by the ships from the south-eastern coasts and the Iberian Levant make it possible to pin down, as closely as possible, the dating of these submerged 'closed assemblages'. Moreover, comparing these data with those from the south-east of the Iberian Peninsula (Pechina/Almería) is indispensable for understanding the commercial dynamics of the early medieval Mediterranean.
2

Mc Andrew, Marie, and Michel Ledoux. "La concentration ethnique dans les écoles de langue française de l’île de Montréal : un portrait statistique." Cahiers québécois de démographie 24, no. 2 (March 25, 2004): 343–70. http://dx.doi.org/10.7202/010192ar.

Full text
Abstract:
Using a composite indicator corresponding to the sum of the mutually exclusive subsets of pupils who were born abroad, or born in Quebec to parents who were born abroad or are allophone, the authors analyse the phenomenon of ethnic concentration in the French-language schools of the Island of Montreal. The majority of pupils in this target population attend schools where they make up more than 50% of the student body. However, most schools include several ethnic groups, which rules out describing the situation as a ghetto phenomenon. Moreover, ethnic concentration varies greatly across school boards and across the community considered, with recently arrived communities being more concentrated than longer-established ones, at least in the French sector, which is the sole focus of this study. Finally, relating ethnic concentration to the schools' deprivation index shows that these two factors are not clearly correlated.
3

DURU, M. "Le volume d’herbe disponible par vache : un indicateur synthétique pour évaluer et conduire un pâturage tournant." INRAE Productions Animales 13, no. 5 (October 22, 2000): 325–36. http://dx.doi.org/10.20870/productions-animales.2000.13.5.3801.

Full text
Abstract:
Managing a grazing system requires defining three types of rules in a consistent way. Planning rules are most often based on regional references (stocking rate per season as a function of nitrogen inputs). Operational rules (residual sward height, interval between two grazings) come from experiments. Adjustment rules are generally not specified precisely; however, means of adjustment are proposed (adding or removing paddocks), as are indicators for triggering these adjustments (the state of the grass, for example). The difficulty in managing a grazing system lies in making these three sets of rules consistent. It is, for example, a matter of sizing the area allocated per cow so as to obtain a residual sward height that allows high agronomic efficiency, while minimising the adjustments to be made between seasons and years. Based on grazing records from three livestock farm networks, we show that the volume of herbage available per cow equivalent, VHD (the sum over the different paddocks of the products of area and sward height, divided by the number of cows), is an indicator that links the different grazing management rules. In addition, detailed observations of the structure of the grazed sward for different VHD values make it possible to define thresholds that reconcile paddock-scale recommendations with simplicity of management.
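For readability, the VHD indicator defined in words above can be written as follows (notation introduced here for illustration, not taken from the paper):

$$\mathrm{VHD} = \frac{1}{N_{\text{cows}}} \sum_{i=1}^{n} S_i \, h_i,$$

where $S_i$ and $h_i$ are the area and sward height of paddock $i$, $n$ is the number of paddocks, and $N_{\text{cows}}$ is the number of cow equivalents.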
4

WINDEATT, T., and G. ARDESHIR. "DECISION TREE SIMPLIFICATION FOR CLASSIFIER ENSEMBLES." International Journal of Pattern Recognition and Artificial Intelligence 18, no. 05 (August 2004): 749–76. http://dx.doi.org/10.1142/s021800140400340x.

Full text
Abstract:
The goal of designing an ensemble of simple classifiers is to improve the accuracy of a recognition system. However, the performance of ensemble methods is problem-dependent and the classifier learning algorithm has an important influence on ensemble performance. In particular, base classifiers that are too complex may result in overfitting. In this paper, the performance of Bagging, Boosting and Error-Correcting Output Code (ECOC) is compared for five decision tree pruning methods. A description is given for each of the pruning methods and the ensemble techniques. AdaBoost.OC which is a combination of Boosting and ECOC is compared with the pseudo-loss based version of Boosting, AdaBoost.M2 and the influence of pruning on the performance of the ensembles is studied. Motivated by the result that both pruned and unpruned ensembles made by AdaBoost.OC give similar accuracy, pruned ensembles are compared with ensembles of Decision Stumps. This leads to the hypothesis that ensembles of simple classifiers may give better performance for some problems. Using the application of face recognition, it is shown that an AdaBoost.OC ensemble of Decision Stumps outperforms an ensemble of pruned C4.5 trees for face identification, but is inferior for face verification. The implication is that in some real-world tasks to achieve best accuracy of an ensemble, it may be necessary to select base classifier complexity.
5

Hsu, Kuo-Wei. "A Theoretical Analysis of Why Hybrid Ensembles Work." Computational Intelligence and Neuroscience 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/1930702.

Full text
Abstract:
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose using a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains open. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles.
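As a concrete illustration of the kind of hybrid ensemble analysed here, a minimal scikit-learn sketch mixing decision trees and naïve Bayes as base learners is shown below; the dataset, member counts and voting scheme are illustrative assumptions, not the authors' experimental setup.

```python
# Illustrative only: a hybrid ensemble whose members come from two different
# algorithms (decision trees and naive Bayes), combined by soft voting.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Each half of the ensemble is diversified by bootstrap resampling.
tree_half = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0)
nb_half = BaggingClassifier(GaussianNB(), n_estimators=10, random_state=0)

hybrid = VotingClassifier(estimators=[("trees", tree_half), ("nb", nb_half)],
                          voting="soft")
print("10-fold CV accuracy:", cross_val_score(hybrid, X, y, cv=10).mean())
```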
6

Li, Peijing, Yun Su, Qianqian Huang, Jun Li, and Jingxian Xu. "Experimental study on the thermal regulation performance of winter uniform used for high school students." Textile Research Journal 89, no. 12 (July 31, 2018): 2316–29. http://dx.doi.org/10.1177/0040517518790977.

Full text
Abstract:
To understand the effectiveness of some garment adjustment designs for high school uniform in winter, manikin tests and subjective wear trials were carried out. Five series of school uniform ensembles were involved in the experiments. They were further collocated into 17 ensemble configurations with detachable designs (ensembles A and B) and opening structures (ensembles C, D, and E). As manikin test results showed, the thermal insulation of ensembles A, B and C varied most significantly due to their adjustment design. The possible thermal insulation regulation levels were approximately 68% and 80% for ensembles A and B, and 60% and 90% for ensemble C. Two human trials that simulated students’ daily movements between indoor and outdoor classes were conducted with ensemble A. Two climate chambers were used at the same time for indoor and outdoor environment simulation. In Case X, where ensemble A was assumed to be non-detachable, skin temperatures that were 0.6℃ lower were finally observed compared to Case Y, where ensemble A was detachable. Moreover, significantly ( p < 0.1) better thermal comfort and thermal sensation evaluations were given during low-intensity activities in Case Y, especially for the torso segments. The detachable high school uniform design was finally proved to be efficient in improving human thermal comfort under various class environments. It was also concluded that more protective measures should be adopted for the hands and face in the school uniform design process.
7

Descamps, L., and O. Talagrand. "On Some Aspects of the Definition of Initial Conditions for Ensemble Prediction." Monthly Weather Review 135, no. 9 (September 1, 2007): 3260–72. http://dx.doi.org/10.1175/mwr3452.1.

Full text
Abstract:
Four methods for initialization of ensemble forecasts are systematically compared, namely the methods of singular vectors (SV) and bred modes (BM), as well as the ensemble Kalman filter (EnKF) and the ensemble transform Kalman filter (ETKF). The comparison is done on synthetic data with two models of the flow, namely, a low-order model introduced by Lorenz and a three-level quasigeostrophic atmospheric model. For the latter, both cases of a perfect and an imperfect model are considered. The performance of the various initialization methods is assessed in terms of the statistical reliability and resolution of the ensuing predictions. The relative performance of the four methods, which is statistically significant to a range of about 6 days, is in the order EnKF > ETKF > BM > SV. The difference between the former two methods and the latter two is on the whole more significant than the differences between EnKF and ETKF, or between BM and SV separately. The general conclusion is that, if the quality of ensemble predictions is assessed by the degree to which the predicted ensembles statistically sample the uncertainty on the future state of the flow, the best initial ensembles are those that best statistically sample the uncertainty on the present state of the flow.
8

BERNUI, ARMANDO, THYRSO VILLELA, and IVAN FERREIRA. "ANALYSIS OF THE ANGULAR DISTRIBUTION OF COSMIC OBJECTS." International Journal of Modern Physics D 13, no. 07 (August 2004): 1189–95. http://dx.doi.org/10.1142/s0218271804005304.

Full text
Abstract:
We investigate a method that reveals anisotropies in the angular distribution of cosmic objects. In particular, we investigate ensembles with ~ 1000 objects, which is the interesting case of some astronomical catalogs. Considering test ensembles generated with a variable degree of anisotropy, we calculate the probability of any pair of objects to be separated by a given angular distance and compare this result with the same probability for a purely isotropic ensemble. We show that the use of sub-ensembles of the original full sky test ensemble, namely partial catalogs containing objects in the polar cap regions, can reveal, at any scale, possible angular correlations in the original full sky distribution. We also show the robustness of this method by comparing it with the Kolmogorov–Smirnov and χ2 statistical tests.
9

Yamaguchi, Munehiko, and Naohisa Koide. "Tropical Cyclone Genesis Guidance Using the Early Stage Dvorak Analysis and Global Ensembles." Weather and Forecasting 32, no. 6 (November 21, 2017): 2133–41. http://dx.doi.org/10.1175/waf-d-17-0056.1.

Full text
Abstract:
Abstract TC genesis guidance using the early stage Dvorak analysis technique (EDA) and global ensembles is investigated as one of the statistical–dynamical TC genesis guidance schemes. The EDA is a scheme that enables the analysis of tropical disturbances at earlier stages by adding T numbers of 0.0 and 0.5 to the conventional Dvorak technique. This unique analysis method has been in operation at JMA since 2001. The global ensembles used in this study are the ECMWF, JMA, NCEP, and UKMO ensembles covering from 2010 to 2013. First, probabilities that tropical disturbances analyzed with the EDA reach tropical storm intensity within a certain lead time up to 5 days are statistically investigated. For example, the probabilities that a tropical disturbance analyzed with T numbers of 0.0, 0.5, and 1.0 reaches tropical storm intensity within 2 days are 15%, 23%, and 57%, respectively. While the false alarm ratio (FAR) is found to decrease if the global ensembles simulate the tropical disturbance analyzed with the EDA in the models, it tends to decrease with the increasing number of such ensemble members. Also, it should be noted that the probability of detection (POD) decreases with the increasing number of such ensemble members. One of the potential uses of these verification results is that forecasters could issue TC genesis forecasts by counting ensemble members that successfully simulate a targeted tropical disturbance and then refer to the FAR and POD corresponding to the number of the ensemble members. These would provide some confidence information of the forecasts.
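For reference, the two verification scores discussed above are conventionally defined from a contingency table of forecast–observation pairs (standard definitions, not quoted from the paper):

$$\mathrm{FAR} = \frac{\text{false alarms}}{\text{hits} + \text{false alarms}}, \qquad \mathrm{POD} = \frac{\text{hits}}{\text{hits} + \text{misses}}.$$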
10

Velázquez, J. A., F. Anctil, and C. Perrin. "Performance and reliability of multimodel hydrological ensemble simulations based on seventeen lumped models and a thousand catchments." Hydrology and Earth System Sciences 14, no. 11 (November 18, 2010): 2303–17. http://dx.doi.org/10.5194/hess-14-2303-2010.

Full text
Abstract:
This work investigates the added value of ensembles constructed from seventeen lumped hydrological models against their simple average counterparts. It is thus hypothesized that there is more information provided by all the outputs of these models than by their single aggregated predictors. For all available 1061 catchments, results showed that the mean continuous ranked probability score of the ensemble simulations was better than the mean average error of the aggregated simulations, confirming the added value of retaining all the components of the model outputs. Reliability of the simulation ensembles is also achieved for about 30% of the catchments, as assessed by rank histograms and reliability plots. Despite this imperfection, the ensemble simulations were shown to have better skill than the deterministic simulations at discriminating between events and non-events, as confirmed by relative operating characteristic scores, especially for larger streamflows. From 7 to 10 models are deemed sufficient to construct ensembles with improved performance, based on a genetic algorithm search optimizing the continuous ranked probability score. In fact, many model subsets were found to improve the performance of the reference ensemble. It is thus not essential to implement as many as seventeen lumped hydrological models. The gain in performance of the optimized subsets is accompanied by some improvement of the ensemble reliability in most cases. Nonetheless, a calibration of the predictive distribution is still needed for many catchments.
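For context, the continuous ranked probability score referred to above is commonly estimated from an $m$-member ensemble $x_1,\dots,x_m$ and a verifying observation $y$ as (a standard estimator, not a formula quoted from the paper):

$$\mathrm{CRPS} \approx \frac{1}{m}\sum_{i=1}^{m}\lvert x_i - y\rvert \;-\; \frac{1}{2m^2}\sum_{i=1}^{m}\sum_{j=1}^{m}\lvert x_i - x_j\rvert.$$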
11

Velázquez, J. A., F. Anctil, and C. Perrin. "Performance and reliability of multimodel hydrological ensemble simulations based on seventeen lumped models and a thousand catchments." Hydrology and Earth System Sciences Discussions 7, no. 3 (June 29, 2010): 4023–58. http://dx.doi.org/10.5194/hessd-7-4023-2010.

Full text
Abstract:
This work investigates the added value of ensembles constructed from seventeen lumped hydrological models against their simple average counterparts. It is thus hypothesized that there is more information provided by all the outputs of these models than by their single aggregated predictors. For all available 1061 catchments, results showed that the mean continuous ranked probability score of the ensemble simulations was better than the mean average error of the aggregated simulations, confirming the added value of retaining all the components of the model outputs. Reliability of the simulation ensembles is also achieved for about 30% of the catchments, as assessed by rank histograms and reliability plots. Despite this imperfection, the ensemble simulations were shown to have better skill than the deterministic simulations at discriminating between events and non-events, as confirmed by relative operating characteristic scores, especially for larger streamflows. From 7 to 10 models are deemed sufficient to construct ensembles with improved performance, based on a genetic algorithm search optimizing the continuous ranked probability score. In fact, many model subsets were found to improve the performance of the reference ensemble. It is thus not essential to implement as many as seventeen lumped hydrological models. The gain in performance of the optimized subsets is accompanied by some improvement of the ensemble reliability in most cases. Nonetheless, a calibration of the predictive distribution is still needed for many catchments.
12

Fang, Di, Seung-Yeal Ha, and Shi Jin. "Emergent behaviors of the Cucker–Smale ensemble under attractive–repulsive couplings and Rayleigh frictions." Mathematical Models and Methods in Applied Sciences 29, no. 07 (June 24, 2019): 1349–85. http://dx.doi.org/10.1142/s0218202519500234.

Full text
Abstract:
In this paper, we revisit an interaction problem of two homogeneous Cucker–Smale (in short CS) ensembles with attractive–repulsive couplings, possibly under the effect of Rayleigh friction, and study three sufficient frameworks leading to bi-cluster flocking in which two sub-ensembles evolve to two clusters departing from each other. In the previous literature, the interaction problem has been studied in the context of attractive couplings. In our interaction problem, inter-ensemble and intra-ensemble couplings are assumed to be repulsive and attractive, respectively. When the Rayleigh frictional forces are turned on, we show that the total kinetic energy is uniformly bounded so that spatially mixed initial configurations evolve toward the bi-cluster configuration asymptotically fast under some suitable conditions on system parameters, communication weight functions and initial configurations. In contrast, when Rayleigh frictional forces are turned off, the flocking analysis is more delicate mainly due to the possibility of an exponential growth of the kinetic energy. In this case, we employ two mutually disjoint frameworks with constant inter-ensemble communication function and exponentially localized inter-ensemble communication functions, respectively, and prove the bi-clustering phenomenon in both cases. This work extends the previous work on the interaction problem of CS ensembles. We also conduct several numerical experiments and compare them with our theoretical results.
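For orientation, the baseline Cucker–Smale dynamics underlying the ensembles discussed above are usually written as (standard formulation; the attractive–repulsive couplings and Rayleigh friction analysed in the paper are modifications of this system):

$$\dot{x}_i = v_i, \qquad \dot{v}_i = \frac{\kappa}{N}\sum_{j=1}^{N}\phi\big(\lVert x_j - x_i\rVert\big)\,(v_j - v_i), \qquad i = 1,\dots,N,$$

where $\phi$ is a communication weight function and $\kappa > 0$ a coupling strength.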
13

Nehrkorn, Thomas, and Ross N. Hoffman. "Creating Pseudo–Forecast Ensembles Statistically Using a Characterization of Displacements: A Pilot Study." Journal of Applied Meteorology and Climatology 45, no. 11 (November 1, 2006): 1542–56. http://dx.doi.org/10.1175/jam2428.1.

Full text
Abstract:
Abstract A feature-based statistical method is investigated as a method of generating pseudoensembles of numerical weather prediction forecasts. The goal is to enhance or dress a single dynamical forecast or an ensemble of dynamical forecasts with many realistic perturbations so as to represent better the forecast uncertainty. The feature calibration and alignment method (FCA) is used to characterize forecast differences and to generate the additional ensemble members. FCA is unique in decomposing forecast errors or differences into phase, bias, and residual error or difference components. In a pilot study using 500-hPa geopotential height data, pseudoensembles of weather forecasts are generated from one deterministic forecast and perturbations obtained by randomly sampling FCA displacements based on a priori statistics and applying these displacements to the original deterministic forecast. Comparison with actual dynamical ensembles of 500-hPa geopotential height generated by ECMWF show that important features of the dynamical ensemble, such as the spatial patterns of the ensemble mean and variance, can be approximated by the FCA pseudoensemble. Ensemble verification statistics are presented for the dynamic and FCA ensemble and compared with those of simpler statistically based pseudoensembles. Some limitations of the FCA ensembles are noted, and mitigation approaches are discussed, with a view toward applying the method to mesoscale forecasts for dispersion modeling.
14

Duda, Jeffrey D., Xuguang Wang, and Ming Xue. "Sensitivity of Convection-Allowing Forecasts to Land Surface Model Perturbations and Implications for Ensemble Design." Monthly Weather Review 145, no. 5 (May 1, 2017): 2001–25. http://dx.doi.org/10.1175/mwr-d-16-0349.1.

Full text
Abstract:
Abstract In this exploratory study, a series of perturbations to the land surface model (LSM) component of the Weather Research and Forecasting (WRF) Model was developed to investigate the sensitivity of forecasts of severe thunderstorms and heavy precipitation at 4-km grid spacing and whether such perturbations could improve ensemble forecasts at this scale. The perturbations (generated using a combination of perturbing fixed parameters and using separate schemes, one of which—Noah-MP—is new among the WRF modeling community) were applied to a 10-member ensemble including other mixed physics parameterizations and compared against an identically configured ensemble that did not include the LSM perturbations to determine their impact on probabilistic forecasts. A third ensemble using only the LSM perturbations was also configured. The results from 14 (in total) 36-h ensemble forecasts suggested the LSM perturbations resulted in systematic improvement in ensemble dispersion and error characteristics. Lower-tropospheric temperature, moisture, and wind fields were all improved, as were probabilistic precipitation forecasts. Biases were not systematically altered, although some outlier members are present. Examination of near-surface temperature and mixing ratio fields, surface energy fluxes, and soil fields revealed tendencies caused by certain perturbations. A case study featuring tornadic supercells illustrated the physical causes of some of these tendencies. The results of this study suggest LSM perturbations can sample a dimension of model error not yet sampled systematically in most ensembles and should be included in convection-allowing ensembles.
15

Bröcker, Jochen, Stefan Siegert, and Holger Kantz. "Comments on “Conditional Exceedance Probabilities”." Monthly Weather Review 139, no. 10 (October 1, 2011): 3322–24. http://dx.doi.org/10.1175/2011mwr3658.1.

Full text
Abstract:
Abstract In a recent paper, Mason et al. propose a reliability test of ensemble forecasts for a continuous, scalar verification. As noted in the paper, the test relies on a very specific interpretation of ensembles, namely, that the ensemble members represent quantiles of some underlying distribution. This quantile interpretation is not the only interpretation of ensembles, another popular one being the Monte Carlo interpretation. Mason et al. suggest estimating the quantiles in this situation; however, this approach is fundamentally flawed. Errors in the quantile estimates are not independent of the exceedance events, and consequently the conditional exceedance probabilities (CEP) curves are not constant, which is a fundamental assumption of the test. The test would reject reliable forecasts with probability much higher than the test size.
16

Schwartz, Craig S., Zhiquan Liu, and Xiang-Yu Huang. "Sensitivity of Limited-Area Hybrid Variational-Ensemble Analyses and Forecasts to Ensemble Perturbation Resolution." Monthly Weather Review 143, no. 9 (August 31, 2015): 3454–77. http://dx.doi.org/10.1175/mwr-d-14-00259.1.

Full text
Abstract:
Abstract Dual-resolution (DR) hybrid variational-ensemble analysis capability was implemented within the community Weather Research and Forecasting (WRF) Model data assimilation (DA) system, which is designed for limited-area applications. The DR hybrid system combines a high-resolution (HR) background, flow-dependent background error covariances (BECs) derived from a low-resolution ensemble, and observations to produce a deterministic HR analysis. As DR systems do not require HR ensembles, they are computationally cheaper than single-resolution (SR) hybrid configurations, where the background and ensemble have equal resolutions. Single-observation tests were performed to document some characteristics of limited-area DR hybrid analyses. Additionally, the DR hybrid system was evaluated within a continuously cycling framework, where new DR hybrid analyses were produced every 6 h over ~3.5 weeks. In the DR configuration presented here, the deterministic backgrounds and analyses had 15-km horizontal grid spacing, but the 32-member WRF Model–based ensembles providing flow-dependent BECs for the hybrid had 45-km horizontal grid spacing. The DR hybrid analyses initialized 72-h WRF Model forecasts that were compared to forecasts initialized by an SR hybrid system where both the ensemble and background had 15-km horizontal grid spacing. The SR and DR hybrid systems were coupled to an ensemble adjustment Kalman filter that updated ensembles each DA cycle. On average, forecasts initialized from 15-km DR and SR hybrid analyses were not statistically significantly different, although tropical cyclone track forecast errors favored the SR-initialized forecasts. Although additional studies over longer time periods and at finer grid spacing are needed to further understand sensitivity to ensemble perturbation resolution, these results suggest users should carefully consider whether SR hybrid systems are worth the extra cost.
17

Plu, Matthieu, and Philippe Arbogast. "A Cyclogenesis Evolving into Two Distinct Scenarios and Its Implications for Short-Term Ensemble Forecasting." Monthly Weather Review 133, no. 7 (July 1, 2005): 2016–29. http://dx.doi.org/10.1175/mwr2955.1.

Full text
Abstract:
Abstract In a nonlinear quasigeostrophic model with uniform potential vorticity, an idealized initial state sharing some features with atmospheric low-predictability situations is built. Inspired by previous work on idealized cyclogenesis, two different cyclogenesis scenarios are obtained as a result of a small change of the initial location of one structure. This behavior is interpreted by analyzing the baroclinic interaction between upper- and lower-level anomalies. The error growth mechanism is nonlinear; it does not depend on the linear stability properties of the jet, which are the same in both evolutions. The ability of ensemble forecasts to capture these two possible evolutions is then assessed given some realistic error bounds in the knowledge of the initial conditions. First, a reference statistical distribution of each of the evolutions is obtained by means of a large Monte Carlo ensemble. Smaller ensembles with size representative of what is available in current operational implementations are then built and compared to the Monte Carlo reference: several singular-vector-based ensembles, a small Monte Carlo ensemble, and a “coherent structure”-based ensemble. This new technique relies on a sampling of the errors on the precursors of the cyclogenesis: amplitude and position errors. In this context, the precursors are handled as coherent structures that may be amplified or moved within realistic error bounds. It is shown that the singular vector ensemble fails to reproduce the bimodal distribution of the variability if the ensemble is not initially constrained, whereas it is accessible at a relatively low cost to the new coherent structures initialization.
18

Jurek, Anna, Yaxin Bi, Shengli Wu, and Chris Nugent. "A survey of commonly used ensemble-based classification techniques." Knowledge Engineering Review 29, no. 5 (May 3, 2013): 551–81. http://dx.doi.org/10.1017/s0269888913000155.

Full text
Abstract:
The combination of multiple classifiers, commonly referred to as a classifier ensemble, has previously demonstrated the ability to improve classification accuracy in many application domains. As a result, this area has attracted a significant amount of research in recent years. The aim of this paper has therefore been to provide a state-of-the-art review of the most well-known ensemble techniques, with the main focus on bagging, boosting and stacking, and to trace the recent attempts that have been made to improve their performance. Within this paper, we present and compare an updated view on the different modifications of these techniques, which have specifically aimed to address some of the drawbacks of these methods, namely the low-diversity problem in bagging or the overfitting problem in boosting. In addition, we provide a review of different ensemble selection methods based on both static and dynamic approaches. We present some new directions which have been adopted in the area of classifier ensembles from a range of recently published studies. In order to provide a deeper insight into the ensembles themselves, a range of existing theoretical studies has been reviewed in the paper.
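As a pointer for readers, the bagging and boosting families reviewed above can be instantiated in a few lines; the scikit-learn sketch below is purely illustrative and is not code from the survey.

```python
# Illustrative instantiations of two of the ensemble families surveyed here.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Bagging: each tree is trained on a bootstrap resample of the training data.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
# Boosting: examples are reweighted sequentially (AdaBoost over decision stumps).
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```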
19

AKHAND, M. A. H., MD MONIRUL ISLAM, and KAZUYUKI MURASE. "A COMPARATIVE STUDY OF DATA SAMPLING TECHNIQUES FOR CONSTRUCTING NEURAL NETWORK ENSEMBLES." International Journal of Neural Systems 19, no. 02 (April 2009): 67–89. http://dx.doi.org/10.1142/s0129065709001859.

Full text
Abstract:
Ensembles with several classifiers (such as neural networks or decision trees) are widely used to improve the generalization performance over a single classifier. Proper diversity among component classifiers is considered an important parameter for ensemble construction so that failure of one may be compensated by others. Among various approaches, data sampling, i.e., different data sets for different classifiers, is found more effective than other approaches. A number of ensemble methods have been proposed under the umbrella of data sampling in which some are constrained to neural networks or decision trees and others are commonly applicable to both types of classifiers. We studied prominent data sampling techniques for neural network ensembles, and then experimentally evaluated their effectiveness on a common test ground. Based on overlap and uncover, the relation between generalization and diversity is presented. Eight ensemble methods were tested on 30 benchmark classification problems. We found that bagging and boosting, the pioneer ensemble methods, are still better than most of the other proposed methods. However, negative correlation learning that implicitly encourages different networks to different training spaces is shown as better or at least comparable to bagging and boosting that explicitly create different training spaces.
20

Bédard, Joël, Mark Buehner, Jean-François Caron, Seung-Jong Baek, and Luc Fillion. "Practical Ensemble-Based Approaches to Estimate Atmospheric Background Error Covariances for Limited-Area Deterministic Data Assimilation." Monthly Weather Review 146, no. 11 (October 26, 2018): 3717–33. http://dx.doi.org/10.1175/mwr-d-18-0145.1.

Full text
Abstract:
Abstract High-resolution flow-dependent background error covariances can allow for a better usage of dense observation networks in applications of data assimilation for numerical weather prediction. The generation of high-resolution ensembles, however, can be computationally cost prohibitive. In this study, practical and low-cost ensemble generation methods are presented and compared against both global and regional ensemble Kalman filters (G-EnKF and R-EnKF, respectively). The goal is to provide limited-area deterministic assimilation schemes with higher-resolution flow-dependent background error covariances that perform at least as well as those from the G-EnKF when assimilating the same observations. The low-cost methods are based on short-range regional ensemble forecasts initialized from 1) deterministic analysis plus balanced perturbations (filter free approach) and 2) a simplified ensemble square root filter (S-EnSRF), centered on deterministic analyses. The resulting ensembles from the different approaches are used within a 4D ensemble–variational (4D-EnVar) assimilation system covering most of Canada and the northern United States. Diagnostic results show that the mean is an important component of the ensembles. Results also show that the persistence of the homogeneous characteristics of the perturbations in the filter free approach makes this method unsuited for short assimilation time windows since some error structures take longer to develop. The S-EnSRF approach overcomes this limitation by recycling part of the prior perturbations. Results from 1-month assimilation experiments show that the S-EnSRF and R-EnKF experiments provide forecasts of similar quality to those from G-EnKF. Furthermore, results from precipitation verification indicate that the R-EnKF experiment provides the best precipitation accumulation predictions over 24-h periods.
21

Hänggi, Peter, Stefan Hilbert, and Jörn Dunkel. "Meaning of temperature in different thermostatistical ensembles." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374, no. 2064 (March 28, 2016): 20150039. http://dx.doi.org/10.1098/rsta.2015.0039.

Full text
Abstract:
Depending on the exact experimental conditions, the thermodynamic properties of physical systems can be related to one or more thermostatistical ensembles. Here, we survey the notion of thermodynamic temperature in different statistical ensembles, focusing in particular on subtleties that arise when ensembles become non-equivalent. The ‘mother’ of all ensembles, the microcanonical ensemble, uses entropy and internal energy (the most fundamental, dynamically conserved quantity) to derive temperature as a secondary thermodynamic variable. Over the past century, some confusion has been caused by the fact that several competing microcanonical entropy definitions are used in the literature, most commonly the volume and surface entropies introduced by Gibbs. It can be proved, however, that only the volume entropy satisfies exactly the traditional form of the laws of thermodynamics for a broad class of physical systems, including all standard classical Hamiltonian systems, regardless of their size. This mathematically rigorous fact implies that negative ‘absolute’ temperatures and Carnot efficiencies more than 1 are not achievable within a standard thermodynamical framework. As an important offspring of microcanonical thermostatistics, we shall briefly consider the canonical ensemble and comment on the validity of the Boltzmann weight factor. We conclude by addressing open mathematical problems that arise for systems with discrete energy spectra.
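For reference, the competing microcanonical entropy definitions mentioned above are usually written as (standard textbook notation, not quoted from the paper):

$$S_{\mathrm{G}}(E) = k_{\mathrm{B}} \ln \Omega(E), \qquad S_{\mathrm{B}}(E) = k_{\mathrm{B}} \ln\!\big[\epsilon\,\omega(E)\big],$$

where $\Omega(E)$ is the phase-space volume enclosed by the energy surface $H=E$, $\omega(E)=\partial\Omega/\partial E$ is the density of states, $\epsilon$ is a constant with dimensions of energy, and the corresponding temperatures follow from $1/T = \partial S/\partial E$.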
22

Vukicevic, Tomislava, Isidora Jankov, and John McGinley. "Diagnosis and Optimization of Ensemble Forecasts." Monthly Weather Review 136, no. 3 (March 1, 2008): 1054–74. http://dx.doi.org/10.1175/2007mwr2153.1.

Full text
Abstract:
Abstract In the current study, a technique that offers a way to evaluate ensemble forecast uncertainties produced either by initial conditions or different model versions, or both, is presented. The technique consists of first diagnosing the performance of the forecast ensemble and then optimizing the ensemble forecast using results of the diagnosis. The technique is based on the explicit evaluation of probabilities that are associated with the Gaussian stochastic representation of the weather analysis and forecast. It combines an ensemble technique for evaluating the analysis error covariance and the standard Monte Carlo approach for computing samples from a known Gaussian distribution. The technique was demonstrated in a tutorial manner on two relatively simple examples to illustrate the impact of ensemble characteristics including ensemble size, various observation strategies, and configurations including different model versions and varying initial conditions. In addition, the authors assessed improvements in the consensus forecasts gained by optimal weighting of the ensemble members based on time-varying, prior-probabilistic skill measures. The results with different observation configurations indicate that, as observations become denser, there is a need for larger-sized ensembles and/or more accuracy among individual members for the ensemble forecast to exhibit prediction skill. The main conclusions relative to ensembles built up with different physics configurations were, first, that almost all members typically exhibited some skill at some point in the model run, suggesting that all should be retained to acquire the best consensus forecast; and, second, that the normalized probability metric can be used to determine what sets of weights or physics configurations are performing best. A comparison of forecasts derived from a simple ensemble mean to forecasts from a mean developed from variably weighting the ensemble members based on prior performance by the probabilistic measure showed that the latter had substantially reduced mean absolute error. The study also indicates that a weighting scheme that utilized more prior cycles showed additional reduction in forecast error.
23

Roebber, Paul J. "Using Evolutionary Programs to Maximize Minimum Temperature Forecast Skill." Monthly Weather Review 143, no. 5 (May 1, 2015): 1506–16. http://dx.doi.org/10.1175/mwr-d-14-00096.1.

Full text
Abstract:
Abstract Evolutionary program ensembles are developed and tested for minimum temperature forecasts at Chicago, Illinois, at forecast ranges of 36, 60, 84, 108, 132, and 156 h. For all forecast ranges examined, the evolutionary program ensemble outperforms the 21-member GFS model output statistics (MOS) ensemble when considering root-mean-square error and Brier skill score. The relative advantage in root-mean-square error widens with forecast range, from 0.18°F at 36 h to 1.53°F at 156 h while the probabilistic skill remains positive throughout. At all forecast ranges, probabilistic forecasts of abnormal conditions are particularly skillful compared to the raw GFS guidance. The evolutionary program reliance on particular forecast inputs is distinct from that obtained from considering multiple linear regression models, with less reliance on the GFS MOS temperature and more on alternative data such as upstream temperatures at the time of forecast issuance, time of year, and forecasts of wind speed, precipitation, and cloud cover. This weighting trends away from current observations and toward seasonal (climatological) measures as forecast range increases. Using two different forms of ensemble member subselection, a Bayesian model combination calibration is tested on both ensembles. This calibration had limited effect on evolutionary program ensemble skill but was able to improve MOS ensemble performance, reducing but not eliminating the skill gap between them. The largest skill differentials occurred at the longest forecast ranges, beginning at 132 h. A hybrid, calibrated ensemble was able to provide some further increase in skill.
24

Le Carrer, Noémie, and Peter L. Green. "A possibilistic interpretation of ensemble forecasts: experiments on the imperfect Lorenz 96 system." Advances in Science and Research 17 (June 3, 2020): 39–45. http://dx.doi.org/10.5194/asr-17-39-2020.

Full text
Abstract:
Abstract. Ensemble forecasting has gained popularity in the field of numerical medium-range weather prediction as a means of handling the limitations inherent to predicting the behaviour of high dimensional, nonlinear systems, that have high sensitivity to initial conditions. Through small strategical perturbations of the initial conditions, and in some cases, stochastic parameterization schemes of the atmosphere-ocean dynamical equations, ensemble forecasting allows one to sample possible future scenarii in a Monte-Carlo like approximation. Results are generally interpreted in a probabilistic way by building a predictive density function from the ensemble of weather forecasts. However, such a probabilistic interpretation is regularly criticized for not being reliable, because of the chaotic nature of the dynamics of the atmospheric system as well as the fact that the ensembles of forecasts are not, in reality, produced in a probabilistic manner. To address these limitations, we propose a novel approach: a possibilistic interpretation of ensemble predictions, taking inspiration from fuzzy and possibility theories. Our approach is tested on an imperfect version of the Lorenz 96 model and results are compared against those given by a standard probabilistic ensemble dressing. The possibilistic framework reproduces (ROC curve, resolution) or improves (ignorance, sharpness, reliability) the performance metrics of a standard univariate probabilistic framework. This work provides a first step to answer the question whether probability distributions are the right tool to interpret ensembles predictions.
25

Doi, Takeshi, Swadhin K. Behera, and Toshio Yamagata. "Merits of a 108-Member Ensemble System in ENSO and IOD Predictions." Journal of Climate 32, no. 3 (February 2019): 957–72. http://dx.doi.org/10.1175/jcli-d-18-0193.1.

Full text
Abstract:
This paper explores merits of 100-ensemble simulations from a single dynamical seasonal prediction system by evaluating differences in skill scores between ensemble predictions with few (~10) and many (~100) ensemble members. A 100-ensemble retrospective seasonal forecast experiment for 1983–2015 is beyond current operational capability. Prediction of extremely strong ENSO and Indian Ocean dipole (IOD) events is significantly improved in the larger ensemble. It indicates that the ensemble size of 10 members, used in some operational systems, is not adequate for capturing the occurrence of the 15% tails of extreme climate events, because only about 1 or 2 members (approximately 15% of 12) will agree with the observations. We also showed that an ensemble size of about 50 members may be adequate for extreme El Niño and positive IOD predictions, at least in the present prediction system. Even if running a large-ensemble prediction system is quite costly, improved prediction of disastrous extreme events is useful for minimizing risks of possible human and economic losses.
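The ensemble-size argument above is essentially an expected-count estimate: for an event whose tail probability is $p$, roughly $p \cdot m$ of $m$ well-calibrated members should fall in that tail, e.g.

$$0.15 \times 10 = 1.5 \ \text{members} \qquad \text{versus} \qquad 0.15 \times 100 = 15 \ \text{members},$$

so a ~10-member system leaves only one or two members signalling a 15% extreme, whereas a ~100-member system provides a usable sample (illustrative arithmetic based on the figures quoted in the abstract).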
26

SHARKEY, AMANDA J. C., and NOEL E. SHARKEY. "Combining diverse neural nets." Knowledge Engineering Review 12, no. 3 (September 1997): 231–47. http://dx.doi.org/10.1017/s0269888997003123.

Full text
Abstract:
An appropriate use of neural computing techniques is to apply them to problems such as condition monitoring, fault diagnosis, control and sensing, where conventional solutions can be hard to obtain. However, when neural computing techniques are used, it is important that they are employed so as to maximise their performance, and improve their reliability. Their performance is typically assessed in terms of their ability to generalise to a previously unseen test set, although unless the training set is very carefully chosen, 100% accuracy is rarely achieved. Improved performance can result when sets of neural nets are combined in ensembles and ensembles can be viewed as an example of the reliability through redundancy approach that is recommended for conventional software and hardware in safety-critical or safety-related applications. Although there has been recent interest in the use of neural net ensembles, such techniques have yet to be applied to the tasks of condition monitoring and fault diagnosis. In this paper, we focus on the benefits of techniques which promote diversity amongst the members of an ensemble, such that there is a minimum number of coincident failures. The concept of ensemble diversity is considered in some detail, and a hierarchy of four levels of diversity is presented. This hierarchy is then used in the description of the application of ensemble-based techniques to the case study of fault diagnosis of a diesel engine.
27

Kipling, Zak, Cristina Primo, and Andrew Charlton-Perez. "Spatiotemporal Behavior of the TIGGE Medium-Range Ensemble Forecasts." Monthly Weather Review 139, no. 8 (August 2011): 2561–71. http://dx.doi.org/10.1175/2010mwr3556.1.

Full text
Abstract:
Using the recently developed mean–variance of logarithms (MVL) diagram, together with The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) archive of medium-range ensemble forecasts from nine different centers, an analysis is presented of the spatiotemporal dynamics of their perturbations, showing how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. In particular, a divide is seen between ensembles based on singular vectors or empirical orthogonal functions, and those based on bred vector, ensemble transform with rescaling, or ensemble Kalman filter techniques. Consideration is also given to the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than others. Finally, the use of the MVL technique to assist in selecting models for inclusion in a multimodel ensemble is discussed, and an experiment suggested to test its potential in this context.
28

Sekiyama, Tsuyoshi Thomas, Mizuo Kajino, and Masaru Kunii. "Ensemble Dispersion Simulation of a Point-Source Radioactive Aerosol Using Perturbed Meteorological Fields over Eastern Japan." Atmosphere 12, no. 6 (May 22, 2021): 662. http://dx.doi.org/10.3390/atmos12060662.

Full text
Abstract:
We conducted single-model initial-perturbed ensemble simulations to quantify uncertainty in aerosol dispersion modeling, focusing on a point-source radioactive aerosol emitted from the Fukushima Daiichi Nuclear Power Plant (FDNPP) in March 2011. The ensembles of the meteorological variables were prepared using a data assimilation system that consisted of a non-hydrostatic weather-forecast model with a 3-km horizontal resolution and a four-dimensional local ensemble transform Kalman filter (4D-LETKF) with 20 ensemble members. The emission of radioactive aerosol was not perturbed. The weather and aerosol simulations were validated with in-situ measurements at Hitachi and Tokai, respectively, approximately 100 km south of the FDNPP. The ensemble simulations provided probabilistic information and multiple case scenarios for the radioactive aerosol plumes. Some of the ensemble members successfully reproduced the arrival time and intensity of the radioactive aerosol plumes, even when the deterministic simulation failed to reproduce them. We found that a small ensemble spread of wind speed produced large uncertainties in aerosol concentrations.
29

MAITRA, ARPITA, and PREETI PARASHAR. "HADAMARD TYPE OPERATIONS FOR QUBITS." International Journal of Quantum Information 04, no. 04 (August 2006): 653–64. http://dx.doi.org/10.1142/s0219749906002055.

Full text
Abstract:
We obtain the most general ensemble of qubits for which it is possible to design a universal Hadamard gate. These states, when geometrically represented on the Bloch sphere, give a new trajectory. We further consider some Hadamard "type" operations and find ensembles of states for which such transformations hold. The unequal superposition of a qubit and its orthogonal complement is also investigated.
30

Setiyowati, Eka, Agus Rusgiyono, and Tarno Tarno. "MODEL KOMBINASI ARIMA DALAM PERAMALAN HARGA MINYAK MENTAH DUNIA." Jurnal Gaussian 7, no. 1 (February 28, 2018): 54–63. http://dx.doi.org/10.14710/j.gauss.v7i1.26635.

Full text
Abstract:
Oil is the most important commodity in everyday life, because oil is one of the main sources of energy that people need. Changes in crude oil prices greatly affect the economic conditions of a country. Therefore, the aim of this study is to develop an appropriate model for forecasting the crude oil price based on ARIMA and its ensembles. In this study, the ensemble method uses several ARIMA models to create ensemble members, which are then combined with averaging and stacking techniques. The data used are world crude oil prices for the period 2003-2017. The results show that the ARIMA(1,1,0) model produces the smallest RMSE values for forecasting the next thirty-six months. Keywords: Ensemble, ARIMA, Averaging, Stacking, Crude Oil Price
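A minimal sketch of the averaging variant described above is given below, using statsmodels; the synthetic price series, the candidate ARIMA orders and the equal-weight combination are illustrative assumptions, not the authors' configuration.

```python
# Illustrative only: combine forecasts from several ARIMA models by averaging,
# in the spirit of the ensemble ARIMA approach described in this abstract.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly crude-oil price series; replace with real data.
rng = np.random.default_rng(0)
prices = pd.Series(60 + np.cumsum(rng.normal(0, 2, 180)),
                   index=pd.date_range("2003-01-01", periods=180, freq="MS"))

orders = [(1, 1, 0), (0, 1, 1), (1, 1, 1)]   # candidate ARIMA orders (assumed)
horizon = 36                                 # forecast the next 36 months

forecasts = [ARIMA(prices, order=o).fit().forecast(steps=horizon) for o in orders]
ensemble_mean = pd.concat(forecasts, axis=1).mean(axis=1)  # simple averaging
print(ensemble_mean.head())
```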
31

Baker, L. H., A. C. Rudd, S. Migliorini, and R. N. Bannister. "Representation of model error in a convective-scale ensemble prediction system." Nonlinear Processes in Geophysics 21, no. 1 (January 8, 2014): 19–39. http://dx.doi.org/10.5194/npg-21-19-2014.

Full text
Abstract:
Abstract. In this paper ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office's 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model's parameterisation schemes, has been modified and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters are different between members, but fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured for any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with fixed parameters in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits "jumpiness" in the spread at times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity. For precipitation skill scores, adding model error variability has the effect of improving the skill in the first 1–2 h of the forecast, but then of reducing the skill after that. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.
32

Tama, Bayu Adhi, Sun Im, and Seungchul Lee. "Improving an Intelligent Detection System for Coronary Heart Disease Using a Two-Tier Classifier Ensemble." BioMed Research International 2020 (April 27, 2020): 1–10. http://dx.doi.org/10.1155/2020/9816142.

Full text
Abstract:
Coronary heart disease (CHD) is one of the severe health issues and is one of the most common types of heart diseases. It is the most frequent cause of mortality across the globe due to the lack of a healthy lifestyle. Owing to the fact that a heart attack occurs without any apparent symptoms, an intelligent detection method is inescapable. In this article, a new CHD detection method based on a machine learning technique, e.g., classifier ensembles, is dealt with. A two-tier ensemble is built, where some ensemble classifiers are exploited as base classifiers of another ensemble. A stacked architecture is designed to blend the class label prediction of three ensemble learners, i.e., random forest, gradient boosting machine, and extreme gradient boosting. The detection model is evaluated on multiple heart disease datasets, i.e., Z-Alizadeh Sani, Statlog, Cleveland, and Hungarian, corroborating the generalisability of the proposed model. A particle swarm optimization-based feature selection is carried out to choose the most significant feature set for each dataset. Finally, a two-fold statistical test is adopted to justify the hypothesis, demonstrating that the performance differences of classifiers do not rely upon an assumption. Our proposed method outperforms any base classifiers in the ensemble with respect to 10-fold cross validation. Our detection model has performed better than current existing models based on traditional classifier ensembles and individual classifiers in terms of accuracy, F1, and AUC. This study demonstrates that our proposed model adds a considerable contribution compared to the prior published studies in the current literature.
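A minimal sketch of the two-tier, stacked architecture described above is shown below; scikit-learn's HistGradientBoostingClassifier stands in for the extreme gradient boosting learner (normally provided by the separate xgboost package), and the dataset, hyperparameters and meta-learner are illustrative assumptions, not the authors' setup.

```python
# Illustrative two-tier ensemble: three ensemble learners are themselves
# blended by a stacking meta-learner, echoing the architecture in this paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (GradientBoostingClassifier,
                              HistGradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

two_tier = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
        ("xgb_like", HistGradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print("10-fold CV accuracy:", cross_val_score(two_tier, X, y, cv=10).mean())
```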
33

Erdős, P., J. L. Nicolas, and A. Sárközy. "Sommes de sous-ensembles." Journal de Théorie des Nombres de Bordeaux 3, no. 1 (1991): 55–72. http://dx.doi.org/10.5802/jtnb.42.

Full text
34

Duda, Jeffrey D., Xuguang Wang, Fanyou Kong, Ming Xue, and Judith Berner. "Impact of a Stochastic Kinetic Energy Backscatter Scheme on Warm Season Convection-Allowing Ensemble Forecasts." Monthly Weather Review 144, no. 5 (May 2016): 1887–908. http://dx.doi.org/10.1175/mwr-d-15-0092.1.

Full text
Abstract:
The efficacy of a stochastic kinetic energy backscatter (SKEB) scheme to improve convection-allowing probabilistic forecasts was studied. While SKEB has been explored for coarse, convection-parameterizing models, studies of SKEB for convective scales are limited. Three ensembles were compared. The SKMP ensemble used mixed physics with the SKEB scheme, whereas the MP ensemble was configured identically but without using the SKEB scheme. The SK ensemble used the SKEB scheme with no physics diversity. The experiment covered May 2013 over the central United States on a 4-km Weather Research and Forecasting (WRF) Model domain. The SKEB scheme was successful in increasing the spread in all fields verified, especially mid- and upper-tropospheric fields. Additionally, the rmse of the ensemble mean was maintained or reduced, in some cases significantly. Rank histograms in the SKMP ensemble were flatter than those in the MP ensemble, indicating the SKEB scheme produces a less underdispersive forecast distribution. Some improvement was seen in probabilistic precipitation forecasts, particularly when examining Brier scores. Verification against surface observations agree with verification against Rapid Refresh (RAP) model analyses, showing that probabilistic forecasts for 2-m temperature, 2-m dewpoint, and 10-m winds were also improved using the SKEB scheme. The SK ensemble gave competitive forecasts for some fields. The SK ensemble had reduced spread compared to the MP ensemble at the surface due to the lack of physics diversity. These results suggest the potential utility of mixed physics plus the SKEB scheme in the design of convection-allowing ensemble forecasts.
APA, Harvard, Vancouver, ISO, and other styles
35

Gallus, William A. "Application of Object-Based Verification Techniques to Ensemble Precipitation Forecasts." Weather and Forecasting 25, no. 1 (February 1, 2010): 144–58. http://dx.doi.org/10.1175/2009waf2222274.1.

Full text
Abstract:
Abstract Both the Method for Object-based Diagnostic Evaluation (MODE) and contiguous rain area (CRA) object-based verification techniques have been used to analyze precipitation forecasts from two sets of ensembles to determine if spread-skill behavior observed using traditional measures can be seen in the object parameters. One set consisted of two eight-member Weather Research and Forecasting (WRF) model ensembles: one having mixed physics and dynamics with unperturbed initial and lateral boundary conditions (Phys) and another using common physics and a dynamic core but with perturbed initial and lateral boundary conditions (IC/LBC). Traditional measures found that spread grows much faster in IC/LBC than in Phys so that after roughly 24 h, better skill and spread are found in IC/LBC. These measures also reflected a strong diurnal signal of precipitation. The other set of ensembles included five members of a 4-km grid-spacing WRF ensemble (ENS4) and five members of a 20-km WRF ensemble (ENS20). Traditional measures suggested that the diurnal signal was better in ENS4 and spread increased more rapidly than in ENS20. Standard deviations (SDs) of four object parameters computed for the first set of ensembles using MODE and CRA showed the trend of enhanced spread growth in IC/LBC compared to Phys that had been observed in traditional measures, with the areal coverage of precipitation exhibiting the greatest growth in spread with time. The two techniques did not produce identical results, although they did show the same general trends. A diurnal signal could be seen in the SDs of all parameters, especially rain rate, volume, and areal coverage. MODE results also found evidence of a diurnal signal and faster growth of spread in object parameters in ENS4 than in ENS20. Some forecasting approaches based on MODE and CRA output are also demonstrated. Forecasts based on averages of object parameters from each ensemble member were more skillful than forecasts based on MODE or CRA applied to an ensemble mean computed using the probability matching technique for areal coverage and volume, but differences in the two techniques were less pronounced for rain rate and displacement. The use of a probability threshold to define objects was also shown to be a valid forecasting approach with MODE.
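The core of object-based verification can be illustrated with a short sketch. This is not the actual MODE or CRA software: it simply thresholds a synthetic rain field, labels contiguous objects, and collects simple object attributes (areal coverage, mean rain rate) of the kind whose ensemble spread is analysed above. The threshold choice is an assumption.

```python
# Toy object identification and attribute extraction for a precipitation field.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
rain = ndimage.gaussian_filter(rng.gamma(0.3, 4.0, size=(200, 200)), sigma=5)

thresh = np.quantile(rain, 0.9)                    # assumed rain-object threshold
mask = rain > thresh
labels, n_objects = ndimage.label(mask)            # contiguous rain objects
idx = range(1, n_objects + 1)
areas = ndimage.sum(mask, labels, index=idx)       # areal coverage per object (grid cells)
mean_rates = ndimage.mean(rain, labels, index=idx) # mean rain rate per object
print(n_objects, areas[:5], np.round(mean_rates[:5], 2))
```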
APA, Harvard, Vancouver, ISO, and other styles
36

Castruccio, Stefano, Ziqing Hu, Benjamin Sanderson, Alicia Karspeck, and Dorit Hammerling. "Reproducing Internal Variability with Few Ensemble Runs." Journal of Climate 32, no. 24 (November 18, 2019): 8511–22. http://dx.doi.org/10.1175/jcli-d-19-0280.1.

Full text
Abstract:
Abstract While large climate model ensembles are invaluable tools for physically consistent climate prediction, they also present a large burden in terms of computational resources and storage requirements. A complementary approach to large initial-condition ensembles is to train a stochastic generator on fewer runs. While simulations from a statistical model cannot capture the complexity of climate model runs, they can address some specific scientific questions of interest, such as sampling the variability of regional trends. We demonstrate this potential by comparing simulations from a large ensemble and a stochastic generator trained with only four runs, and show that the variability of regional temperature trends is almost indistinguishable. Training stochastic generators on fewer runs might prove especially useful in the context of large climate model intercomparison projects where creating large ensembles for each model is not possible.
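The idea of a stochastic generator trained on a handful of runs can be sketched very simply; the paper's statistical model is considerably more sophisticated. Below, the forced response is estimated as the mean of four toy training members, an AR(1) model is fitted to the residual internal variability, and additional members are emulated as forced response plus simulated AR(1) noise.

```python
# Toy stochastic generator: forced response + AR(1) internal variability.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(150)
forced = 0.015 * t                                               # "true" forced trend
train = forced + 0.2 * np.array(
    [np.convolve(rng.normal(size=200), np.ones(5) / 5, "same")[:150] for _ in range(4)]
)                                                                # 4 toy training runs

forced_hat = train.mean(axis=0)                                  # estimated forced response
resid = train - forced_hat
phi = np.mean([np.corrcoef(r[:-1], r[1:])[0, 1] for r in resid]) # lag-1 autocorrelation
sigma = resid.std() * np.sqrt(1 - phi**2)                        # AR(1) innovation std

def emulate(n_members=30):
    out = np.zeros((n_members, t.size))
    for m in range(n_members):
        x = 0.0
        for i in range(t.size):
            x = phi * x + rng.normal(0, sigma)                   # AR(1) internal variability
            out[m, i] = forced_hat[i] + x
    return out

print(round(float(emulate().std(axis=0).mean()), 3))             # spread of emulated ensemble
```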
APA, Harvard, Vancouver, ISO, and other styles
37

ZHOU, ZHI-HUA, JIAN-XIN WU, WEI TANG, and ZHAO-QIAN CHEN. "COMBINING REGRESSION ESTIMATORS: GA-BASED SELECTIVE NEURAL NETWORK ENSEMBLE." International Journal of Computational Intelligence and Applications 01, no. 04 (December 2001): 341–56. http://dx.doi.org/10.1142/s1469026801000287.

Full text
Abstract:
Neural network ensemble is a learning paradigm where a collection of neural networks is trained for the same task. In this paper, the relationship between the generalization ability of the neural network ensemble and the correlation of the individual neural networks constituting the ensemble is analyzed in the context of combining neural regression estimators, which reveals that ensembling a selective subset of trained networks is superior to ensembling all the trained networks in some cases. Based on such recognition, an approach named GASEN is proposed. GASEN trains a number of individual neural networks at first. Then it assigns random weights to the individual networks and employs a genetic algorithm to evolve those weights so that they can characterize to some extent the importance of the individual networks in constituting an ensemble. Finally it selects an optimum subset of individual networks based on the evolved weights to make up the ensemble. Experimental results show that, comparing with a popular ensemble approach, i.e., averaging all, and a theoretically optimum selective ensemble approach, i.e. enumerating, GASEN has preferable performance in generating ensembles with strong generalization ability in relatively small computational cost. This paper also analyzes the working mechanism of GASEN from the view of error-ambiguity decomposition, which reveals that GASEN improves generalization ability mainly through reducing the average generalization error of the individual neural networks constituting the ensemble.
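A toy illustration of GA-based selective ensembling in the spirit of GASEN (not the original code) is given below for regression: weights over pre-trained predictors are evolved with a simple genetic algorithm, and only predictors whose evolved weight is at least the average weight are kept and averaged. The polynomial "networks", GA settings, and threshold are illustrative assumptions.

```python
# Toy GA-based selective ensemble for regression.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)

# Stand-ins for trained networks: degree-5 polynomial fits on bootstrap samples.
predictors = []
for _ in range(20):
    idx = rng.integers(0, 400, 400)
    predictors.append(np.poly1d(np.polyfit(X[idx, 0], y[idx], deg=5)))
preds = np.array([p(X[:, 0]) for p in predictors])        # (n_models, n_samples)

def fitness(w):
    w = np.abs(w) / np.abs(w).sum()
    return -np.mean((w @ preds - y) ** 2)                  # negative weighted-ensemble MSE

pop = rng.random((30, 20))                                 # population of weight vectors
for _ in range(50):                                        # generations
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-15:]]                   # truncation selection
    children = 0.5 * (parents[rng.integers(0, 15, 15)] + parents[rng.integers(0, 15, 15)])
    children += 0.05 * rng.normal(size=children.shape)     # mutation
    pop = np.vstack([parents, np.abs(children)])

best = pop[np.argmax([fitness(w) for w in pop])]
selected = np.where(best / best.sum() >= 1.0 / 20)[0]      # keep above-average weights
ensemble_pred = preds[selected].mean(axis=0)
print(len(selected), round(float(np.mean((ensemble_pred - y) ** 2)), 4))
```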
APA, Harvard, Vancouver, ISO, and other styles
38

Mirzargar, Mahsa, and Jeffrey L. Anderson. "On Evaluation of Ensemble Forecast Calibration Using the Concept of Data Depth." Monthly Weather Review 145, no. 5 (April 10, 2017): 1679–90. http://dx.doi.org/10.1175/mwr-d-16-0351.1.

Full text
Abstract:
Abstract Various generalizations of the univariate rank histogram have been proposed to inspect the reliability of an ensemble forecast or analysis in multidimensional spaces. Multivariate rank histograms provide insightful information about the misspecification of genuinely multivariate features such as the correlation between various variables in a multivariate ensemble. However, the interpretation of patterns in a multivariate rank histogram should be handled with care. The purpose of this paper is to focus on multivariate rank histograms designed based on the concept of data depth and outline some important considerations that should be accounted for when using such multivariate rank histograms. To generate correct multivariate rank histograms using the concept of data depth, the datatype of the ensemble should be taken into account to define a proper preranking function. This paper demonstrates how and why some preranking functions might not be suitable for multivariate or vector-valued ensembles and proposes preranking functions based on the concept of simplicial depth that are applicable to both multivariate points and vector-valued ensembles. In addition, there exists an inherent identifiability issue associated with center-outward preranking functions used to generate multivariate rank histograms. This problem can be alleviated by complementing the multivariate rank histogram with other well-known multivariate statistical inference tools based on rank statistics such as the depth-versus-depth (DD) plot. Using a synthetic example, it is shown that the DD plot is less sensitive to sample size compared to multivariate rank histograms.
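A simplified sketch of a depth-based multivariate rank histogram follows. It uses a Mahalanobis-style center-outward ordering rather than the simplicial depth discussed in the paper, and synthetic data: for each case the observation and members are preranked together, and the observation's prerank is histogrammed.

```python
# Toy depth-based multivariate rank histogram (Mahalanobis-style depth).
import numpy as np

rng = np.random.default_rng(4)
n_cases, n_members, dim = 3000, 10, 2
ens = rng.multivariate_normal(np.zeros(dim), np.eye(dim), size=(n_cases, n_members))
obs = rng.multivariate_normal(np.zeros(dim), 1.5 * np.eye(dim), size=n_cases)

ranks = np.empty(n_cases, dtype=int)
for i in range(n_cases):
    pooled = np.vstack([ens[i], obs[i]])                   # members + observation
    mu, cov = pooled.mean(axis=0), np.cov(pooled.T)
    d = np.einsum("ij,jk,ik->i", pooled - mu, np.linalg.inv(cov), pooled - mu)
    depth = 1.0 / (1.0 + d)                                # larger = more central
    ranks[i] = (depth[:-1] < depth[-1]).sum()              # prerank of the observation
hist = np.bincount(ranks, minlength=n_members + 1) / n_cases
print(np.round(hist, 3))                                   # skewed histogram -> miscalibration
```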
APA, Harvard, Vancouver, ISO, and other styles
39

Glahn, Bob, Matthew Peroutka, Jerry Wiedenfeld, John Wagner, Greg Zylstra, Bryan Schuknecht, and Bryan Jackson. "MOS Uncertainty Estimates in an Ensemble Framework." Monthly Weather Review 137, no. 1 (January 1, 2009): 246–68. http://dx.doi.org/10.1175/2008mwr2569.1.

Full text
Abstract:
Abstract It is being increasingly recognized that the uncertainty in weather forecasts should be quantified and furnished to users along with the single-value forecasts usually provided. Probabilistic forecasts of “events” have been made in special cases; for instance, probabilistic forecasts of the event defined as 0.01 in. or more of precipitation at a point over a specified time period [i.e., the probability of precipitation (PoP)] have been disseminated to the public by the Weather Bureau/National Weather Service since 1966. Within the past decade, ensembles of operational numerical weather prediction models have been produced and used to some degree to provide probabilistic estimates of events easily dealt with, such as the occurrence of specific amounts of precipitation. In most such applications, the number of ensembles restricts this “enumeration” method, and the ensembles are characteristically underdispersive. However, fewer attempts have been made to provide a probability density function (PDF) or cumulative distribution function (CDF) for a continuous variable. The Meteorological Development Laboratory (MDL) has used the error estimation capabilities of the linear regression framework and kernel density fitting applied to individual and aggregate ensemble members of the Global Ensemble Forecast System of the National Centers for Environmental Prediction to develop PDFs and CDFs. This paper describes the method and results for temperature, dewpoint, daytime maximum temperature, and nighttime minimum temperature. The method produces reliable forecasts with accuracy exceeding the raw ensembles. Points on the CDF for 1650 stations have been mapped to the National Digital Forecast Database 5-km grid and an example is provided.
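In the spirit of the regression-plus-density approach described above (not the MDL implementation), the sketch below regresses observations on the ensemble mean, applies the fitted correction to one new set of member forecasts, and fits a kernel density to obtain a predictive PDF/CDF. All numbers are synthetic stand-ins.

```python
# Toy MOS-style predictive distribution from an ensemble.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
n_train, n_members = 500, 20
truth = rng.normal(15, 8, n_train)
members = truth[:, None] + rng.normal(1.0, 2.5, (n_train, n_members))  # biased ensemble

ens_mean = members.mean(axis=1)
a, b = np.polyfit(ens_mean, truth, deg=1)          # simple MOS regression: truth ~ a*mean + b

new_members = 20 + rng.normal(1.0, 2.5, n_members) # one new forecast case
corrected = a * new_members + b
kde = gaussian_kde(corrected)                      # kernel density over corrected members
grid = np.linspace(corrected.min() - 5, corrected.max() + 5, 200)
pdf = kde(grid)
cdf = np.cumsum(pdf); cdf /= cdf[-1]
print(round(float(grid[np.searchsorted(cdf, 0.5)]), 2))   # median of predictive distribution
```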
APA, Harvard, Vancouver, ISO, and other styles
40

Flournoy, Matthew D., Michael C. Coniglio, Erik N. Rasmussen, Jason C. Furtado, and Brice E. Coffer. "Modes of Storm-Scale Variability and Tornado Potential in VORTEX2 Near- and Far-Field Tornadic Environments." Monthly Weather Review 148, no. 10 (October 1, 2020): 4185–207. http://dx.doi.org/10.1175/mwr-d-20-0147.1.

Full text
Abstract:
Abstract Some supercellular tornado outbreaks are composed almost entirely of tornadic supercells, while most consist of both tornadic and nontornadic supercells sometimes in close proximity to each other. These differences are related to a balance between larger-scale environmental influences on storm development as well as more chaotic, internal evolution. For example, some environments may be potent enough to support tornadic supercells even if less predictable intrastorm characteristics are suboptimal for tornadogenesis, while less potent environments are supportive of tornadic supercells given optimal intrastorm characteristics. This study addresses the sensitivity of tornadogenesis to both environmental characteristics and storm-scale features using a cloud modeling approach. Two high-resolution ensembles of simulated supercells are produced in the near- and far-field environments observed in the inflow of tornadic supercells during the second Verification of the Origins of Rotation in Tornadoes Experiment (VORTEX2). All simulated supercells evolving in the near-field environment produce a tornado, and 33% of supercells evolving in the far-field environment produce a tornado. Composite differences between the two ensembles are shown to address storm-scale characteristics and processes impacting the volatility of tornadogenesis. Storm-scale variability in the ensembles is illustrated using empirical orthogonal function analysis, revealing storm-generated boundaries that may be linked to the volatility of tornadogenesis. Updrafts in the near-field ensemble are markedly stronger than those in the far-field ensemble during the time period in which the ensembles most differ in terms of tornado production. These results suggest that storm-environment modifications can influence the volatility of supercellular tornadogenesis.
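The empirical orthogonal function (EOF) analysis mentioned above has a compact generic form: EOFs are the singular vectors of the member-anomaly matrix. The sketch below uses random synthetic fields purely to show the mechanics, not the study's data.

```python
# Generic EOF analysis of ensemble-member fields via SVD (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
n_members, ny, nx = 30, 40, 40
fields = rng.normal(size=(n_members, ny * nx))     # e.g. flattened low-level fields per member
anom = fields - fields.mean(axis=0)                # anomalies about the ensemble mean

u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)                    # fraction of variance per EOF
eof1 = vt[0].reshape(ny, nx)                       # leading spatial pattern
pcs = anom @ vt[:3].T                              # principal components per member
print(np.round(explained[:3], 3), pcs.shape)
```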
APA, Harvard, Vancouver, ISO, and other styles
41

Allaire, Frédéric, Jean-Baptiste Filippi, and Vivien Mallet. "Generation and evaluation of an ensemble of wildland fire simulations." International Journal of Wildland Fire 29, no. 2 (2020): 160. http://dx.doi.org/10.1071/wf19073.

Full text
Abstract:
Numerical simulations of wildfire spread can provide support in deciding firefighting actions but their predictive performance is challenged by the uncertainty of model inputs stemming from weather forecasts, fuel parameterisation and other fire characteristics. In this study, we assign probability distributions to the inputs and propagate the uncertainty by running hundreds of Monte Carlo simulations. The ensemble of simulations is summarised via a burn probability map whose evaluation based on the corresponding observed burned surface is not obvious. We define several properties and introduce probabilistic scores that are common in meteorological applications. Based on these elements, we evaluate the predictive performance of our ensembles for seven fires that occurred in Corsica from mid-2017 to early 2018. We obtain fair performance in some of the cases but accuracy and reliability of the forecasts can be improved. The ensemble generation can be accomplished in a reasonable amount of time and could be used in an operational context provided that sufficient computational resources are available. The proposed probabilistic scores are also appropriate in a calibration process to improve the ensembles.
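The summary step described above (turning an ensemble of simulations into a burn-probability map and scoring it probabilistically) can be sketched as follows, with random masks standing in for actual fire-spread simulations and the observed burned surface, and the Brier score as one example of the probabilistic scores mentioned.

```python
# Toy burn-probability map and Brier score from Monte Carlo fire simulations.
import numpy as np

rng = np.random.default_rng(7)
n_sims, ny, nx = 200, 100, 100
burned_sims = rng.random((n_sims, ny, nx)) < 0.2   # toy burned/unburned mask per simulation
burn_prob = burned_sims.mean(axis=0)               # ensemble burn-probability map

obs_burned = rng.random((ny, nx)) < 0.2            # toy observed burned surface
brier = np.mean((burn_prob - obs_burned) ** 2)     # Brier score of the probability map
print(round(float(brier), 4))
```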
APA, Harvard, Vancouver, ISO, and other styles
42

Younas, Waqar, and Youmin Tang. "PNA Predictability at Various Time Scales." Journal of Climate 26, no. 22 (October 29, 2013): 9090–114. http://dx.doi.org/10.1175/jcli-d-12-00609.1.

Full text
Abstract:
Abstract In this study, the predictability of the Pacific–North American (PNA) pattern is evaluated on time scales from days to months using state-of-the-art dynamical multiple-model ensembles including the Canadian Historical Forecast Project (HFP2) ensemble, the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) ensemble, and the Ensemble-Based Predictions of Climate Changes and their Impacts (ENSEMBLES). Some interesting findings in this study include (i) multiple-model ensemble (MME) skill was better than most of the individual models; (ii) both actual prediction skill and potential predictability increased as the averaging time scale increased from days to months; (iii) there is no significant difference in actual skill between coupled and uncoupled models, in contrast with the potential predictability where coupled models performed better than uncoupled models; (iv) relative entropy (REA) is an effective measure in characterizing the potential predictability of individual prediction, whereas the mutual information (MI) is a reliable indicator of overall prediction skill; and (v) compared with conventional potential predictability measures of the signal-to-noise ratio, the MI-based measures characterized more potential predictability when the ensemble spread varied over initial conditions. Further analysis found that the signal component dominated the dispersion component in REA for PNA potential predictability from days to seasons. Also, the PNA predictability is highly related to the signal of the tropical sea surface temperature (SST), and SST–PNA correlation patterns resemble the typical ENSO structure, suggesting that ENSO is the main source of PNA seasonal predictability. The predictable component analysis (PrCA) of atmospheric variability further confirmed the above conclusion; that is, PNA is one of the most predictable patterns in the climate variability over the Northern Hemisphere, which originates mainly from the ENSO forcing.
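One common Gaussian form of the relative entropy used as a potential predictability measure is sketched below; it is not necessarily the exact estimator used in the study. It compares the mean and variance of a forecast ensemble with the climatological distribution of, say, a PNA index.

```python
# Gaussian relative entropy between a forecast ensemble and climatology.
import numpy as np

def gaussian_relative_entropy(mu_f, var_f, mu_c, var_c):
    """RE = 0.5 * [ln(var_c/var_f) + var_f/var_c + (mu_f - mu_c)^2/var_c - 1]."""
    return 0.5 * (np.log(var_c / var_f) + var_f / var_c + (mu_f - mu_c) ** 2 / var_c - 1.0)

rng = np.random.default_rng(8)
clim = rng.normal(0.0, 1.0, 10000)                 # climatological index values
ens = rng.normal(0.8, 0.6, 50)                     # one forecast ensemble (shifted, sharpened)
re = gaussian_relative_entropy(ens.mean(), ens.var(ddof=1), clim.mean(), clim.var(ddof=1))
print(round(float(re), 3))                         # larger RE -> more potential predictability
```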
APA, Harvard, Vancouver, ISO, and other styles
43

Annan, J. D., and J. C. Hargreaves. "Understanding the CMIP3 Multimodel Ensemble." Journal of Climate 24, no. 16 (August 15, 2011): 4529–38. http://dx.doi.org/10.1175/2011jcli3873.1.

Full text
Abstract:
Abstract The Coupled Model Intercomparison Project phase 3 (CMIP3) multimodel ensemble has been widely utilized for climate research and prediction, but the properties and behavior of the ensemble are not yet fully understood. Here, some investigations are undertaken into various aspects of the ensemble’s behavior, in particular focusing on the performance of the multimodel mean. This study presents an explanation of this phenomenon in the context of the statistically indistinguishable paradigm and also provides a quantitative analysis of the main factors that control how likely the mean is to outperform the models in the ensemble, both individually and collectively. The analyses lend further support to the usage of the paradigm of a statistically indistinguishable ensemble and indicate that the current ensemble size is too small to adequately sample the space from which the models are drawn.
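A toy numerical check of the statistically indistinguishable picture (not the paper's analysis) is shown below: when the truth and M models are exchangeable draws from the same distribution, the multimodel mean has a smaller expected RMSE than a typical individual model, by roughly sqrt((M+1)/(2M)) for independent draws.

```python
# Monte Carlo check: multimodel-mean RMSE vs. single-model RMSE under exchangeability.
import numpy as np

rng = np.random.default_rng(13)
M, n_trials, n_grid = 20, 2000, 500
err_model, err_mean = [], []
for _ in range(n_trials):
    draws = rng.normal(size=(M + 1, n_grid))       # row 0 plays the role of "truth"
    truth, models = draws[0], draws[1:]
    err_model.append(np.sqrt(np.mean((models[0] - truth) ** 2)))
    err_mean.append(np.sqrt(np.mean((models.mean(axis=0) - truth) ** 2)))
print(round(np.mean(err_mean) / np.mean(err_model), 3),
      round(float(np.sqrt((M + 1) / (2 * M))), 3))  # empirical ratio vs. theoretical value
```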
APA, Harvard, Vancouver, ISO, and other styles
44

Aksoy, Altuğ, David C. Dowell, and Chris Snyder. "A Multicase Comparative Assessment of the Ensemble Kalman Filter for Assimilation of Radar Observations. Part II: Short-Range Ensemble Forecasts." Monthly Weather Review 138, no. 4 (April 1, 2010): 1273–92. http://dx.doi.org/10.1175/2009mwr3086.1.

Full text
Abstract:
Abstract The quality of convective-scale ensemble forecasts, initialized from analysis ensembles obtained through the assimilation of radar observations using an ensemble Kalman filter (EnKF), is investigated for cases whose behaviors span supercellular, linear, and multicellular organization. This work is the companion to Part I, which focused on the quality of analyses during the 60-min analysis period. Here, the focus is on 30-min ensemble forecasts initialized at the end of that period. As in Part I, the Weather Research and Forecasting (WRF) model is employed as a simplified cloud model at 2-km horizontal grid spacing. Various observation-space and state-space verification metrics, computed both for ensemble means and individual ensemble members, are employed to assess the quality of ensemble forecasts comparatively across cases. While the cases exhibit noticeable differences in predictability, the forecast skill in each case, as measured by various metrics, decays on a time scale of tens of minutes. The ensemble spread also increases rapidly but significant outlier members or clustering among members are not encountered. Forecast quality is seen to be influenced to varying degrees by the respective initial soundings. While radar data assimilation is able to partially mitigate some of the negative effects in some situations, the supercell case, in particular, remains difficult to predict even after 60 min of data assimilation.
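For readers unfamiliar with the EnKF machinery behind such experiments, a toy perturbed-observation update for a single scalar observation of a two-variable state is sketched below; it is a generic illustration, not the study's radar data assimilation system.

```python
# Toy perturbed-observation EnKF update for one scalar observation.
import numpy as np

rng = np.random.default_rng(9)
n_members = 40
prior = rng.multivariate_normal([1.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], n_members)  # (members, state)

H = np.array([1.0, 0.0])                            # observe the first state variable
obs, obs_var = 1.8, 0.2

X = prior - prior.mean(axis=0)                      # ensemble perturbations
Hx = prior @ H                                      # predicted observations
P_HT = X.T @ (Hx - Hx.mean()) / (n_members - 1)     # state/obs cross-covariance
HPHT = np.var(Hx, ddof=1)
K = P_HT / (HPHT + obs_var)                         # Kalman gain

perturbed_obs = obs + rng.normal(0, np.sqrt(obs_var), n_members)
analysis = prior + np.outer(perturbed_obs - Hx, K)  # update each member
print(prior.mean(axis=0), analysis.mean(axis=0))
```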
APA, Harvard, Vancouver, ISO, and other styles
45

Flora, Montgomery L., Corey K. Potvin, and Louis J. Wicker. "Practical Predictability of Supercells: Exploring Ensemble Forecast Sensitivity to Initial Condition Spread." Monthly Weather Review 146, no. 8 (July 16, 2018): 2361–79. http://dx.doi.org/10.1175/mwr-d-17-0374.1.

Full text
Abstract:
Abstract As convection-allowing ensembles are routinely used to forecast the evolution of severe thunderstorms, developing an understanding of storm-scale predictability is critical. Using a full-physics numerical weather prediction (NWP) framework, the sensitivity of ensemble forecasts of supercells to initial condition (IC) uncertainty is investigated using a perfect model assumption. Three cases are used from the real-time NSSL Experimental Warn-on-Forecast System for Ensembles (NEWS-e) from the 2016 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. The forecast sensitivity to IC uncertainty is assessed by repeating the simulations with the initial ensemble perturbations reduced to 50% and 25% of their original magnitudes. The object-oriented analysis focuses on significant supercell features, including the mid- and low-level mesocyclone, and rainfall. For a comprehensive analysis, supercell location and amplitude predictability of the aforementioned features are evaluated separately. For all examined features and cases, forecast spread is greatly reduced by halving the IC spread. By reducing the IC spread from 50% to 25% of the original magnitude, forecast spread is still substantially reduced in two of the three cases. The practical predictability limit (PPL), or the lead time beyond which the forecast spread exceeds some prechosen threshold, is case and feature dependent. Comparing to past studies reveals that practical predictability of supercells is substantially improved by initializing once storms are well established in the ensemble analysis.
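The perturbation-rescaling experiment described above reduces to a simple operation on the initial ensemble: shrink each member's perturbation about the ensemble mean by a chosen factor. The sketch below applies this to a toy one-dimensional "state" rather than WRF initial conditions.

```python
# Rescale initial-condition perturbations to 100%, 50%, and 25% of their magnitude.
import numpy as np

rng = np.random.default_rng(14)
members = rng.normal(10.0, 1.5, size=(36, 1000))    # toy initial-condition ensemble
mean = members.mean(axis=0)

def rescale(ens, alpha):
    return mean + alpha * (ens - mean)              # shrink perturbations by factor alpha

for alpha in (1.0, 0.5, 0.25):
    print(alpha, round(float(rescale(members, alpha).std(axis=0).mean()), 3))
```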
APA, Harvard, Vancouver, ISO, and other styles
46

Miao, Chiyuan, Qingyun Duan, Qiaohong Sun, and Jianduo Li. "Evaluation and application of Bayesian multi-model estimation in temperature simulations." Progress in Physical Geography: Earth and Environment 37, no. 6 (August 5, 2013): 727–44. http://dx.doi.org/10.1177/0309133313494961.

Full text
Abstract:
Use of multi-model ensembles from global climate models to simulate the current and future climate change has flourished as a research topic during recent decades. This paper assesses the performance of multi-model ensembles in simulating global land temperature from 1960 to 1999, using Nash-Sutcliffe model efficiency and Taylor diagrams. The future trends of temperature for different scales and emission scenarios are projected based on the posterior model probabilities estimated by Bayesian methods. The results show that ensemble prediction can improve the accuracy of simulations of the spatiotemporal distribution of global temperature. The performance of Bayesian model averaging (BMA) at simulating the annual temperature dynamic is significantly better than single climate models and their simple model averaging (SMA). However, BMA simulation can demonstrate the temperature trend on the decadal scale, but its annual assessment of accuracy is relatively weak. The ensemble prediction presents dissimilarly accurate descriptions in different regions, and the best performance appears in Australia. The results also indicate that future temperatures in northern Asia rise with the greatest speed in some scenarios, and Australia is the most sensitive region for the effects of greenhouse gas emissions. In addition to the uncertainty of ensemble prediction, the impacts of climate change on agriculture production and water resources are discussed as an extension of this research.
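The Nash-Sutcliffe efficiency used above has a standard definition, sketched below together with a crude likelihood-based model weighting that stands in for the EM-estimated Bayesian model averaging weights; the data, the three "models", and the weighting scheme are all toy assumptions.

```python
# Nash-Sutcliffe efficiency and a crude BMA-style weighted blend (toy data).
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of obs about its mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(10)
obs = np.sin(np.linspace(0, 6, 120)) + 0.1 * rng.normal(size=120)     # "observed" anomaly
models = [obs + rng.normal(0, s, 120) for s in (0.1, 0.3, 0.6)]       # three toy simulations

scores = np.array([nse(m, obs) for m in models])
lik = np.exp(-0.5 * np.array([np.mean((m - obs) ** 2) for m in models]) / 0.1)
weights = lik / lik.sum()                                             # crude posterior-style weights
blend = sum(w * m for w, m in zip(weights, models))
print(np.round(scores, 3), np.round(weights, 3), round(nse(blend, obs), 3))
```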
APA, Harvard, Vancouver, ISO, and other styles
47

Whitmore, Richard Alan. "The Emerging Ensemble: The Vieux-Colombier and the Group Theatre." Theatre Survey 34, no. 1 (May 1993): 60–70. http://dx.doi.org/10.1017/s0040557400009765.

Full text
Abstract:
The founding of experimental ensembles dominates the history of Western theatre in the first half of the twentieth century. Groups of theatre artists, desperate to revive and resuscitate an art grown stale and lifeless, banded together, seeking common ground in the period's vast swirl of artistic and ideological sensibilities. These groups, from the Moscow Art Theatre to the Berliner Ensemble, produced a broad range of performances, yet all shared a common commitment to change and theatrical growth. Two of these, the Group Theatre and Jacques Copeau's Théâtre du Vieux-Colombier participated in and elevated this world-wide phenomenon. Since they share some common roots, an exploration of the relationship between the Vieux-Colombier and the Group yields some insights into the formation and maintenance of an acting ensemble.
APA, Harvard, Vancouver, ISO, and other styles
48

Brasika, Ida Bagus Mandhara. "Ensemble Model of Precipitation Change Over Indonesia Caused by El Nino Modoki." Journal of Marine Research and Technology 4, no. 1 (February 28, 2021): 72. http://dx.doi.org/10.24843/jmrt.2021.v04.i01.p10.

Full text
Abstract:
The aim of this research is to understand the impact of El Nino Modoki on Indonesian precipitation and how well ensemble models can simulate this change. Ensemble modelling is a recognized way to improve the quality of climate simulations and predictions: each model has its own algorithm, with strengths and weaknesses in different aspects, and an ensemble can improve simulation quality while reducing those weaknesses. However, the best combination of models differs between events and locations. Here, the Squared Error Skill Score (SESS) method is used to assess the quality of each model and to compare the ensemble model with the single models. El Nino Modoki remains a debated phenomenon among scientists, and many of its features are still unexplained, so it is important to establish how it changes precipitation over Indonesia. To quantify the precipitation change, the composite of precipitation in El Nino Modoki years is divided by the composite over all years, and the ensemble model is validated against a satellite-gauge precipitation dataset. El Nino Modoki decreases precipitation in most Indonesian regions. The ensemble, while statistically promising, fails to simulate precipitation in some regions.
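A common squared-error skill-score definition of the form 1 - MSE_model / MSE_reference is sketched below for comparing single models and their ensemble mean against a verifying series; the paper's exact SESS formulation may differ, and all data here are synthetic.

```python
# Squared-error skill score for single models vs. an ensemble mean (toy data).
import numpy as np

rng = np.random.default_rng(11)
verif = 5 + np.sin(np.linspace(0, 12, 240)) + 0.3 * rng.normal(size=240)  # "observed" rainfall index
model_a = verif + rng.normal(0, 0.8, 240)
model_b = verif + rng.normal(0, 1.2, 240)
ensemble = 0.5 * (model_a + model_b)

def skill_score(forecast, verif, reference):
    mse_f = np.mean((forecast - verif) ** 2)
    mse_r = np.mean((reference - verif) ** 2)
    return 1.0 - mse_f / mse_r                      # 1 = perfect, 0 = no better than reference

climatology = np.full_like(verif, verif.mean())
for name, f in [("model_a", model_a), ("model_b", model_b), ("ensemble", ensemble)]:
    print(name, round(skill_score(f, verif, climatology), 3))
```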
APA, Harvard, Vancouver, ISO, and other styles
49

Salmhofer, Manfred. "Functional Integral and Stochastic Representations for Ensembles of Identical Bosons on a Lattice." Communications in Mathematical Physics 385, no. 2 (March 11, 2021): 1163–211. http://dx.doi.org/10.1007/s00220-021-04010-4.

Full text
Abstract:
Abstract Regularized coherent-state functional integrals are derived for ensembles of identical bosons on a lattice, the regularization being a discretization of Euclidean time. Convergence of the time-continuum limit is proven for various discretized actions. The focus is on the integral representation for the partition function and expectation values in the canonical ensemble. The connection to the grand-canonical integral is exhibited and some important differences are discussed. Uniform bounds for covariances are proven, which simplify the analysis of the time-continuum limit and can also be used to analyze the thermodynamic limit. The relation to a stochastic representation by an ensemble of interacting random walks is made explicit, and its modifications in presence of a condensate are discussed.
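For orientation only, a standard textbook form of the time-discretized coherent-state functional integral for the grand-canonical bosonic partition function is recalled below; the paper's regularization, and in particular its canonical-ensemble representation, differ in the details analysed there.

```latex
\[
  Z \;=\; \lim_{n\to\infty} \int \prod_{\tau=1}^{n}\prod_{x\in\Lambda}
  \frac{\mathrm{d}\bar\phi_{x,\tau}\,\mathrm{d}\phi_{x,\tau}}{2\pi i}\,
  \exp\!\Big(-\sum_{\tau=1}^{n}\Big[\sum_{x\in\Lambda}
  \bar\phi_{x,\tau}\big(\phi_{x,\tau}-\phi_{x,\tau-1}\big)
  \;+\;\varepsilon\,H\big(\bar\phi_{\cdot,\tau},\phi_{\cdot,\tau-1}\big)\Big]\Big),
  \qquad \varepsilon=\beta/n,
\]
with the periodicity convention $\phi_{x,0}=\phi_{x,n}$ and the chemical potential absorbed into $H$.
```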
APA, Harvard, Vancouver, ISO, and other styles
50

Caldararu, Octav, Vilhelm Ekberg, Derek T. Logan, Esko Oksanen, and Ulf Ryde. "Exploring ligand dynamics in protein crystal structures with ensemble refinement." Acta Crystallographica Section D Structural Biology 77, no. 8 (July 29, 2021): 1099–115. http://dx.doi.org/10.1107/s2059798321006513.

Full text
Abstract:
Understanding the dynamics of ligands bound to proteins is an important task in medicinal chemistry and drug design. However, the dominant technique for determining protein–ligand structures, X-ray crystallography, does not fully account for dynamics and cannot accurately describe the movements of ligands in protein binding sites. In this article, an alternative method, ensemble refinement, is used on six protein–ligand complexes with the aim of understanding the conformational diversity of ligands in protein crystal structures. The results show that ensemble refinement sometimes indicates that the flexibility of parts of the ligand and some protein side chains is larger than that which can be described by a single conformation and atomic displacement parameters. However, since the electron-density maps are comparable and Rfree values are slightly increased, the original crystal structure is still a better model from a statistical point of view. On the other hand, it is shown that molecular-dynamics simulations and automatic generation of alternative conformations in crystallographic refinement confirm that the flexibility of these groups is larger than is observed in standard refinement. Moreover, the flexible groups in ensemble refinement coincide with groups that give high atomic displacement parameters or non-unity occupancy if optimized in standard refinement. Therefore, the conformational diversity indicated by ensemble refinement seems to be qualitatively correct, indicating that ensemble refinement can be an important complement to standard crystallographic refinement as a tool to discover which parts of crystal structures may show extensive flexibility and therefore are poorly described by a single conformation. However, the diversity of the ensembles is often exaggerated (probably partly owing to the rather poor force field employed) and the ensembles should not be trusted in detail.
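One simple way to quantify the conformational diversity of such refined ensembles is a per-atom root-mean-square fluctuation (RMSF) across the models; the sketch below uses toy coordinates, since parsing and superposing a real multi-model PDB file is beyond its scope.

```python
# Per-atom RMSF across an ensemble of refined models (toy coordinates).
import numpy as np

rng = np.random.default_rng(12)
n_models, n_atoms = 50, 300
coords = rng.normal(size=(1, n_atoms, 3)) + 0.3 * rng.normal(size=(n_models, n_atoms, 3))

mean_coords = coords.mean(axis=0)
rmsf = np.sqrt(((coords - mean_coords) ** 2).sum(axis=2).mean(axis=0))  # per-atom RMSF
flexible = np.argsort(rmsf)[-10:]                                        # most mobile atoms
print(np.round(rmsf[flexible], 2))
```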
APA, Harvard, Vancouver, ISO, and other styles