Academic literature on the topic 'Model type uncertainty'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Model type uncertainty.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Model type uncertainty"

1

Rizal, Nandiko, Dian Purnama Sari, Beny Cahyono, Dedy Dwi Prastyo, Baharuddin Ali, and Erdina Arianti. "Uncertainty Analysis Study on Seakeeping Tests of Benchmark Model." IOP Conference Series: Earth and Environmental Science 1081, no. 1 (2022): 012021. http://dx.doi.org/10.1088/1755-1315/1081/1/012021.

Full text
Abstract:
Experimental uncertainty and measurement, particularly in engineering, have been increasingly emphasized over the last 20 years. Uncertainty analysis of ship model testing in seakeeping experiments is based on the International Organization for Standardization Guide for Uncertainty of Measurements (ISO-GUM) and a report with recommendations published by the ITTC (2008). In this paper, uncertainty analyses of free-running tests were performed for a 1:62 scale benchmark model at the Indonesian Hydrodynamic Laboratory (IHL). The benchmark model represents a full-scale ship of 186 meters. The model is fully appended, with a rudder and propellers, to allow for free-running tests, natural roll period, free roll decay, and swing tests. Basic uncertainty factors (Type A and Type B) are estimated using model tests. Type A uncertainty is the standard deviation of the mean value of a repeated measurement, whereas Type B uncertainty is estimated from manufacturing specifications; the uncertainty components of both types are quantified by standard deviations, the Type B value being approximated by a corresponding variance. This study helps to understand the underlying uncertainty associated with this research, which resulted in a total geometry uncertainty (Lpp, B, T, H, KG, and kyy) of 0.17598% and a total uncertainty of the instrument calibration results (wave height sensor and Qualisys motion tracking) of 0.01161%. The results of this study are expected to minimize the impact of sources of uncertainty and allow the underlying uncertainty to be corrected for in the next seaworthiness test.
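
For readers unfamiliar with the ISO-GUM vocabulary used above, the sketch below shows how Type A and Type B components are conventionally combined into a standard uncertainty. It is a generic illustration with invented readings and tolerances, not the paper's data or procedure.

```python
import math

# Hypothetical repeated measurements of a model dimension (values invented).
readings = [1.8602, 1.8597, 1.8605, 1.8599, 1.8601]  # metres

n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

# Type A: standard deviation of the mean of repeated measurements.
u_a = s / math.sqrt(n)

# Type B: from a manufacturer's spec, e.g. a +/-0.5 mm tolerance treated as
# the half-width of a rectangular distribution (variance a^2 / 3).
a = 0.0005
u_b = a / math.sqrt(3)

# Combined standard uncertainty for uncorrelated components.
u_c = math.sqrt(u_a ** 2 + u_b ** 2)
print(f"mean={mean:.5f} m, u_A={u_a:.2e}, u_B={u_b:.2e}, u_c={u_c:.2e}")
```
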
APA, Harvard, Vancouver, ISO, and other styles
2

Kauppi, Anu, Pekka Kolmonen, Marko Laine, and Johanna Tamminen. "Aerosol-type retrieval and uncertainty quantification from OMI data." Atmospheric Measurement Techniques 10, no. 11 (2017): 4079–98. http://dx.doi.org/10.5194/amt-10-4079-2017.

Full text
Abstract:
We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve for the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on aerosol microphysical model selection and the characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine the AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and an AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of aerosol-type selection problem. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
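
The Bayesian model averaging step described in this abstract has a compact generic form: weight each candidate model by its posterior probability and mix the per-model estimates. A minimal sketch with invented evidences and AOD statistics (not the authors' OMI implementation) follows.

```python
import numpy as np

# Hypothetical marginal likelihoods (evidences) of three candidate aerosol
# microphysical models given the observed reflectances; values invented.
evidence = np.array([2.1e-3, 1.6e-3, 0.2e-3])
prior = np.full(3, 1.0 / 3.0)          # equal prior model probabilities

# Posterior model probabilities: p(M_k | y) is proportional to p(y | M_k) p(M_k).
post = evidence * prior
post /= post.sum()

# Per-model AOD posterior means and variances (invented).
aod_mean = np.array([0.31, 0.28, 0.45])
aod_var = np.array([0.02, 0.03, 0.05]) ** 2

# BMA mixture mean and variance (law of total variance).
bma_mean = np.sum(post * aod_mean)
bma_var = np.sum(post * (aod_var + (aod_mean - bma_mean) ** 2))
print(f"P(M|y)={post.round(3)}, AOD={bma_mean:.3f} +/- {np.sqrt(bma_var):.3f}")
```

Note how the mixture variance is inflated by the spread of the per-model means: that is exactly the model-selection uncertainty entering the total uncertainty budget.
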
APA, Harvard, Vancouver, ISO, and other styles
3

Rauser, Florian, Jochem Marotzke, and Peter Korn. "Ensemble-type numerical uncertainty information from single model integrations." Journal of Computational Physics 292 (July 2015): 30–42. http://dx.doi.org/10.1016/j.jcp.2015.02.043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wei, Yicheng, and Junzo Watada. "Building a Type-2 Fuzzy Qualitative Regression Model." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 4 (2012): 527–32. http://dx.doi.org/10.20965/jaciii.2012.p0527.

Full text
Abstract:
A type-1 fuzzy regression model is constructed with type-1 fuzzy coefficients and deals with real-valued inputs and outputs. From the fuzzy set-theoretical point of view, uncertainty also exists in qualitative data (membership degrees). This paper builds a qualitative regression model that measures such uncertainty by using type-2 fuzzy sets as the model's coefficients. We are thus able to quantitatively describe the relationship between qualitative object variables and the qualitative values of multivariate attributes (membership degrees or type-1 fuzzy sets), which are given by subjective recognition and judgment. We first build a basic qualitative model and then extend it to accept range-valued inputs. Finally, we give a heuristic solution.
APA, Harvard, Vancouver, ISO, and other styles
5

Zakaria, Rozaimi, Abd Fatah Wahab, Isfarita Ismail, and Mohammad Izat Emir Zulkifly. "Complex Uncertainty of Surface Data Modeling via the Type-2 Fuzzy B-Spline Model." Mathematics 9, no. 9 (2021): 1054. http://dx.doi.org/10.3390/math9091054.

Full text
Abstract:
This paper discusses the construction of a type-2 fuzzy B-spline model to model the complex uncertainty of surface data. To construct this model, type-2 fuzzy set theory, which includes type-2 fuzzy number concepts and type-2 fuzzy relations, is used to define the complex uncertainty of surface data through type-2 fuzzy data/control points. These type-2 fuzzy data/control points are blended with the B-spline surface function to produce the proposed model, which can be visualized and analyzed further. Various processes, namely fuzzification, type-reduction and defuzzification, are defined to achieve a crisp type-2 fuzzy B-spline surface representing complex uncertain surface data. The paper ends with a numerical example of terrain modeling, which shows the model's effectiveness in handling complex uncertain data.
APA, Harvard, Vancouver, ISO, and other styles
6

Reeves, Heather Dawn, Kimberly L. Elmore, Alexander Ryzhkov, Terry Schuur, and John Krause. "Sources of Uncertainty in Precipitation-Type Forecasting." Weather and Forecasting 29, no. 4 (2014): 936–53. http://dx.doi.org/10.1175/waf-d-14-00007.1.

Full text
Abstract:
Five implicit precipitation-type algorithms are assessed using observed and model-forecast sounding data in order to measure their accuracy and to gauge the effects of model uncertainty on algorithm performance. When applied to observed soundings, all algorithms provide very reliable guidance on snow and rain (SN and RA). However, their skill for ice pellets and freezing rain (IP and FZRA) is comparatively low. Most misclassifications of IP are as FZRA and vice versa. Deeper investigation reveals that no method used in any of the algorithms to differentiate between IP and FZRA allows for clear discrimination between the two forms. The effects of model uncertainty are also considered. For SN and RA, these effects are minimal and each algorithm performs reliably. Conversely, IP and FZRA are strongly impacted. When the range of uncertainty is fully accounted for, the resulting wet-bulb temperature profiles are nearly indistinguishable, leading to very poor skill for all algorithms. Although currently available data do not allow for a thorough investigation, comparison of the statistics from only those soundings that are associated with long-duration, horizontally uniform regions of FZRA shows there are significant differences between these profiles and those from more transient, highly variable environments. Hence, a five-category (SN, RA, IP, FZRA, and IP–FZRA mix) approach is advocated to differentiate sustained regions of horizontally uniform FZRA (or IP) from more mixed environments.
APA, Harvard, Vancouver, ISO, and other styles
7

Krbez, Joshua M., and Adnan Shaout. "Interval Type-2 Fuzzy Application for Diet Journaling." International Journal of Fuzzy System Applications 8, no. 2 (2019): 34–67. http://dx.doi.org/10.4018/ijfsa.2019040103.

Full text
Abstract:
In this article, an improved system is constructed using interval type-2 fuzzy sets (IT2FS) and a fuzzy logic controller (FLC) with non-singleton inputs. The primary purpose is to better model nutritional input uncertainty, which is propagated through the Type-2 FLC. To this end, methods are proposed to (1) model nutrient uncertainty in food items, (2) extend the nutritional information of a food item using an IT2FS representation for each nutrient, incorporating the uncertainty in the extension process, (3) accumulate uncertainties for IT2FS inputs using fuzzy arithmetic, and (4) build IT2FS antecedents for FLC rules based on dietary reference intakes (DRIs). These methods are then used to implement a web application for diet journaling that includes a client-side non-singleton Interval Type-2 FLC. The resulting application is compared with previous work and shown to be more suitable. This is the first known work on diet journaling that attempts to model uncertainty for all anticipated measurement error.
APA, Harvard, Vancouver, ISO, and other styles
8

Yu, Tianshu, Ting-En Lin, Yuchuan Wu, Min Yang, Fei Huang, and Yongbin Li. "Diverse AI Feedback For Large Language Model Alignment." Transactions of the Association for Computational Linguistics 13 (2025): 392–407. https://doi.org/10.1162/tacl_a_00746.

Full text
Abstract:
Recent advances in large language models (LLMs) focus on aligning models with human values to minimize harmful content. However, existing methods often rely on a single type of feedback, such as preferences, annotated labels, or critiques, which can lead to overfitting and suboptimal performance. In this paper, we propose Diverse AI Feedback (DAIF), a novel approach that integrates three types of feedback (critique, refinement, and preference) tailored to tasks of varying uncertainty levels. Through an analysis of information gain, we show that critique feedback is most effective for low-uncertainty tasks, refinement feedback for medium-uncertainty tasks, and preference feedback for high-uncertainty tasks. Training with this diversified feedback reduces overfitting and improves alignment. Experimental results across three tasks (question answering, dialog generation, and text summarization) demonstrate that DAIF outperforms traditional methods relying on a single feedback type.
APA, Harvard, Vancouver, ISO, and other styles
9

Lakshmi, Shrinivasan, and J. R. Raol. "Type-2 Fuzzy Logic in Pair Formation." Indonesian Journal of Electrical Engineering and Computer Science 10, no. 1 (2018): 94–99. https://doi.org/10.11591/ijeecs.v10.i1.pp94-99.

Full text
Abstract:
This paper gives an overview of type-2 fuzzy sets (T2FSs) and a type-2 fuzzy logic system (T2FLS), considering an aviation scenario. The existing type-1 fuzzy system has limited capability to handle uncertainty directly. In order to overcome the limitations of the type-1 fuzzy logic system (T1FLS), a higher-order fuzzy set, the T2FS, is introduced. Here we discuss type-2 fuzzy sets, type-2 membership functions, the inference engine, type reduction and defuzzification. Pair formation, the aviation scenario considered here, is critical in combat situations. Crisp data are taken from aircraft sensors and, using data fusion techniques, a decision is made as to whether two aircraft can achieve pair formation or not. Experiments are evaluated and performance is compared with ground truth and the existing T1FLS; the T2FLS proves better in terms of decision making when a certain amount of uncertainty is present.
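
To make the type-2 terminology concrete, here is a minimal sketch of an interval type-2 membership function, built in the common way from a Gaussian whose mean is uncertain; the parameters are arbitrary, not taken from the paper.

```python
import math

def it2_gaussian(x, m1, m2, sigma):
    """Interval type-2 MF: a Gaussian whose mean is uncertain in [m1, m2].
    Returns the (lower, upper) membership bounds at x."""
    def gauss(x, m):
        return math.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper MF: plateau of 1 between the two extreme means.
    if m1 <= x <= m2:
        upper = 1.0
    else:
        upper = gauss(x, m1) if x < m1 else gauss(x, m2)
    # Lower MF: the smaller of the two extreme Gaussians.
    lower = min(gauss(x, m1), gauss(x, m2))
    return lower, upper

for x in (3.0, 5.0, 7.0):
    lo, hi = it2_gaussian(x, m1=4.5, m2=5.5, sigma=1.0)
    print(f"x={x}: membership in [{lo:.3f}, {hi:.3f}]")
```

The band between the lower and upper curves is the footprint of uncertainty, which is what distinguishes a type-2 set from a type-1 set and what type reduction later collapses back to an interval.
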
APA, Harvard, Vancouver, ISO, and other styles
10

Sedigh, Ashkan, Mohammad-R. Akbarzadeh-T., and Ryan E. Tomlinson. "Optimizing Mineralization of Bioprinted Bone Utilizing Type-2 Fuzzy Systems." Biophysica 2, no. 4 (2022): 400–411. http://dx.doi.org/10.3390/biophysica2040035.

Full text
Abstract:
Bioprinting is an emerging tissue engineering method used to generate cell-laden scaffolds with high spatial resolution. Bioprinting parameters, such as pressure, nozzle size, and speed, highly influence the quality of the bioprinted construct. Moreover, cell suspension density and other critical biological parameters directly impact the biological function. Therefore, an approximation model that can be used to find the optimal bioprinting parameter settings for bioprinted constructs is highly desirable. Here, we propose a type-2 fuzzy model to handle the uncertainty and imprecision in the approximation model. Specifically, we focus on the biological parameters, such as the culture period, that can be used to maximize the output value (mineralization volume 21.8 mm3 with the same culture period of 21 days). We have also implemented a type-1 fuzzy model and compared the results with the proposed type-2 fuzzy model using two levels of uncertainty. We hypothesize that the type-2 fuzzy model may be preferred in biological systems due to the inherent vagueness and imprecision of the input data. Our numerical results confirm this hypothesis. More specifically, the type-2 fuzzy model with a high uncertainty boundary (30%) is superior to type-1 and type-2 fuzzy systems with low uncertainty boundaries in the overall output approximation error for bone bioprinting inputs.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Model type uncertainty"

1

Laguna, Sanz Alejandro José. "Uncertainty in Postprandial Model Identification in type 1 Diabetes." Doctoral thesis, Editorial Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/37191.

Full text
Abstract:
Postprandial characterization of patients with type 1 diabetes is crucial for the development of an automatic glucose control system (artificial pancreas). Uncertainty sources within the patient, and variability of the glucose response between patients, are a challenge for individual patient model identification, leading to poor predictability with current methods. Also, continuous glucose monitors, which have been the springboard for research towards a domiciliary artificial pancreas, still introduce large measurement errors, greatly complicating the characterization of the patient. In this thesis, individual model identification characterizing intra-patient variability from domiciliary data is addressed. First, literature models are reviewed. Next, we investigate the collection of data and how it can be improved using optimal experiment design. This improved data gathering is later applied to an ambulatory clinical protocol implemented at the Hospital Clínic Universitari de València, and data are collected from twelve patients following a set of mixed-meal studies. With regard to the uncertainty of the glucose monitors, two continuous glucose monitoring devices are analyzed and statistically modeled. The models of these devices are used for in silico simulations and the analysis of identification methods. Identification using interval models is then performed, showing an inherent capability for characterization of both the patient and the related uncertainty. First, an in silico study is conducted in order to assess the feasibility of the identifications. Then, model identification is addressed from real patient data, increasing the complexity of the problem. In conclusion, a new method for interval model identification is developed and successfully validated on clinical data.
APA, Harvard, Vancouver, ISO, and other styles
2

Madi, Elissa Nadia. "An improved uncertainty in multi-criteria decision making model based on type-2 fuzzy TOPSIS." Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/55394/.

Full text
Abstract:
This thesis presents a detailed study of one of the multiple-criteria decision-making (MCDM) models, namely the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), based on fuzzy set theory (FST), focusing on improved modelling of uncertain information provided by a group of decision makers (DMs). An exploration of the issues and limitations in current models of standard TOPSIS and fuzzy TOPSIS was made. Despite the many variations of the type-1 fuzzy TOPSIS (T1-TOPSIS) model, none of the existing studies explains the key stages of standard (non-fuzzy) TOPSIS and T1-TOPSIS as a step-wise procedure. A detailed study was therefore conducted to identify the limitations of standard TOPSIS and T1-TOPSIS, and on this basis a novel systematic, step-wise comparison between the two models is given. This study identifies and discusses limitations, issues and challenges which have not been investigated sufficiently in the context of the T1-TOPSIS model. Building on this exploration, multiple variants of extended fuzzy TOPSIS models for solving MCDM problems were investigated, with the primary aim of detailing the steps involved. One challenge that arises is that it is not straightforward to differentiate between the multiple variants of TOPSIS existing today. A systematic comparison was made between the standard T1-TOPSIS model and a recently extended model to show the differences between the two and to provide context for their respective strengths and limitations, both in complexity of application and in expressiveness of results. Based on the resulting comparison, the differences in the steps implemented by these two fuzzy TOPSIS models are highlighted throughout a worked example; this task also highlights the ability of both models to handle different levels of uncertainty. Following this exploration and comparative study, a novel extension of the type-2 fuzzy TOPSIS model is proposed which provides an interval-valued output to reflect the uncertainties and to model subjective information. The proposed model uniquely captures input uncertainty (i.e., decision makers' preferences) in the decision-making outputs and provides a direct mapping of uncertainty from inputs to outputs. By keeping the output values in interval form, the proposed model reduces the loss of information and maximises the potential benefit of using interval type-2 fuzzy sets (IT2 FSs). To demonstrate MCDM problems under varying levels of uncertainty, a novel experimental method is proposed whose primary aim is to explore the use of IT2 FSs in handling uncertainty within the TOPSIS model; the experiment shows how variation of the uncertainty levels in the input affects the final outputs. The proposed model was evaluated by applying it to two different case studies, generating, for the first time, an interval-valued output. As intervals can, for example, exhibit partial overlap, a novel extended measure is proposed to compare the resulting interval-valued outputs across various cases (i.e., overlapping and non-overlapping) while accounting for uncertainty.
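
For context on the base algorithm that the fuzzy variants extend, a minimal sketch of standard (crisp) TOPSIS follows; the decision matrix and weights are invented, and all criteria are assumed to be benefit-type.

```python
import numpy as np

# Rows: alternatives, columns: criteria (all benefit-type here; invented data).
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])            # criteria weights, sum to 1

# 1. Vector-normalize each column, then apply the weights.
V = w * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal solutions (max/min per benefit criterion).
v_pos, v_neg = V.max(axis=0), V.min(axis=0)

# 3. Euclidean distances of each alternative to the ideal and anti-ideal.
d_pos = np.linalg.norm(V - v_pos, axis=1)
d_neg = np.linalg.norm(V - v_neg, axis=1)

# 4. Closeness coefficient in [0, 1]: 1 = ideal, 0 = anti-ideal.
cc = d_neg / (d_pos + d_neg)
order = (-cc).argsort() + 1              # alternatives, best to worst (1-based)
print("closeness:", cc.round(3), "order:", order)
```

Fuzzy TOPSIS variants replace the crisp matrix entries with fuzzy numbers and adapt the normalization, distance, and ranking steps accordingly.
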
APA, Harvard, Vancouver, ISO, and other styles
3

Rashidi Mehrabadi, Niloofar. "Power Electronics Design Methodologies with Parametric and Model-Form Uncertainty Quantification." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82934.

Full text
Abstract:
Modeling and simulation have become fully ingrained into the set of design and development tools that are broadly used in the field of power electronics. Stated simply, they represent the fastest and safest way to study a circuit or system, thus aiding in the research, design, diagnosis, and debugging phases of power converter development. Advances in computing technologies have also enabled reliability and production yield analyses to ensure that system performance can meet given requirements despite the presence of inevitable manufacturing variability and variations in the operating conditions. However, the trustworthiness of all model-based design techniques depends entirely on the accuracy of the simulation models used, which, thus far, has not been fully considered. Prior to this research, heuristic safety factors were used to compensate for the deviation of real system performance from the predictions made using modeling and simulation, an approach that invariably resulted in a more conservative design process. In this research, a modeling and design approach with parametric and model-form uncertainty quantification is formulated to bridge the accuracy and reliance gaps that have hindered the full exploitation of model-based design techniques. The few design approaches previously developed to account for variability in the design process have not proven applicable to complex systems; this research, however, demonstrates that the proposed modeling approach can handle complex power converters and systems. A systematic study for developing a simplified test bed for uncertainty quantification analysis is introduced accordingly. For illustrative purposes, the proposed modeling approach is applied to the switching model of a modular multilevel converter to improve the existing modeling practice and validate the model used in the design of this large-scale power converter. The proposed modeling and design methodology is also extended to design optimization, where a robust multi-objective design and optimization approach with parametric and model-form uncertainty quantification is proposed. A sensitivity index is defined as a quantitative measure of system design robustness with regard to manufacturing variability and modeling inaccuracies in the design of systems with multiple performance functions. The optimum design solution is realized by exploring the Pareto front of the enhanced performance space, where the model-form error associated with each design is used to modify the estimated performance measures. The parametric sensitivity of each design point is also considered to discern between cases and help identify the most parametrically robust of the Pareto-optimal design solutions. To demonstrate the benefits of incorporating uncertainty quantification analysis into design optimization from a more practical standpoint, a Vienna-type rectifier is used as a case study to compare the theoretical analysis with a comprehensive experimental validation. This research shows that the model-form error and sensitivity of each design point can potentially change the performance space and the resultant Pareto front. As a result, ignoring these main sources of uncertainty in the design will result in incorrect decision-making and the choice of a design that is not an optimum design solution in practice.
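
As a generic illustration of the parametric side of uncertainty quantification in power electronics (not the dissertation's methodology), the sketch below propagates component tolerances through a textbook buck-converter ripple formula by Monte Carlo sampling; all component values and tolerances are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Manufacturing tolerances modelled as uniform distributions (invented values).
L = rng.uniform(47e-6 * 0.9, 47e-6 * 1.1, N)     # +/-10% inductor
C = rng.uniform(100e-6 * 0.8, 100e-6 * 1.2, N)   # +/-20% capacitor

Vin, Vout, fsw = 12.0, 5.0, 200e3
D = Vout / Vin
# Peak-to-peak output ripple of an ideal buck converter (textbook formula):
# dV = Vout * (1 - D) / (8 * L * C * fsw^2)
ripple = Vout * (1 - D) / (8 * L * C * fsw ** 2)

print(f"ripple mean = {ripple.mean() * 1e3:.2f} mV, "
      f"99th percentile = {np.percentile(ripple, 99) * 1e3:.2f} mV")
```

The spread between the mean and a high percentile is the kind of quantity a robustness-oriented design process would track instead of a heuristic safety factor.
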
APA, Harvard, Vancouver, ISO, and other styles
4

Omlin, Martin. "Uncertainty analysis of model predictions for environmental systems : concepts and application to lake modelling /." [S.l.] : [s.n.], 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13243.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Yanfei. "Fuzzy methods for analysis of microarrays and networks." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/48175/1/Yanfei_Wang_Thesis.pdf.

Full text
Abstract:
Bioinformatics involves analyses of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools to deal with this type of information and hence has played an important role in analyses of biological data. In this thesis, we aim to develop new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine FCM with empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, preferably to remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, and silhouette values were used for assessment of the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have a tighter association with their clusters. Furthermore, we found that the estimation of the fuzzy parameter m, which is a difficult step, can be avoided to some extent by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data generated under different conditions, e.g., patients and normal people. We developed a type-2 fuzzy membership (FM) function for the identification of disease-associated genes. This approach is applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked diabetes genes identified by the type-2 FM test, seven have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature, and an additional gene is newly identified. Among the ten best-ranked genes identified in the lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial to understanding the principles of cellular organisation and to predicting protein functions. In this part, we propose a novel method which combines fuzzy clustering and interaction probability to identify overlapping and non-overlapping community structures in PPI networks and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method; for the same data, we detected more protein complexes. We also applied our method to two social networks. The results show that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
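
The fuzzy C-means updates underlying the FCM-EMD method are standard; a minimal sketch of plain FCM (without the EMD denoising step, and on synthetic 2-D data rather than microarrays) follows.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means: returns (centres, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        # Centre update: fuzzy-weighted mean of the data.
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared distances from every point to every centre.
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d2 ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

# Synthetic data with two shifted blocks standing in for expression profiles.
X = np.random.default_rng(1).normal(size=(300, 2))
X[100:200] += 4.0
centres, U = fcm(X, c=2)
print("centres:\n", centres.round(2))
```
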
APA, Harvard, Vancouver, ISO, and other styles
6

Park, Inseok. "Quantification of Multiple Types of Uncertainty in Physics-Based Simulation." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1348702461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sánchez, Sánchez Almudena. "Modelling the evolution dynamics of the academic performance in high school in Spain. Probabilistic predictions of future trends and their economical consequences." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/32280.

Full text
Abstract:
In this dissertation, we use epidemiological-mathematical techniques to model academic performance in Spain (paying special attention to academic underachievement) in order to better understand the mechanisms behind this important issue, as well as to predict how academic results will evolve in the Spanish Bachillerato over the next few years. The Spanish Bachillerato educational level comprises the last courses before entry to university or the labour market and corresponds to students of 16-18 years of age. This educational level is a milestone in the career training of students because it represents a period in which important decisions about their academic and professional future are made. First, in Chapter 2, we present a deterministic model in which academic performance is analyzed under the assumption that the negative attitude of Bachillerato students may be due to their autonomous behavior and the influence of classmates with bad academic results. Then, in Chapter 3, the model is improved based on the idea that not only bad academic habits are socially transmitted but also good study habits. In addition, we decompose the transmission of academic habits into good and bad habits in order to analyze in more detail which groups of students are more susceptible to being influenced by classmates with good or bad academic habits. The quantification of abandonment rates is also a new issue dealt with in this model. The adopted approach allows us to provide both point and confidence-interval predictions of the evolution of academic performance (including abandonment rates) in the Spanish Bachillerato over the next few years. It also allows us to model academic performance in academic levels other than Bachillerato and beyond the Spanish academic system; this issue is assessed in Chapter 4, where the model is satisfactorily applied to the current academic system of the German region of North Rhine-Westphalia. To conclude this dissertation, we provide an estimation of the cost related to Spanish academic underachievement based on our predictions. This estimation represents the investment in the Spanish Bachillerato by the Spanish Government and families over the next few years, paying special attention to the groups of students who do not pass and who abandon their studies during the corresponding academic years.
APA, Harvard, Vancouver, ISO, and other styles
8

Antão, Rómulo José Magalhães Martins. "Type-2 fuzzy logic: uncertain systems' modeling and control." Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/18041.

Full text
Abstract:
The development of an autonomous system capable of matching human knowledge and learning capabilities, embedded in a compact yet transparent way, has been one of the most sought-after milestones of Artificial Intelligence since the invention of the first mechanical general-purpose computers. Such an accomplishment is yet to come but, in its pursuit, important contributions to the state of the art of current technology have been made. Fuzzy Logic is one of them, supporting some of the most used frameworks for embedding human-like knowledge in computational systems. The theory of Fuzzy Logic overcame some of the difficulties that the inherent uncertainty in information representations poses to the development of computational systems. However, it does present some limitations, so, aiming to further extend its capabilities, several improvements over its original formalization have been proposed over the years, such as Type-2 Fuzzy Logic, one of its most recent advances. The additional degrees of freedom of Type-2 Fuzzy Logic are showing great potential to supplant its original counterpart, especially in complex nonlinear modeling tasks. One of its main outcomes is the capability of improving a developed model's robustness without necessarily increasing its dimensionality compared to a Type-1 Fuzzy Model counterpart. Such a feature is particularly advantageous if one considers these models as a support for developing control systems capable of maintaining a process's optimal performance over changing operating conditions. However, state-of-the-art model-based control theory does not seem to be taking full advantage of the improvements achieved with the development of Type-2 Fuzzy Logic based models. Therefore, this thesis proposes to address this problem by developing a Model Predictive Control system supported by Interval Type-2 Takagi-Sugeno Fuzzy Models. To accomplish this goal, four main research directions are covered in this work. Firstly, a simpler method for training a Type-2 Takagi-Sugeno Fuzzy Model is proposed, focused on two main paradigms: maintaining a meaningful interpretation of the uncertainty intervals embedded over an estimated Type-1 Fuzzy Model, and ensuring the validity of the several locally linear models that constitute the Takagi-Sugeno structure in order to make them suitable for model-based control approaches. Based on the developed model, a multi-step-ahead estimation of the process behavior is extrapolated. However, as Takagi-Sugeno Fuzzy Models establish a trade-off between accuracy and computational complexity when used as a nonlinear process approximation, it is proposed to apply the principles of Type-2 Fuzzy Logic to reduce the influence of modeling uncertainties on the obtained estimations by adjusting the uncertainty intervals of the model parameters. Supported by the developed Type-2 Takagi-Sugeno Fuzzy Model, a locally linear approximation at each current operating point is used to obtain the optimal control law over a prediction horizon according to the principles of Generalized Predictive Control, one of the Model Predictive Control algorithms most used in industry. The improvements in closed-loop tracking performance and robustness to unmodeled operating conditions are then assessed against Generalized Predictive Control implementations based on simpler modeling approaches. Ultimately, the proposed control system is implemented on a general-purpose System-on-a-Chip based on an ARM Cortex-M4 core. A Processor-In-the-Loop testing framework, developed to support the implementation of control loops in embedded systems, is used to evaluate the algorithm's turnaround time when executed on such a computationally constrained platform, assessing its possible limitations before deployment in real application scenarios. The applicability of the new methods introduced in this thesis is illustrated in two simulated processes commonly used in nonlinear control benchmarking: temperature control of a fermentation reactor and liquid-level control of a coupled-tanks system. It is shown that the developed control system achieves improved closed-loop performance on the above-mentioned processes, particularly in cases of quick changes in the operating regime and in the presence of unmeasured external disturbances.
APA, Harvard, Vancouver, ISO, and other styles
9

Manceur, Malik. "Commande robuste des systèmes non linéaires complexes." Thesis, Reims, 2012. http://www.theses.fr/2012REIMS003/document.

Full text
Abstract:
This work deals with a fuzzy tracking control design for uncertain nonlinear dynamic systems with external disturbances, using a TS (Takagi-Sugeno) fuzzy model description. The control is based on the Super-Twisting algorithm, a second-order sliding mode control technique. Moreover, two adaptive type-2 fuzzy systems are introduced to generate the two Super-Twisting signals in order to avoid both chattering and the constraint of knowing upper bounds on the disturbances and uncertainties. These adaptive type-2 fuzzy systems have a single input, the sliding surface, and a single output, the optimal values of the control gains, which are hard to compute with the original algorithm. Simulation results are obtained in order to compare the performance of the proposed method with that given by Levant. Then, we introduce the integral sliding mode concept to impose in advance the convergence time and the arrival on the sliding surface. The proposed approaches are generalized to the case of multivariable systems. Several results, in simulation and in real time using a benchmark, are obtained to validate and confirm the performance of our contributions.
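
The Super-Twisting law mentioned in this abstract has a compact standard form. The sketch below applies it, with fixed hand-picked gains rather than the thesis's adaptive fuzzy gains, to a toy first-order system with a bounded matched disturbance; all numbers are invented.

```python
import math

# Super-Twisting second-order sliding mode on s = x - x_ref for the toy plant
# x_dot = u + d(t):  u = -k1*sqrt(|s|)*sign(s) + v,  v_dot = -k2*sign(s).
k1, k2 = 2.0, 1.5        # hand-picked gains for this example
dt, T = 1e-3, 5.0

x, x_ref, v = 2.0, 0.0, 0.0
for i in range(int(T / dt)):
    t = i * dt
    s = x - x_ref
    sgn = math.copysign(1.0, s) if s != 0 else 0.0
    u = -k1 * math.sqrt(abs(s)) * sgn + v    # continuous part of the control
    v += -k2 * sgn * dt                      # integral of the switching part
    d = 0.2 * math.sin(t)                    # bounded matched disturbance
    x += (u + d) * dt                        # Euler step of the plant
print(f"final |s| = {abs(x - x_ref):.2e}")   # should be near zero
```

The point of the fuzzy-adaptive extension described above is precisely to avoid choosing k1 and k2 from (often unknown) disturbance bounds, as is done by hand here.
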
APA, Harvard, Vancouver, ISO, and other styles
10

Murphy, David. "Predicting Effects of Artificial Recharge using Groundwater Flow and Transport Models with First Order Uncertainty Analysis." Thesis, The University of Arizona, 1997. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_etd_hy0122_sip1_w.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Model type uncertainty"

1

Mishchenko, Aleksandr, and Elena Miheeva. Methods of assessment of efficiency of management of production and financial activity of the enterprise. INFRA-M Academic Publishing LLC., 2019. http://dx.doi.org/10.12737/monography_5d1ae60d82d6d9.87533425.

Full text
Abstract:
The proposed book describes static and dynamic models for optimizing the production and financial activities of an enterprise, both under deterministic source data and taking uncertainty and risk into account. In the latter case, when choosing a management decision, the analysis considers not only the amount of expected profit but also various types of risk, as well as an indicator such as the stability of the selected variant of production and economic activity to changes in the market environment.
APA, Harvard, Vancouver, ISO, and other styles
2

Muhaev, Rashid. State and municipal administration. INFRA-M Academic Publishing LLC., 2024. http://dx.doi.org/10.12737/2125206.

Full text
Abstract:
What kind of public administration system should be in place in order to be effective in conditions of uncertainty? How should the mechanism of public administration function in order to respond promptly to the growing variety of requests and expectations of the population? The textbook answers these and other questions. It offers a discursive analysis of the current problems of the history, theory and practice of modern public and municipal administration, based on a generalization of world and domestic experience in the functioning of public administration systems. Within the framework of the communicative paradigm, public administration is interpreted as a mechanism for coordinating group and generally significant interests through the distribution of the 'public good' by the state apparatus in the form of making and implementing political and administrative decisions. The book reveals the patterns of evolution of systems and models of public and municipal administration in different countries, and the reasons for the transformation of their technologies and styles of governing influence, depending on the maturity of society, the legal system, and the type of culture. A novelty of the textbook is its comparative analysis of the threats, challenges and responses that define modern transformations of the administrative sphere and change management technologies in the field of public administration, as well as markers for quantifying the effectiveness of public administration systems in the information society. The textbook meets the requirements of the latest generation of federal state educational standards of higher education. It is intended for students of higher educational institutions studying in the fields of 38.03.04 'State and Municipal Management', 41.03.04 'Political Science', and 38.03.02 'Management'.
APA, Harvard, Vancouver, ISO, and other styles
3

Ślusarski, Marek. Metody i modele oceny jakości danych przestrzennych. Publishing House of the University of Agriculture in Krakow, 2017. http://dx.doi.org/10.15576/978-83-66602-30-4.

Full text
Abstract:
The quality of data collected in official spatial databases is crucial for making strategic decisions as well as for the implementation of planning and design works. Awareness of the quality level of these data is also important for individual users of official spatial data. The author presents methods and models for describing and evaluating the quality of spatial data collected in public registers. Data describing space in the highest degree of detail, collected in three databases, were analyzed: the land and buildings registry (EGiB), the geodetic registry of the land infrastructure network (GESUT) and the database of topographic objects (BDOT500). The research concerned selected aspects of spatial data quality: assessment of the accuracy of data collected in official spatial databases; determination of the uncertainty of the area of registry parcels; analysis of the risk of damage to the underground infrastructure network due to the quality of spatial data; construction of a quality model for data collected in official databases; and visualization of the phenomenon of uncertainty in spatial data.

The evaluation of the accuracy of data collected in official, large-scale spatial databases was based on a representative sample of data. The test sample was a set of coordinate deviations with three variables, dX, dY and Dl: the deviations from the X and Y coordinates and the length of the offset vector of a test-sample point in relation to its position regarded as faultless. The compatibility of the empirical accuracy distributions with models (theoretical distributions of random variables) was investigated, and the accuracy of the spatial data was assessed by means of methods resistant to outliers. In determining the accuracy of spatial data collected in public registers, the author's own solution, a resistant method of relative frequency, was used: weight functions were proposed which modify, to varying degrees, the lengths of the offset vectors Dl.

Regarding the uncertainty of the estimated area of registry parcels, the impact of errors in geodetic network points (reference points and points of higher-class networks) was determined, together with the effect of correlation between the coordinates of the same point on the accuracy of the determined plot area. The scope of correction of plot areas in the EGiB database, calculated on the basis of re-measurements performed using techniques of equivalent accuracy, was also determined.

The analysis of the risk of damage to the underground infrastructure network due to low-quality spatial data is another research topic presented in the paper. Three main factors influencing the value of this risk were identified: incompleteness of spatial data sets and insufficient accuracy in determining the horizontal and vertical position of underground infrastructure. A method for quantitative and qualitative estimation of project risk was developed, and the author's own risk estimation technique, based on the idea of fuzzy logic, was proposed. Maps (2D and 3D) of the risk of damage to the underground infrastructure network were developed as large-scale thematic maps presenting the design risk in qualitative and quantitative form.

The data quality model is a set of rules used to describe the quality of data sets. The proposed model defines a standardized approach for assessing and reporting the quality of the EGiB, GESUT and BDOT500 spatial databases. Quantitative and qualitative rules (automatic, office and field) for controlling data sets were defined, and the minimum sample size and the number of admissible nonconformities in random samples were determined. The data quality elements were described using the following descriptors: range, measure, result, and type and unit of value. Data quality studies were performed according to user needs, and the values of impact weights were determined by the analytic hierarchy process (AHP) method. The harmonization of the conceptual models of the EGiB, GESUT and BDOT500 databases with the BDOT10k database was also analysed; it was found that the downloading and supplying of information in the BDOT10k creation and update processes from the analyzed registers is limited.

An effective approach to providing users of spatial data sets with information concerning data uncertainty is the use of cartographic visualization techniques. Based on the author's own experience and research on the quality of official spatial databases, a set of methods for visualizing the uncertainty of the EGiB, GESUT and BDOT500 databases was defined. This set includes visualization techniques designed to present three types of uncertainty: location, attribute values and time. Uncertainty of position was defined (for surface, line, and point objects) using several (three to five) visual variables; uncertainty of attribute values and of time, describing, for example, the completeness or timeliness of sets, is presented by means of three graphical variables. The research problems presented in the paper are of both cognitive and applied importance: they indicate the possibility of effectively evaluating the quality of spatial data collected in public registers and may be an important element of an expert system.
APA, Harvard, Vancouver, ISO, and other styles
4

Popova, Elmira, David Morton, Paul Damien, and Tim Hanson. Paternity testing allowing for uncertain mutation rates. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.8.

Full text
Abstract:
This article discusses the use of probabilistic reasoning to analyse a disputed paternity case, where the DNA genotypes are compatible on all markers but one, allowing for the possibility of mutation, when the mutation rate is itself uncertain. It first describes the construction and Bayesian analysis of a suitable model for paternity testing, taking into account the potentially misleading effect of genetic mutation and allowing for mutation rate uncertainty, before introducing the simplest type of disputed paternity case. It then considers a specific disputed paternity case, in which an apparent exclusion at a single marker could indicate either non-paternity or mutation, as well as the features of the mutation process that need to be accounted for in the analysis. The results of a simple analysis of the specific disputed paternity case are examined and the analysis is set in the broader context of DNA profiling and forensic genetics.
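
A minimal sketch of the kind of calculation involved, much simplified relative to the chapter's full Bayesian model: per-marker likelihood ratios are multiplied into posterior odds, and the apparently excluding marker is given a likelihood ratio driven by a mutation rate mu that is varied to reflect its uncertainty. All numbers are invented.

```python
# Posterior probability of paternity from per-marker likelihood ratios (PIs).
def posterior_paternity(lr_markers, prior=0.5):
    odds = prior / (1.0 - prior)       # prior odds of paternity
    for lr in lr_markers:
        odds *= lr                     # Bayes: posterior odds = prior odds * LR
    return odds / (1.0 + odds)

compatible = [2.4, 3.1, 1.8, 2.9, 2.2]   # invented PIs for the matching markers

# Crude LR for the incompatible marker: under paternity the observation needs
# a mutation (probability ~ mu); under non-paternity it has random-match
# probability p. Real models refine this considerably.
p = 0.1
for mu in (1e-4, 1e-3, 1e-2):            # uncertainty in the mutation rate
    lr_exclusion = mu / p
    post = posterior_paternity(compatible + [lr_exclusion])
    print(f"mu={mu:.0e}: P(paternity | data) = {post:.4f}")
```

Running the calculation over a range of mu values makes the chapter's central point visible: the conclusion can hinge on a mutation rate that is itself uncertain.
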
APA, Harvard, Vancouver, ISO, and other styles
5

Olsen, Jan Abel. Uncertainty and health insurance. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198794837.003.0010.

Full text
Abstract:
This chapter seeks to explain why most people prefer to have a health insurance plan. Two types of uncertainty give rise to the demand for financial protection: people do not know if they will ever come to need healthcare, and they do not know the full financial implications of illness. Health insurance would take away—or at least reduce—such financial uncertainties associated with future illnesses. A model is presented to show the so-called welfare gain from health insurance. This is followed by an investigation into the potential efficiency losses of health insurance, due to excess demand for services. In the last section, a different efficiency problem is discussed: when people have an incentive to signal ‘false risks’, this can lead to there being no market for insurance contracts which reflect ‘true risks’.
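The welfare-gain argument can be stated compactly in standard expected-utility notation (generic textbook symbols, not necessarily the book's): with initial wealth W, a loss L occurring with probability p, and a concave utility function u,

```latex
\[
EU_{\text{uninsured}} = p\,u(W-L) + (1-p)\,u(W),
\qquad
EU_{\text{insured}} = u(W - pL).
\]
```

Because u is concave (risk aversion), Jensen's inequality gives u(W - pL) > p u(W-L) + (1-p) u(W), and the gap between the two expected utilities is the welfare gain from actuarially fair full insurance.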
APA, Harvard, Vancouver, ISO, and other styles
6

Busuioc, Aristita, and Alexandru Dumitrescu. Empirical-Statistical Downscaling: Nonlinear Statistical Downscaling. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228620.013.770.

Full text
Abstract:
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Climate Science. The concept of statistical downscaling, or empirical-statistical downscaling, became a distinct and important scientific approach in climate science in recent decades, as the climate change issue and the assessment of climate change impacts on various social and natural systems became international challenges. Global climate models are the best tools for estimating future climate conditions. Even if state-of-the-art global climate models can be improved in terms of spatial resolution and performance in simulating climate characteristics, they are still skillful only in reproducing large-scale features of climate variability, such as global mean temperature or various circulation patterns (e.g., the North Atlantic Oscillation). These models are not, however, able to provide reliable information on local climate characteristics (mean temperature, total precipitation), especially on extreme weather and climate events. The main reason for this failure is the influence of local geographical features on the local climate, as well as other factors related to the surrounding large-scale conditions, whose influence cannot be correctly taken into account by current dynamical global models.

Impact models, such as hydrological and crop models, need high-resolution information on various climate parameters at the scale of a river basin or a farm, scales that are not available from the usual global climate models. Downscaling techniques produce regional climate information on a finer scale from global climate change scenarios, based on the assumption that there is a systematic link between the large-scale and local climate. Two types of downscaling approach are known: (a) dynamical downscaling, based on regional climate models nested in a global climate model; and (b) statistical downscaling, based on developing statistical relationships between large-scale atmospheric variables (predictors), available from global climate models, and observed local-scale variables of interest (predictands).

Various types of empirical-statistical downscaling approaches can be placed approximately in linear and nonlinear groupings. The empirical-statistical downscaling techniques discussed here focus on details of the nonlinear models (their validation, strengths, and weaknesses) in comparison to linear models or mixed models combining linear and nonlinear approaches. Stochastic models can be applied to daily and sub-daily precipitation in Romania, with a comparison to dynamical downscaling. Conditional stochastic models are generally specific to daily or sub-daily precipitation as the predictand.

A complex validation of the nonlinear statistical downscaling models, the selection of large-scale predictors, the models' ability to reproduce historical trends and extreme events, and the uncertainty of future downscaled changes are important issues. A better estimation of the uncertainty of downscaled climate change projections can be achieved by using ensembles of several global climate models as drivers, including their ability to simulate the input to the downscaling models.
Comparison between future statistical downscaled climate signals and those derived from dynamical downscaling driven by the same global model, including a complex validation of the regional climate models, gives a measure of the reliability of downscaled regional climate changes.
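The core statistical-downscaling idea above, a transfer function from large-scale predictors to a local predictand, can be sketched in a few lines; the linear form is chosen for brevity even though the article emphasizes nonlinear variants, and all data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins: two large-scale predictors from a GCM (e.g., pressure
# and temperature indices) and one local predictand.
n = 40 * 12                                  # 40 years of monthly data
X = rng.normal(size=(n, 2))                  # standardized predictors
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.7, size=n)

model = LinearRegression().fit(X, y)         # calibrate on "observations"

# Apply the transfer function to (synthetic) future GCM output; the spread
# across ensemble members is one crude measure of downscaling uncertainty.
X_future_ensemble = [rng.normal(loc=0.3, size=(n, 2)) for _ in range(5)]
changes = [model.predict(Xf).mean() - y.mean() for Xf in X_future_ensemble]
print("downscaled change per ensemble member:", np.round(changes, 2))
```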
APA, Harvard, Vancouver, ISO, and other styles
7

Golan, Amos. A Complete Info-Metrics Framework. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199349524.003.0009.

Full text
Abstract:
In this chapter I develop the complete info-metrics framework for inferring problems and theories under all types of uncertainty and missing information. That framework allows for uncertainty in the observed values and about the functional form, as captured by the constraints. Using the derivations of Chapter 8, it also extends the info-metrics framework to include priors. The basic properties of the complete framework are developed as well. Generally speaking, that framework can be viewed as a “meta-theory”—a theory of how to construct theories and consistent models given the available information. This accrues all the benefits of the maximum entropy formalism but additionally accommodates a larger class of problems. The derivations are complemented with a complete visual representation of the info-metrics framework. Theoretical and empirical applications are provided.
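At its core, the maximum entropy formalism that the complete framework extends picks the least-informative distribution consistent with the constraints. A minimal sketch, assuming a single mean constraint on a small discrete support (the support and target mean are arbitrary choices for the example):

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution on {0,...,5} with one mean constraint.
# The exponential-family form p_i proportional to exp(-lam*x_i) follows
# from the maximum entropy formalism; the target mean 1.5 is an assumption.
x = np.arange(6)
target_mean = 1.5

def mean_gap(lam):
    p = np.exp(-lam * x)
    p /= p.sum()
    return p @ x - target_mean

lam = brentq(mean_gap, -10, 10)   # multiplier that matches the constraint
p = np.exp(-lam * x)
p /= p.sum()
print("lambda:", round(lam, 4))
print("max-entropy pmf:", p.round(4))
```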
APA, Harvard, Vancouver, ISO, and other styles
8

Hammer, Mitchell R. Saving Lives. Praeger, 2007. http://dx.doi.org/10.5040/9798216985518.

Full text
Abstract:
The standoff and ultimate tragedy in Waco, Texas, highlights the potential volatility and uncertainty of crisis negotiations and demonstrates the challenges law enforcement officials face as they attempt to resolve these situations. Hammer's book provides a practical negotiation approach (the S.A.F.E. model) that hostage negotiators and first responders can use to help save lives in situations where violence or the threat of violence is present. He identifies methods of interaction and communication during a hostage crisis that help to dispel tension and resolve situations peacefully. Combining approaches from various schools of thought on the topic, and applying the methods to both domestic and international contexts, the author has devised a model that is applicable to many types of crisis negotiations and focuses on interaction, communication, and discourse designed to bring a situation down to a manageable level. Through the analysis of several cases representing domestic, criminal, and suicidal situations, he provides a vivid roadmap to the ways in which crisis negotiation can be used to dispel violence before it takes place.
APA, Harvard, Vancouver, ISO, and other styles
9

Gartzke, Erik A., and Paul Poast. Empirically Assessing the Bargaining Theory of War: Potential and Challenges. Oxford University Press, 2017. http://dx.doi.org/10.1093/acrefore/9780190228637.013.274.

Full text
Abstract:
What explains war? The so-called bargaining approach has evolved quickly in the past two decades, opening up important new possibilities and raising fundamental challenges to previous conventional thinking about the origins of political violence. Bargaining is intended to explain the causes of conflict on many levels, from interpersonal to international. War is not the product of any of a number of variables creating opportunity or willingness, but instead is caused by whatever factors prevent competitors from negotiating the settlements that result from fighting. Conflict is thus a bargaining failure, a socially inferior outcome, but also a determined choice.

Embraced by a growing number of scholars, the bargaining perspective rapidly created a new consensus in some circles. Bargaining theory is radical in relocating at least some of the causes of conflict away from material, cultural, political, or psychological factors and replacing them with states of knowledge about these same material or ideational factors. Approaching conflict as a bargaining failure—produced by uncertainty and incentives to misrepresent, credible commitment problems, or issue indivisibility—is the “state of the art” in the study of conflict.

At the same time, bargaining theories remain largely untested in any systematic sense: theory has moved far ahead of empirics. The bargaining perspective has been favored largely because of compelling logic rather than empirical validity. Despite the bargaining analogy’s wide-ranging influence (or perhaps because of this influence), scholars have largely failed to subject the key causal mechanisms of bargaining theory to systematic empirical investigation. Further progress for bargaining theory, both among adherents and in the larger research community, depends on empirical tests of both core claims and new theoretical implications of the bargaining approach.

The limited amount of systematic empirical research on bargaining theories of conflict is by no means entirely accident or the product of lethargy on the part of the scholarly community. Tests of theories that involve intangible factors like states of belief or perception are difficult to pursue. How does one measure uncertainty? What does learning look like in the midst of a war? When is indivisibility or commitment a problem, and when can it be resolved through other measures, such as ancillary bargains? The challenge before researchers, however, is to surmount these obstacles. To the degree that progress in science is empirical, bargaining theory needs testing.

As should be clear, the dearth of empirical tests of bargaining approaches to the study of conflict leaves important questions unanswered. Is it true, for example, as bargaining theory suggests, that uncertainty leads to the possibility of war? If so, how much uncertainty is required and in what contexts? Which types of uncertainty are most pernicious (and which are perhaps relatively benign)? Under what circumstances are the effects of uncertainty greatest and where are they least critical? Empirical investigation of the bargaining model can provide essential guidance to theoretical work on conflict by identifying insights that can offer intellectual purchase and by highlighting areas of inquiry that are likely to be empirical dead ends. More broadly, the impact of bargaining theory on the study and practice of international relations rests to a substantial degree on the success of efforts to substantiate the perspective empirically.
APA, Harvard, Vancouver, ISO, and other styles
10

Fuhse, Jan. Social Networks of Meaning and Communication. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190275433.001.0001.

Full text
Abstract:
Social structures can be fruitfully studied as networks of social relationships. These should not be conceptualized, and examined, as stable, acultural patterns of ties. Building on relational sociology around Harrison White, the book examines the interplay of social networks and meaning. Social relationships consist of dynamic bundles of expectations about the behavior between particular actors. These expectations come out of the process of communication, and they make for the regularity and predictability of communication, reducing its inherent uncertainty. Like all social structures, relationships and networks are made of expectations that guide social processes, but that continuously change as the result of these processes. Building on Niklas Luhmann, the events in networks can fruitfully be conceptualized as communication, the processing of meaning between actors (rather than emanating from them). Communication draws on a variety of cultural forms to define and negotiate the relationships between actors: relationship frames like “love” and “friendship” prescribe the kinds of interaction appropriate for types of tie; social categories like ethnicity and gender guide the interaction within and between categories of actors; and collective and corporate actors form on the basis of cultural models like “company,” “bureaucracy,” “street gang,” or “social movement.” Such cultural models are diffused in systems of education and in the mass media, but they also institutionalize in communication, with existing patterns of interaction and relationships serving as models for others. Social groups are semi-institutionalized social patterns, with a strong social boundary separating their members from the social environment.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Model type uncertainty"

1

Pérez-Blanco, C. D. "Navigating Deep Uncertainty in Complex Human–Water Systems." In Springer Climate. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-86211-4_20.

Full text
Abstract:
Complex human–water systems are deeply uncertain. Policymakers are not aware of all possible futures (deep uncertainty type 2), while the probability of those futures that can be identified ex-ante is typically unknown (deep uncertainty type 1). In this context, standard decision-making based on a complete probabilistic description of future conditions and optimization of expected performance is no longer appropriate; instead, priority should be given to robustness, through the identification of policies that are (i) insensitive to foreseeable changes in future conditions (classical robustness that addresses deep uncertainty type 1) and (ii) adaptive to unforeseen contingencies (adaptive robustness that addresses deep uncertainty type 2). This research surveys recent advances in (socio-)hydrology and (institutional) economics toward robust decision-making. Despite significant progress, integration among disciplines remains weak and allows only for a fractioned understanding and partial representation of uncertainty. To bridge this gap, I will argue that science needs to further underpin the development and integration of two pieces of ex-ante information: (1) a modeling hierarchy of human–water systems to assess policy performance under alternative scenarios and model settings, so as to navigate deep uncertainty type 1, and (2) a longitudinal accounting and analysis of public transaction costs to navigate deep uncertainty type 2.
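A common way to operationalize classical robustness against deep uncertainty type 1 is scenario-based regret screening. The sketch below is a generic minimax-regret illustration with invented policies, scenarios, and performance numbers, not the chapter's method.

```python
import numpy as np

# Toy robustness screening: policy performance is known per scenario,
# but scenario probabilities are not. All numbers are invented.
#                  scenario:  dry   average  wet
performance = np.array([
    [4.0, 6.0, 7.0],   # policy A: build reservoir
    [5.0, 5.5, 6.0],   # policy B: demand management
    [2.0, 6.5, 9.0],   # policy C: do nothing
])

best = performance.max(axis=0)      # best achievable in each scenario
regret = best - performance         # opportunity loss in each scenario
minimax_choice = regret.max(axis=1).argmin()
print("max regret per policy:", regret.max(axis=1))
print("minimax-regret policy index:", minimax_choice)
```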
APA, Harvard, Vancouver, ISO, and other styles
2

Figueroa-García, Juan Carlos, and Germán Hernández. "A Multiple Means Transportation Model with Type-2 Fuzzy Uncertainty." In Supply Chain Management Under Fuzziness. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-53939-8_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kurata, Masahiro, Jerome P. Lynch, Kincho H. Law, and Liming W. Salvino. "Bayesian Model Updating Approach for Systematic Damage Detection of Plate-Type Structures." In Topics in Model Validation and Uncertainty Quantification, Volume 4. Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-2431-4_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhanqiong, He, Songsak Sriboonchitta, and Dai Jing. "Modeling Dependence Dynamics of Air Pollution: Time Series Analysis Using a Copula Based GARCH Type Model." In Uncertainty Analysis in Econometrics with Applications. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35443-4_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dinvay, Evgueni. "A Stochastic Benjamin-Bona-Mahony Type Equation." In Mathematics of Planet Earth. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18988-3_3.

Full text
Abstract:
Considered herein is a particular nonlinear dispersive stochastic equation. It was introduced recently in Dinvay and Mémin (Proc. R. Soc. A. 478:20220050, 2022) as a model describing surface water waves under location uncertainty. The corresponding noise term is introduced through a Hamiltonian formulation, which guarantees the energy conservation of the flow. Here the initial-value problem is studied.
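For orientation, the classical deterministic Benjamin-Bona-Mahony equation, of which the chapter studies a stochastic perturbation, is usually written as follows (the transport-type noise term of the paper is not reproduced here):

```latex
\[
u_t + u_x + u\,u_x - u_{xxt} = 0 .
\]
```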
APA, Harvard, Vancouver, ISO, and other styles
6

Matei, Alexander, and Stefan Ulbrich. "Detection of Model Uncertainty in the Dynamic Linear-Elastic Model of Vibrations in a Truss." In Lecture Notes in Mechanical Engineering. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77256-7_22.

Full text
Abstract:
Dynamic processes have always been of profound interest for scientists and engineers alike. Often, the mathematical models used to describe and predict time-variant phenomena are uncertain in the sense that governing relations between model parameters, state variables and the time domain are incomplete. In this paper we adopt a recently proposed algorithm for the detection of model uncertainty and apply it to dynamic models. This algorithm combines parameter estimation, optimum experimental design and classical hypothesis testing within a probabilistic frequentist framework. The best setup of an experiment is defined by optimal sensor positions and optimal input configurations which both are the solution of a PDE-constrained optimization problem. The data collected by this optimized experiment then leads to variance-minimal parameter estimates. We develop efficient adjoint-based methods to solve this optimization problem with SQP-type solvers. The crucial test which a model has to pass is conducted over the claimed true values of the model parameters which are estimated from pairwise distinct data sets. For this hypothesis test, we divide the data into k equally-sized parts and follow a k-fold cross-validation procedure. We demonstrate the usefulness of our approach in simulated experiments with a vibrating linear-elastic truss.
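The k-fold consistency idea in the abstract can be illustrated generically: fit the same model on disjoint folds and compare the parameter estimates. The straight-line model and the data below are invented stand-ins; the paper's PDE-constrained, adjoint-based setting is far richer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data from a known linear process plus noise.
t = np.linspace(0, 10, 200)
y = 2.0 * t + rng.normal(scale=1.0, size=t.size)    # simulated measurements

k = 5
estimates = []
for fold in np.array_split(rng.permutation(t.size), k):
    slope, _ = np.polyfit(t[fold], y[fold], 1)      # per-fold estimate
    estimates.append(slope)

est = np.array(estimates)
# If the model were adequate, fold estimates should scatter only within
# their sampling error; a much larger spread hints at model uncertainty.
print("fold estimates:", est.round(3))
print("spread (std):", est.std(ddof=1).round(4))
```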
APA, Harvard, Vancouver, ISO, and other styles
7

Pelz, Peter F., Robert Feldmann, Christopher M. Gehb, et al. "Our Specific Approach on Mastering Uncertainty." In Springer Tracts in Mechanical Engineering. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78354-9_3.

Full text
Abstract:
This chapter serves as an introduction to the main topic of this book, namely to master uncertainty in technical systems. First, the difference of our approach to previous ones is highlighted. We then discuss process chains as an important type of technical systems, in which uncertainty propagates along the chain. Five different approaches to master uncertainty in process chains are presented: uncertainty identification, uncertainty propagation, robust optimisation, sensitivity analysis and model adaption. The influence of the process on uncertainty and methods depends on whether it is dynamic/time-varying and/or active. This brings us to the main strategies for mastering uncertainty: robustness, flexibility and resilience. Finally, three different concrete technical systems that are used to demonstrate our methods are presented.
APA, Harvard, Vancouver, ISO, and other styles
8

Campos, Damián, Andrés Ajras, Lucas Goytiño, and Marcelo Piovan. "Bayesian Inversion of a Non-linear Dynamic Model for Stockbridge Dampers." In Proceedings of the XV Ibero-American Congress of Mechanical Engineering. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-38563-6_1.

Full text
Abstract:
Stockbridge dampers are the devices most widely used for wind-induced vibration control of overhead power transmission lines. This dynamic absorber comprises a carrier cable with a mass at each end and a bolted clamp that can be attached to a conductor or a guard wire, with the purpose of supplementing the energy dissipated by the cable through its self-damping. The maximum response of this type of absorber is associated with the frequencies of its different oscillation modes. The masses are designed to obtain moments of inertia and centre-of-gravity locations such that, when the clamp vibrates, the damper's various characteristic bending and torsional modes are excited. In this work, the calibration of a nonlinear finite element model using Bayesian inference is presented to evaluate the dynamic behavior of the damper across all excitation frequencies and displacement amplitudes. To this end, an inverse problem was posed in which the probability distributions of the parameters of interest are obtained from backward uncertainty propagation of experimental measurements performed in laboratory tests. Finally, the uncertainty of the calibrated model was propagated and contrasted with the experimental data. The developed model is a powerful tool for defining the quantity and distribution of dampers in the span of a line.
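A minimal random-walk Metropolis sampler conveys the flavor of such a Bayesian calibration; the one-parameter forward model, the noise level, and the prior bounds below are toy assumptions, not the paper's finite element model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy frequency-response "forward model" with one stiffness-like parameter.
freq = np.linspace(5.0, 50.0, 20)

def forward(theta):
    return 1.0 / np.abs(theta - freq)

theta_true = 60.0
data = forward(theta_true) + rng.normal(scale=0.002, size=freq.size)

def log_post(theta):
    if not 51.0 < theta < 200.0:          # flat prior away from resonance
        return -np.inf
    r = data - forward(theta)
    return -0.5 * np.sum((r / 0.002) ** 2)

theta, chain = 80.0, []
for _ in range(20_000):
    prop = theta + rng.normal(scale=1.0)  # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

post = np.array(chain[5_000:])            # discard burn-in
print(f"posterior mean {post.mean():.2f}, std {post.std():.2f}")
```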
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Qiang, Yan-Guo Zhou, Kai Liu, and Yun-Min Chen. "Centrifuge Model Tests at Zhejiang University for LEAP-ASIA-2019." In Model Tests and Numerical Simulations of Liquefaction and Lateral Spreading II. Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-48821-4_13.

Full text
Abstract:
Two centrifuge models with the same target relative density (Dr = 65%) were tested at different centrifugal accelerations (30 g for Model-A and 15 g for Model-B) at Zhejiang University (ZJU) to validate the generalized scaling law within the LEAP-ASIA-2019 program. The same model used in LEAP-UCD-2017 was repeated, representing a 5-degree slope consisting of saturated Ottawa F-65 sand. This chapter describes the test facilities, instrumentation layout, and test procedures. Uncertainty analysis is also carried out on the input parameters (e.g., achieved peak ground acceleration, achieved density, and degree of saturation). The test results for acceleration, excess pore water pressure, displacement, etc., were compared at prototype scale to check the validity of the generalized scaling law (GSL). The preliminary experimental results of Zhejiang University show that the Type II generalized scaling law is applicable to the acceleration response but only weakly applicable to the displacement response.
APA, Harvard, Vancouver, ISO, and other styles
10

Bai, Ruolin, Lei Jin, Bin Zhuo, Hai Yan Fu, Jing Liu, and Hui Bin Guo. "Development of a Type-2 Fuzzy Bi-level Programming Model Coupling MCDA Analysis for Water Resources Optimization Under Uncertainty." In Environmental Science and Technology: Sustainable Development. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27431-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Model type uncertainty"

1

Lu, Pai-Chuan, and Mirna Urquidi-Macdonald. "Prediction of IGSCC in Type 304 SS Using an Artificial Neural Network." In CORROSION 1994. NACE International, 1994. https://doi.org/10.5006/c1994-94151.

Full text
Abstract:
An artificial neural network (ANN) has been developed to describe intergranular stress corrosion cracking in sensitized Type 304 SS in high-temperature aqueous solutions. The ANN predictions of crack growth rate (CGR) versus oxygen concentration, flow velocity, stress intensity, hydrogen concentration, and ECP are compared with the predictions of the deterministic Coupled Environment Fracture Model (CEFM). The predictions of these two approaches, which represent the extremes in the spectrum of predictive technologies, are generally in good accord, except that the CGRs obtained from the ANN are up to a factor of three higher than those predicted by the CEFM under some conditions. However, this difference is within the uncertainty of the experimental data used to train the ANN.
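For readers unfamiliar with the approach, a few lines suffice to set up an ANN regression of this general shape; the five inputs echo the abstract, but the network size and the synthetic input-output relation are assumptions, not the paper's trained network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Synthetic stand-in: five scaled inputs (O2, flow velocity, stress
# intensity, H2, ECP) mapped to log crack growth rate. The relation below
# is invented and does not reproduce the CEFM or the study's data.
n = 500
X = rng.uniform(size=(n, 5))
log_cgr = (-9.0 + 2.0 * X[:, 0] + 1.0 * X[:, 4] - 0.5 * X[:, 1]
           + rng.normal(scale=0.1, size=n))

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X, log_cgr)
print("training R^2:", round(ann.score(X, log_cgr), 3))
```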
APA, Harvard, Vancouver, ISO, and other styles
2

Jain, Swati, Andrea N. Sánchez, Shan Guan, et al. "Probabilistic Assessment of External Corrosion Rates in Buried Oil and Gas Pipelines." In CORROSION 2015. NACE International, 2015. https://doi.org/10.5006/c2015-05529.

Full text
Abstract:
Quantitative risk assessment of external corrosion requires an estimate of corrosion rates, which is a challenging task for pipeline engineers because of the uncertainty in data related to environmental and physical variables such as soil type, drainage, soil chemistry, CP effectiveness, coating type, and coating properties. Unfortunately, research into the quantitative assessment of external corrosion rates and the probability of failure of a buried pipeline is limited and has not progressed significantly. The reasons are the complex mechanism of external corrosion, the numerous factors affecting it, and the uncertainty in the knowledge of the variables. There is a need for a probabilistic external corrosion methodology that compiles in one framework field data, multiple analytical methods (i.e., mechanistic models from various sources and multiple risk modelling methods combined in one unified method), and expert knowledge. In this paper a novel model for the quantitative assessment of corrosion rates using the Bayesian network method is proposed. Bayesian networks are graphical models based on cause-consequence relationships that are quantified through conditional probability tables, built from a combination of information available from subject matter experts, mechanistic models, and field data. A case study is presented to assess the probability of failure due to external corrosion of a buried crude oil pipeline located in Eastern China. The model was validated using in-line inspection data.
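The cause-consequence structure of such a Bayesian network can be illustrated with a single corrosion-rate node and two parents; all variables and probabilities below are invented placeholders, not values from the proposed model.

```python
# Hand-rolled two-parent discrete Bayesian network node. All numbers
# are illustrative assumptions.
p_soil = {"clay": 0.6, "sand": 0.4}          # prior: soil type
p_cp = {"effective": 0.7, "poor": 0.3}       # prior: CP effectiveness

# Conditional probability table: P(rate category | soil, CP).
cpt = {
    ("clay", "effective"): {"low": 0.80, "high": 0.20},
    ("clay", "poor"):      {"low": 0.35, "high": 0.65},
    ("sand", "effective"): {"low": 0.90, "high": 0.10},
    ("sand", "poor"):      {"low": 0.55, "high": 0.45},
}

# Marginal P(rate): sum over the parent states.
p_rate = {"low": 0.0, "high": 0.0}
for s, ps in p_soil.items():
    for c, pc in p_cp.items():
        for r in p_rate:
            p_rate[r] += ps * pc * cpt[(s, c)][r]
print("marginal:", p_rate)

# Conditioning on evidence (soil observed as clay) restricts the sum.
p_rate_clay = {r: sum(p_cp[c] * cpt[("clay", c)][r] for c in p_cp)
               for r in p_rate}
print("given clay:", p_rate_clay)
```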
APA, Harvard, Vancouver, ISO, and other styles
3

Heppner, Kevin L., Richard W. Evitts, and John Postlethwaite. "Determining the Crevice Corrosion Incubation Period of Passive Metals for Systems with Moderately High Electrolyte Concentrations – Application of Pitzer's Ionic Interaction Model." In CORROSION 2003. NACE International, 2003. https://doi.org/10.5006/c2003-03691.

Full text
Abstract:
Past research into the mechanism governing the time to active crevice corrosion (the incubation period) of a passive metal crevice has produced theoretical models coupled with the B-dot model, the Debye-Hückel limiting law, and other activity models to correct for non-ideal behavior at moderately high concentrations. In this research, the transport model of Watson and Postlethwaite [1, 2] is coupled with the ionic interaction model of Pitzer [3] to predict the effect of the crevice gap on the iR drop and the chemical activity of the crevice solution. To validate the model, the experimental Type 304 stainless steel crevice of Alavi and Cottis [4] is simulated. Model predictions match the observations of this experimental work within experimental uncertainty. The effect of the crevice gap on a titanium crevice immersed in 0.5 mol/L aqueous NaCl solution at 25°C is also predicted. The iR drop between the crevice tip and mouth, the electrical conductivity of the solution, and the chemical activity increase as the crevice gap decreases. The relationship between the iR drop and the deviation from charge electroneutrality of the solution is investigated.
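For context, the simplest of the activity models named above, the Debye-Hückel limiting law, is a one-liner; note that at the paper's 0.5 mol/L concentration the limiting law is far outside its range of validity (roughly I < 0.01 mol/kg), which is exactly why Pitzer-type interaction models are needed there.

```python
import math

# Debye-Hückel limiting law for the mean activity coefficient.
A = 0.509   # (kg/mol)^0.5, water at 25 °C

def log10_gamma(z_plus, z_minus, ionic_strength):
    return -A * abs(z_plus * z_minus) * math.sqrt(ionic_strength)

# 0.5 mol/kg NaCl (1:1 electrolyte): I = 0.5. The limiting law is only
# valid at much lower ionic strength, so this value is a crude bound.
I = 0.5
print("gamma_± ≈", round(10 ** log10_gamma(1, -1, I), 3))
```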
APA, Harvard, Vancouver, ISO, and other styles
4

Ravikumar, Arjun, and John Lee. "Quantification of Uncertainty in Model Based Type Well Construction." In SPE Latin American and Caribbean Petroleum Engineering Conference. Society of Petroleum Engineers, 2020. http://dx.doi.org/10.2118/199118-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Patino-Callejas, Juan S., Krisna Y. Espinosa-Ayala, and Juan C. Figueroa-Garcia. "A model for goal programming with Type-2 fuzzy uncertainty." In 2015 Workshop on Engineering Applications - International Congress on Engineering (WEA). IEEE, 2015. http://dx.doi.org/10.1109/wea.2015.7370157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ravikumar, Arjun, and W. John Lee. "Model-Based Type Wells Reduce Uncertainty in Production Forecasting for Unconventional Wells." In Unconventional Resources Technology Conference. American Association of Petroleum Geologists, 2020. http://dx.doi.org/10.15530/urtec-2020-3245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kahraman, Cengiz, Başar Öztayşi, and Sezi Cevik Onar. "Photovoltaics Type Selection Using a Projection Model-Based Approach to Intuitionistic Fuzzy Multicriteria Decision Making." In Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016). World Scientific, 2016. http://dx.doi.org/10.1142/9789813146976_0143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Fan, Bin Ding, Shujun Zhao, Jie Liu, Jiaming Cai, and Yunhui Chen. "Bi-level Planning Model of Urban Diamond-type Distribution Network Considering Load Uncertainty." In 2021 IEEE 5th Conference on Energy Internet and Energy System Integration (EI2). IEEE, 2021. http://dx.doi.org/10.1109/ei252483.2021.9713364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Z. M., W. J. Zhang, and W. M. Ma. "Fuzzy Data Type Modeling With EXPRESS." In ASME 2001 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/detc2001/cie-21235.

Full text
Abstract:
Information with imprecision and uncertainty is inherently present in engineering design and manufacturing. The nature of imprecision and uncertainty is incompleteness. The product data model, being at the core of an intelligent manufacturing system, consists of all the data of concern in the product life cycle. Both crisp data and incomplete data may be involved in a product data model. EXPRESS, being a powerful tool for developing a product data model, should therefore be extended for this purpose. This paper extends the data types in EXPRESS to make it possible to represent fuzzy information.
APA, Harvard, Vancouver, ISO, and other styles
10

Yu, Gwo-Ruey, Tzu-Fu Cheng, and Yu-Yan Chen. "LMI-Based Control of Interval Type-2 T-S Fuzzy Systems with Model Uncertainty." In 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2015. http://dx.doi.org/10.1109/smc.2015.74.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Model type uncertainty"

1

Roberson, Madeleine, Kathleen Inman, Ashley Carey, Isaac Howard, and Jameson Shannon. Probabilistic neural networks that predict compressive strength of high strength concrete in mass placements using thermal history. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/44483.

Full text
Abstract:
This study explored the use of artificial neural networks to predict UHPC compressive strengths given thermal history and key mix components. The model developed herein employs Bayesian variational inference using Monte Carlo dropout to convey prediction uncertainty, using 735 datapoints on seven UHPC mixtures collected with a variety of techniques. Datapoints contained a measured compressive strength along with three curing inputs (specimen maturity, maximum temperature experienced during curing, time of maximum temperature) and five mixture inputs to distinguish each UHPC mixture (cement type, silicon dioxide content, mix type, water-to-cementitious-material ratio, and admixture dosage rate). Input analysis concluded that predictions were more sensitive to curing inputs than to mixture inputs. On average, 8.2% of experimental results in the final model fell outside the predicted range, with 67.9% of these cases conservatively underpredicting. The results support that this model methodology is able to make adequate probabilistic predictions within the scope of the provided dataset but is not suited to extrapolating beyond the training data. In addition, the model was vetted using various datasets obtained from the literature to assess its versatility. Overall, this model is a promising advancement towards predicting mechanical properties of high-strength concrete with known uncertainties.
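The Monte Carlo dropout idea, keeping dropout active at prediction time and reading uncertainty from the spread of repeated stochastic forward passes, can be sketched without any deep learning framework; the tiny untrained network below (eight inputs, echoing the three curing plus five mixture inputs) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny fixed network: 8 inputs -> 16 hidden units -> 1 output.
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

def predict(x, p_drop=0.2):
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    mask = rng.uniform(size=h.shape) > p_drop     # dropout stays ON
    h = h * mask / (1.0 - p_drop)                 # inverted dropout scaling
    return (h @ W2 + b2).ravel()

x = rng.normal(size=(1, 8))                       # one input vector
samples = np.array([predict(x) for _ in range(200)])
print(f"mean {samples.mean():.3f}, predictive std {samples.std():.3f}")
```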
APA, Harvard, Vancouver, ISO, and other styles
2

Soar, Philip, Colin Thorne, David Biedenharn, et al. Development and testing of the FRAME tool on a 200-mile reach of the Lower Mississippi River. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49744.

Full text
Abstract:
Understanding the likely long-term evolution of the Lower Mississippi River (LMR) is a challenging mission for the US Army Corps of Engineers (USACE) that remains difficult for conventional river engineering models. A new type of model is currently in development, tasked with revealing uncertainty-bounded trends in sediment transport and channel morphology over annual, decadal, and centennial timescales. The Future River Analysis and Management Evaluation (FRAME) tool is being designed with river managers and planners in mind to provide exploratory insights into plausible river futures and their potential impacts. A unique attribute of the tool is its hybrid interfacing of traditional one-dimensional hydraulic and sediment transport modeling with geomorphic rules for characterizing the morphological response. This report documents the development of a FRAME testbed model for a 200-mile reach of the Mississippi River upstream of Vicksburg, Mississippi. The testbed allowed development and testing of the prototype FRAME tool in a data-rich environment. This work also identified future developments needed to provide river managers and planners with a fully functional tool for delivering insights on long-term morphological response in river channels across a variety of spatial and temporal scales.
APA, Harvard, Vancouver, ISO, and other styles
3

Torres, Marissa, Michael-Angelo Lam, and Matt Malej. Practical guidance for numerical modeling in FUNWAVE-TVD. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/45641.

Full text
Abstract:
This technical note describes the physical and numerical considerations for developing an idealized numerical wave-structure interaction modeling study using the fully nonlinear, phase-resolving Boussinesq-type wave model, FUNWAVE-TVD (Shi et al. 2012). The focus of the study is on the range of validity of input wave characteristics and the appropriate numerical domain properties when inserting partially submerged, impermeable (i.e., fully reflective) coastal structures in the domain. These structures include typical designs for breakwaters, groins, jetties, dikes, and levees. In addition to presenting general numerical modeling best practices for FUNWAVE-TVD, the influence of nonlinear wave-wave interactions on regular wave propagation in the numerical domain is discussed. The scope of coastal structures considered in this document is restricted to a single partially submerged, impermeable breakwater, but the setup and the results can be extended to other similar structures without a loss of generality. The intended audience for these materials is novice to intermediate users of the FUNWAVE-TVD wave model, specifically those seeking to implement coastal structures in a numerical domain or to investigate basic wave-structure interaction responses in a surrogate model prior to considering a full-fledged 3-D Navier-Stokes Computational Fluid Dynamics (CFD) model. From this document, users will gain a fundamental understanding of practical modeling guidelines that will flatten the learning curve of the model and enhance the final product of a wave modeling study. Providing coastal planners and engineers with ease of model access and usability guidance will facilitate rapid screening of design alternatives for efficient and effective decision-making under environmental uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
4

Gunay, Selim, Fan Hu, Khalid Mosalam, et al. Blind Prediction of Shaking Table Tests of a New Bridge Bent Design. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2020. http://dx.doi.org/10.55461/svks9397.

Full text
Abstract:
Considering the importance of the transportation network and bridge structures, the associated seismic design philosophy is shifting from the basic collapse prevention objective to maintaining functionality on the community scale in the aftermath of moderate to strong earthquakes (i.e., resiliency). In addition to performance, the associated construction philosophy is also being modernized, with the utilization of accelerated bridge construction (ABC) techniques to reduce impacts of construction work on traffic, society, economy, and on-site safety during construction. Recent years have seen several developments towards the design of low-damage bridges and ABC. According to the results of conducted tests, these systems have significant potential to achieve the intended community resiliency objectives. Taking advantage of such potential in the standard design and analysis processes requires proper modeling that adequately characterizes the behavior and response of these bridge systems. To evaluate the current practices and abilities of the structural engineering community to model this type of resiliency-oriented bridges, the Pacific Earthquake Engineering Research Center (PEER) organized a blind prediction contest of a two-column bridge bent consisting of columns with enhanced response characteristics achieved by a well-balanced contribution of self-centering, rocking, and energy dissipation. The parameters of this blind prediction competition are described in this report, and the predictions submitted by different teams are analyzed. In general, forces are predicted better than displacements. The post-tension bar forces and residual displacements are predicted with the best and least accuracy, respectively. Some of the predicted quantities are observed to have coefficient of variation (COV) values larger than 50%; however, in general, the scatter in the predictions amongst different teams is not significantly large. Applied ground motions (GM) in shaking table tests consisted of a series of naturally recorded earthquake acceleration signals, where GM1 is found to be the largest contributor to the displacement error for most of the teams, and GM7 is the largest contributor to the force (hence, the acceleration) error. The large contribution of GM1 to the displacement error is due to the elastic response in GM1 and the errors stemming from the incorrect estimation of the period and damping ratio. The contribution of GM7 to the force error is due to the errors in the estimation of the base-shear capacity. Several teams were able to predict forces and accelerations with only moderate bias. Displacements, however, were systematically underestimated by almost every team. This suggests that there is a general problem either in the assumptions made or the models used to simulate the response of this type of bridge bent with enhanced response characteristics. Predictions of the best-performing teams were consistently and substantially better than average in all response quantities. The engineering community would benefit from learning details of the approach of the best teams and the factors that caused the models of other teams to fail to produce similarly good results. Blind prediction contests provide: (1) very useful information regarding areas where current numerical models might be improved; and (2) quantitative data regarding the uncertainty of analytical models for use in performance-based earthquake engineering evaluations. 
Such blind prediction contests should be encouraged for other experimental research activities and are planned to be conducted annually by PEER.
APA, Harvard, Vancouver, ISO, and other styles
5

De Castro-Valderrama, Marcela, Santiago Forero-Alvarado, Nicolás Moreno-Arias, and Sara Naranjo-Saldarriaga. Unraveling the Exogenous Forces Behind Analysts' Macroeconomic Forecasts. Banco de la República, 2021. http://dx.doi.org/10.32468/be.1184.

Full text
Abstract:
Modern macroeconomics focuses on identifying the primitive exogenous forces that generate business cycles. This is at odds with macroeconomic forecasts collected through surveys, which concern endogenous variables. To address this divorce, our paper uses a general equilibrium model as a multivariate filter to infer the shocks behind market analysts' forecasts and thus unravel their implicit macroeconomic stories. By interpreting all analysts' forecasts through the same lens, it is possible to understand differences between projected endogenous variables as differences in the types and magnitudes of shocks. It also makes it possible to explain the market's uncertainty about the future in terms of analysts' disagreement about these shocks. The usefulness of the approach is illustrated by adapting the canonical SOE semi-structural model of Carabenciov et al. (2008a) to Colombia and then using it to filter forecasts from its Central Bank's Monthly Expectations Survey during the COVID-19 crisis.
APA, Harvard, Vancouver, ISO, and other styles
6

Hertel, Thomas, David Hummels, Maros Ivanic, and Roman Keeney. How Confident Can We Be in CGE-Based Assessments of Free Trade Agreements? GTAP Working Paper, 2003. http://dx.doi.org/10.21642/gtap.wp26.

Full text
Abstract:
With the proliferation of Free Trade Agreements (FTAs) over the past decade, demand for quantitative analysis of their likely impacts has surged. The main quantitative tool for performing such analysis is Computable General Equilibrium (CGE) modeling. Yet these models have been widely criticized for performing poorly (Kehoe, 2002) and having weak econometric foundations (McKitrick, 1998; Jorgenson, 1984). FTA results have been shown to be particularly sensitive to the trade elasticities, with small trade elasticities generating large terms-of-trade effects and relatively modest efficiency gains, whereas large trade elasticities lead to the opposite result. Critics are understandably wary of results being determined largely by the authors' choice of trade elasticities.

Where do these trade elasticities come from? CGE modelers typically draw them from econometric work that uses time-series price variation to identify an elasticity of substitution between domestic goods and composite imports (Alaouze, 1977; Alaouze et al., 1977; Stern et al., 1976; Gallaway, McDaniel and Rivera, 2003). This approach has three problems: the use of point estimates as "truth", the magnitude of the point estimates, and estimating the relevant elasticity. First, modelers take point estimates drawn from the econometric literature while ignoring the precision of these estimates. As we make clear below, the confidence one has in various CGE conclusions depends critically on the size of the confidence interval around parameter estimates. Standard "robustness checks", such as systematically raising or lowering the substitution parameters, do not properly address this problem because they ignore information about which parameters we know with some precision and which we do not.

A second problem with most existing studies derives from the prices used: identifying home vs. foreign substitution from import price series, for example, tends to systematically understate the true elasticity. This is because such estimates take price variation as exogenous when estimating the import demand functions and ignore quality variation. When quality is high, import demand and prices will be jointly high, which biases estimated elasticities toward zero. A related point is that the fixed-weight import price series used by most authors are theoretically inappropriate for estimating the elasticities of interest. CGE modelers generally examine a nested utility structure, with domestic production substituting for a CES composite import bundle. The appropriate price series is then the corresponding CES price index among foreign varieties. Constructing such an index requires knowledge of the elasticity of substitution among foreign varieties (see below). By using a fixed-weight import price series, previous estimates place too much weight on high foreign prices and too little weight on low foreign prices. In other words, they overstate the degree of price variation that exists relative to a CES price index. Reconciling small trade-volume movements with large import price movements then requires a small elasticity of substitution. This problem, and that of unmeasured quality variation, helps explain why typical estimated elasticities are very small.

The third problem with the existing literature is that estimates taken from other researchers' studies typically employ different levels of aggregation, and exploit different sources of price variation, from what policy modelers have in mind. Employing elasticities in experiments ill-matched to their original estimation can be problematic. For example, estimates may be calculated at a higher or lower level of aggregation than the level of analysis the modeler wants to examine. Estimating substitutability across sources for paddy rice gives a quite different answer than estimates that look at agriculture as a whole. When analyzing Free Trade Agreements, the principal policy experiment is a change in relative prices among foreign suppliers caused by lowering tariffs within the FTA. Understanding the substitution this will induce across those suppliers is critical to gauging the FTA's real effects. Using home vs. foreign elasticities rather than elasticities of substitution among imports supplied by different countries may be quite misleading. Moreover, these "sourcing" elasticities are critical for constructing the composite import price series needed to appropriately estimate home vs. foreign substitutability.

In summary, the history of estimating the substitution elasticities governing trade flows in CGE models has been checkered at best. Clearly there is a need for improved econometric estimation of these trade elasticities that is well integrated into the CGE modeling framework. This paper provides such estimation and integration, and has several significant merits. First, we choose our experiment carefully. Our CGE analysis focuses on the prospective Free Trade Area of the Americas (FTAA) currently under negotiation. This is one of the most important FTAs currently "in play" in international negotiations. It also fits nicely with the source data used to estimate the trade elasticities, which are largely based on imports into North and South America. Our assessment is done in a perfectly competitive, comparative static setting in order to emphasize the role of the trade elasticities in determining the conventional gains/losses from such an FTA. This type of model is still widely used by government agencies for the evaluation of such agreements. Extensions to incorporate imperfect competition are straightforward but involve the introduction of additional parameters (markups, extent of unexploited scale economies) as well as structural assumptions (entry/no-entry, nature of inter-firm rivalry) that introduce further uncertainty.

Since our focus is on the effects of a PTA, we estimate elasticities of substitution across multiple foreign supply sources. We do not use cross-exporter variation in prices or tariffs alone. Exporter price series exhibit a high degree of multicollinearity and, in any case, would be subject to unmeasured quality variation as described previously. Similarly, tariff variation by itself is typically unhelpful because, by their very nature, Most Favored Nation (MFN) tariffs are non-discriminatory, affecting all suppliers in the same way. Tariff preferences, where they exist, are often difficult to measure, sometimes being confounded by quantitative barriers, restrictive rules of origin, and other restrictions. Instead we employ a unique methodology and data set drawing not only on tariffs but also on bilateral transportation costs for goods traded internationally (Hummels, 1999). Transportation costs vary much more widely than tariffs do, allowing much more precise estimation of the trade elasticities that are central to CGE analysis of FTAs.

We have highly disaggregated commodity trade flow data and are therefore able to provide estimates that precisely match the commodity aggregation scheme employed in the subsequent CGE model. We follow the GTAP Version 5.0 aggregation scheme, which includes 42 merchandise trade commodities covering food products, natural resources, and manufactured goods. With the exception of two primary commodities that are not traded, we are able to estimate trade elasticities for all merchandise commodities that are significantly different from zero at the 95% confidence level. Rather than producing point estimates of the resulting welfare, export, and employment effects, we report confidence intervals instead. These are based on repeated solution of the model, drawing from a distribution of trade elasticity estimates constructed from the econometrically estimated standard errors. There is now a long history of CGE studies based on systematic sensitivity analysis (SSA) (Harrison and Vinod, 1992; Wigle, 1991; Pagon and Shannon, 1987) Ho
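The paper's headline methodological point, replacing point estimates with confidence intervals via systematic sensitivity analysis, reduces to a simple Monte Carlo loop; the welfare response function and all numbers below are invented stand-ins for a full CGE solve, and the CES price index at the end is the standard aggregator discussed above.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed elasticity estimate and its econometric standard error.
sigma_hat, se = 5.0, 0.8

def welfare_gain(sigma, tariff_cut=0.10):
    # Placeholder response surface standing in for a full CGE solve:
    # gains rise with the elasticity of substitution.
    return 100 * tariff_cut * (1 - 1 / sigma)

draws = rng.normal(sigma_hat, se, size=10_000)
draws = draws[draws > 1.0]                 # keep economically valid draws
gains = welfare_gain(draws)
lo, hi = np.percentile(gains, [2.5, 97.5])
print(f"welfare gain 95% interval: [{lo:.2f}, {hi:.2f}]")

# CES price index over foreign varieties:
# P = (sum_i w_i * p_i**(1 - sigma)) ** (1 / (1 - sigma))
p = np.array([1.0, 1.2, 0.9]); w = np.array([0.5, 0.3, 0.2])
P = (w @ p ** (1 - sigma_hat)) ** (1 / (1 - sigma_hat))
print("CES import price index:", round(P, 3))
```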
APA, Harvard, Vancouver, ISO, and other styles
7

Burner, Ryan, Alan Kirschbaum, Ted Gostomski, and David Peitz. US national park units as breeding bird habitat: A comparison of species prevalence and land cover across the midwestern and central United States. National Park Service, 2025. https://doi.org/10.36967/2312602.

Full text
Abstract:
The value of national parks as bird habitat depends not only on local conditions within the parks, but also on the landscape habitat matrices in which they are located. However, the influences of local and landscape habitat matrices on birds vary by species and have not been quantified. Similarly, the trends of land cover types through time have not been systematically quantified for Midwest Region national parks and the landscapes around them, despite evidence of ongoing habitat loss exacerbated by climate change and human population growth. Managers and policy makers can use this information to understand and sustain the contribution of parks to our Nation’s avifauna. We developed models using North American Breeding Bird Survey (BBS) data collected on routes from across the central United States. The models were used to predict occupancy of bird species of concern in 32 national park units across nine Bird Conservation Regions in the Midwest based on land cover in and around those parks. We then compared these predictions with data collected through National Park Service (NPS) bird surveys at each park to determine if bird species of concern were more or less prevalent than expected. In each park, the mean difference between observed species detections and mean predicted detections indicates that most species are less frequently detected in the parks than predicted. However, when the range of uncertainty of predictions is considered, only 21% of park-bird combinations showed strong evidence (95%) of differing from expectation. Of these, species were less common than expected in the park in all but two cases. These results indicate that some bird species of concern occupy sites in Midwest Region national park units at a rate roughly comparable to sites with similar land cover in the Bird Conservation Region (BCR) in which they occur. However, for one in five species-park combinations, parks appear to be less occupied than comparable sites elsewhere.
APA, Harvard, Vancouver, ISO, and other styles
8

Putriastuti, Massita Ayu Cindy, Vivi Fitriyanti, and Muhammad Razin Abdullah. Leveraging the Potential of Crowdfunding for Financing Renewable Energy. Purnomo Yusgiantoro Center, 2021. http://dx.doi.org/10.33116/br.002.

Full text
Abstract:
• Renewable energy (RE) projects in Indonesia usually have an IRR between 10% and 15% and a payback period (PP) of around 6 to 30 years.
• Attractive returns are usually found in large-scale RE projects, although numerous other factors are involved, including technology developments, capacity scale, power purchase price agreements, project locations, as well as interest rates and applied incentives.
• Crowdfunding (CF) has great potential to contribute to the financing of RE projects, especially small-scale RE projects.
• P2P lending usually targets short-term loans with high interest rates. Therefore, it cannot be employed as alternative financing for RE projects in Indonesia.
• Three types of CF can be employed as alternatives for RE project funding in Indonesia: securities-, reward-, and donation-based CF. In addition, hybrid models such as securities-reward and reward-donation could also be explored, depending on project profitability.
• Securities crowdfunding (SCF) offers several benefits compared to conventional banking and P2P lending: (1) the issuer does not need to pledge assets as collateral; (2) the issuer is not required to pay an instalment each month; (3) the issuer shares risks with investors, with no obligation to cover investors' losses; (4) it is applicable to micro, small, and medium enterprises (MSMEs) without complex requirements; and (5) it offers the possibility of attracting investors who bring specific value.
• Several challenges need to be tackled: (1) the uncertainty of RE regulations; (2) the issuer's inability to manage the system and business; (3) the absence of third parties bridging the CF platform and potential issuers among RE project owners; (4) the lack of financial literacy of potential funders; and (5) the inadequacy of studies on potential funders for escalating RE utilisation in Indonesia.
APA, Harvard, Vancouver, ISO, and other styles
9

Mayfield, Colin. Capacity Development in the Water Sector: the case of Massive Open On-line Courses. United Nations University Institute for Water, Environment and Health, 2017. http://dx.doi.org/10.53328/mwud6984.

Full text
Abstract:
The Sustainable Development Goal 6 targets are all dependent on capacity development as outlined in SDG 6a “Expand international cooperation and capacity-building support to developing countries in water- and sanitation related activities and programmes “. Massive Open On-line Courses (MOOCs) and distance learning in general have a significant role to play in this expansion. This report examines the role that MOOCs and similar courses could play in capacity development in the water sector. The appearance of MOOCs in 2010/11 led within 4 years to a huge increase in this type of course and in student enrollment. Some problems with student dropout rates, over-estimating the transformational and disruptive nature of MOOCs and uncertain business models remain, but less “massive” MOOCs with more engaged students are overcoming these problems. There are many existing distance learning courses and programmes in the water sector designed to train and/ or educate professionals, operators, graduate and undergraduate students and, to a lesser extent, members of communities dealing with water issues. There are few existing true MOOCs in the water sector. MOOCs could supply significant numbers of qualified practitioners for the water sector. A suite of programmes on water-related topics would allow anyone to try the courses and determine whether they were appropriate and useful. If they were, the students could officially enroll in the course or programme to gain a meaningful qualification or simply to upgrade their qualifications. To make MOOCs more relevant to education and training in the water sector an analysis of the requirements in the sector and the potential demand for such courses is required. Cooperation between institutions preparing MOOCs would be desirable given the substantial time and funding required to produce excellent quality courses. One attractive model for cooperation would be to produce modules on all aspects of water and sanitation dealing with technical, scientific, social, legal and management topics. These should be produced by recognized experts in each field and should be “stand-alone” or complete in themselves. If all modules were made freely available, users or mentors could assemble different MOOCs by linking relevant modules. Then extracts, simplified or less technical versions of the modules could then be used to produce presentations to encourage public participation and for other training purposes. Adaptive learning, where course materials are more tailored to individual students based on their test results and reactions to the material, can be an integral part of MOOCs. MOOCs efficiently provide access to quality courses at low or no cost to students around the world, they enable students to try courses at their convenience, they can be tailored to both professional and technical aspects, and they are very suitable to provide adaptive learning courses. Cooperation between institutions would provide many course modules for the water sector that collectively could provide excellent programmes to address the challenges of capacity development for SDG 6 and other issues within the water sector.
APA, Harvard, Vancouver, ISO, and other styles
10

Steudlein, Armin, Besrat Alemu, T. Matthew Evans, et al. PEER Workshop on Liquefaction Susceptibility. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2023. http://dx.doi.org/10.55461/bpsk6314.

Full text
Abstract:
Evaluation of seismic ground failure potential from liquefaction is generally undertaken in three steps. First, a susceptibility evaluation determines if the soil in a particular layer is in a condition where liquefaction triggering could potentially occur. This is followed by a triggering evaluation to estimate the likelihood of triggering given anticipated seismic demands, environmental conditions pertaining to the soil layer (e.g., its depth relative to the ground water table), and the soil state. For soils where triggering can be anticipated, the final step involves assessments of the potential for ground failure and its impact on infrastructure systems. This workshop was dedicated to the first of these steps, which often plays a critical role in delineating risk for soil deposits with high fines contents and clay-silt-sand mixtures of negligible to moderate plasticity. The workshop was hosted at Oregon State University on September 8-9, 2022, and was attended by 49 participants from the research, practice, and regulatory communities. Through pre-workshop polls, extended abstracts, workshop presentations, and workshop breakout discussions, it was demonstrated that leaders in the liquefaction community do not share a common understanding of the term “susceptibility” as applied to liquefaction problems. The primary distinction between alternate views concerns whether environmental conditions and soil state provide relevant information for a susceptibility evaluation, or if susceptibility is a material characteristic. For example, a clean, dry, dense sand in a region of low seismicity is very unlikely to experience triggering of liquefaction and would be considered not susceptible by adherents of a definition that considers environmental conditions and state. The alternative, and recommended, definition focusing on material susceptibility would consider the material as susceptible and would defer consideration of saturation, state, and loading effects to a separate triggering analysis. This material susceptibility definition has the advantage of maintaining a high degree of independence between the parameters considered in the susceptibility and triggering phases of the ground failure analysis. There are differences between current methods for assessing material susceptibility: the databases include varying amounts of test data, the materials considered are distinct (from different regions) and have been tested using different procedures, and the models can be interpreted as providing different outcomes in some cases. The workshop reached a clear consensus that new procedures are needed and that they should be developed using a new research approach. The recommended approach involves assembling a database of information from sites for which in situ test data are available (borings with samples, CPTs), cyclic test data are available from high-quality specimens, and a range of index tests are available for important layers. It is not necessary that the sites have experienced earthquake shaking for which field performance is known, although such information is of interest where available. A considerable amount of data of this type is available from prior research studies and from detailed geotechnical investigations performed for project sites by leading geotechnical consultants.
Once assembled and made available, this data would allow for the development of models to predict the probability of material susceptibility given various independent variables (e.g., in situ test indices, laboratory index parameters) and the epistemic uncertainty of the predictions. Such studies should be conducted in an open, transparent manner utilizing a shared database, which is a hallmark of the Next Generation Liquefaction (NGL) project.
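One plausible concrete form for the recommended models is a probabilistic classifier whose coefficient uncertainty stands in for the epistemic uncertainty the abstract mentions. The sketch below uses a logistic model with a single invented predictor (a plasticity index, PI); the coefficients, their spread, and all numbers are assumptions for illustration, not workshop results.

```python
# A minimal sketch of the kind of probabilistic susceptibility model the
# workshop recommends: a logistic regression mapping an index parameter
# (here a hypothetical plasticity index, PI) to the probability that a
# material is susceptible. Coefficients and data are invented.

import math
import random

def p_susceptible(pi_value, b0, b1):
    """Logistic model: probability of material susceptibility given PI."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * pi_value)))

# Invented "fitted" coefficients (e.g., from a shared NGL-style database).
b0, b1 = 3.0, -0.25  # assumed: susceptibility decreases with increasing PI

# Epistemic uncertainty: sample coefficient pairs (standing in for a
# posterior or bootstrap) and report the spread in the predicted probability.
random.seed(0)
samples = [(random.gauss(b0, 0.5), random.gauss(b1, 0.05)) for _ in range(1000)]

pi_test = 12.0
probs = sorted(p_susceptible(pi_test, a, b) for a, b in samples)
lo, med, hi = probs[25], probs[500], probs[974]  # ~95% interval and median
print(f"PI={pi_test}: P(susceptible) ~ {med:.2f} (95% band {lo:.2f}-{hi:.2f})")
```

In an actual study the coefficient samples would come from fitting to the shared database (e.g., a Bayesian posterior or bootstrap replicates), and several in situ and laboratory indices would enter the model jointly.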
APA, Harvard, Vancouver, ISO, and other styles