Academic literature on the topic 'Deterministic methods for XAI'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deterministic methods for XAI.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Deterministic methods for XAI"

1

Bhaskaran, Venkatsubramaniam, and Pallav Kumar Baruah. "A Novel Approach to Explainable AI using Formal Concept Lattice." International Journal of Innovative Technology and Exploring Engineering (IJITEE) 11, no. 7 (2022): 36–48. https://doi.org/10.35940/ijitee.G9992.0611722.

Full text
Abstract:
Current approaches in explainable AI use an interpretable model to approximate a black box model or use gradient techniques to determine the salient parts of the input. While it is true that such approaches provide intuition about the black box model, the primary purpose of an explanation is to be exact at an individual instance and also from a global perspective, which is difficult to obtain using such model-based approximations or from salient parts. On the other hand, traditional, deterministic approaches satisfy this primary purpose of explainability of being exact at an individual instance and globally, while posing a challenge to scale for large amounts of data. In this work, we propose a novel, deterministic approach to explainability using a formal concept lattice for classification problems that reveals accurate explanations both globally and locally, including the generation of similar and contrastive examples around an instance. This technique consists of preliminary lattice construction, synthetic data generation using implications from the preliminary lattice, followed by actual lattice construction, which is used to generate local, global, similar and contrastive explanations. Using sanity tests like Implementation Invariance, Input Transformation Invariance, Model parameter randomization sensitivity and model-outcome relationship randomization sensitivity, its credibility is proven. Explanations from the lattice are compared to a white box model in order to prove its trustworthiness.
APA, Harvard, Vancouver, ISO, and other styles
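The lattice-based explanations described above rest on formal concept analysis. As a rough illustration (a hypothetical toy context, not the authors' implementation), the formal concepts such a lattice is built from can be derived deterministically with a closure computation:

```python
from itertools import combinations

# Toy object-attribute context (NOT the paper's data): which samples (objects)
# exhibit which binary attributes.
CONTEXT = {
    "x1": {"a", "b"},
    "x2": {"a", "c"},
    "x3": {"a", "b", "c"},
}
ATTRS = {"a", "b", "c"}

def extent(attrs):
    """All objects that share every attribute in `attrs`."""
    return {o for o, feats in CONTEXT.items() if attrs <= feats}

def intent(objs):
    """All attributes shared by every object in `objs`."""
    return set.intersection(*(CONTEXT[o] for o in objs)) if objs else set(ATTRS)

def concepts():
    """Enumerate formal concepts (extent, intent) by closing attribute subsets."""
    found, result = set(), []
    for r in range(len(ATTRS) + 1):
        for subset in combinations(sorted(ATTRS), r):
            objs = extent(set(subset))
            if frozenset(objs) not in found:
                found.add(frozenset(objs))
                result.append((objs, intent(objs)))
    return result

for ext, itt in concepts():
    print(sorted(ext), "<->", sorted(itt))
```

Each printed pair is a formal concept: a maximal set of objects together with all attributes they share; ordering these pairs by extent inclusion yields the concept lattice.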
2

Goričan, Tomaž, Milan Terčelj, and Iztok Peruš. "New Approach for Automated Explanation of Material Phenomena (AA6082) Using Artificial Neural Networks and ChatGPT." Applied Sciences 14, no. 16 (2024): 7015. http://dx.doi.org/10.3390/app14167015.

Full text
Abstract:
Artificial intelligence methods, especially artificial neural networks (ANNs), have increasingly been utilized for the mathematical description of physical phenomena in (metallic) material processing. Traditional methods often fall short in explaining the complex, real-world data observed in production. While ANN models, typically functioning as “black boxes”, improve production efficiency, a deeper understanding of the phenomena, akin to that provided by explicit mathematical formulas, could enhance this efficiency further. This article proposes a general framework that leverages ANNs (i.e., Conditional Average Estimator—CAE) to explain predicted results alongside their graphical presentation, marking a significant improvement over previous approaches and those relying on expert assessments. Unlike existing Explainable AI (XAI) methods, the proposed framework mimics the standard scientific methodology, utilizing minimal parameters for the mathematical representation of physical phenomena and their derivatives. Additionally, it analyzes the reliability and accuracy of the predictions using well-known statistical metrics, transitioning from deterministic to probabilistic descriptions for better handling of real-world phenomena. The proposed approach addresses both aleatory and epistemic uncertainties inherent in the data. The concept is demonstrated through the hot extrusion of aluminum alloy 6082, where CAE ANN models and predicts key parameters, and ChatGPT explains the results, enabling researchers and/or engineers to better understand the phenomena and outcomes obtained by ANNs.
APA, Harvard, Vancouver, ISO, and other styles
3

Moradi, A., M. Satari, and M. Momeni. "INDIVIDUAL TREE OF URBAN FOREST EXTRACTION FROM VERY HIGH DENSITY LIDAR DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 337–43. http://dx.doi.org/10.5194/isprsarchives-xli-b3-337-2016.

Full text
Abstract:
Airborne LiDAR (Light Detection and Ranging) data have a high potential to provide 3D information about trees. Most proposed methods for extracting individual trees first detect tree top or bottom points and then use them as starting points in a segmentation algorithm. Hence, in these methods, the number and locations of the detected peak points heavily affect the process of detecting individual trees. In this study, a new method is presented to extract individual tree segments using LiDAR points with 10 cm point density. In this method, a two-step strategy is performed for the extraction of individual tree LiDAR points: finding deterministic segments of individual tree points and allocating the remaining LiDAR points based on these segments. This research is performed on two study areas in Zeebrugge, Bruges, Belgium (51.33° N, 3.20° E). The accuracy assessment of this method showed that it correctly classified 74.51% of trees, with 21.57% and 3.92% under- and over-segmentation errors respectively.
APA, Harvard, Vancouver, ISO, and other styles
4

Xiang, Yiheng, Yanghe Liu, Xiangxi Zou, Tao Peng, Zhiyuan Yin, and Yufeng Ren. "Post-Processing Ensemble Precipitation Forecasts and Their Applications in Summer Streamflow Prediction over a Mountain River Basin." Atmosphere 14, no. 11 (2023): 1645. http://dx.doi.org/10.3390/atmos14111645.

Full text
Abstract:
Ensemble precipitation forecasts (EPFs) can help to extend lead times and provide reliable probabilistic forecasts, which have been widely applied for streamflow predictions by driving hydrological models. Nonetheless, inherent biases and under-dispersion in EPFs require post-processing for accurate application. It is imperative to explore the skillful lead time of post-processed EPFs for summer streamflow predictions, particularly in mountainous regions. In this study, four popular EPFs, i.e., the CMA, ECMWF, JMA, and NCEP, were post-processed by two state-of-the-art methods, i.e., the Bayesian model averaging (BMA) and generator-based post-processing (GPP) methods. These refined forecasts were subsequently integrated with the Xin'anjiang (XAJ) model for summer streamflow prediction. The performances of precipitation forecasts and streamflow predictions were comprehensively evaluated before and after post-processing. The results reveal that raw EPFs frequently deviate from ensemble mean forecasts, particularly underestimating torrential rain. There are also clear underestimations of uncertainty in their probabilistic forecasts. Among the four EPFs, the ECMWF outperforms its peers, delivering skillful precipitation forecasts for 1–7 lead days and streamflow predictions for 1–4 lead days. The effectiveness of post-processing methods varies, yet both GPP and BMA address the under-dispersion of EPFs effectively. The GPP method, recommended as the superior method, can effectively improve both deterministic and probabilistic forecasting accuracy. Moreover, the ECMWF post-processed by GPP extends the effective lead time to seven days and reduces the underestimation of peak flows. The findings of this study underscore the potential benefits of adeptly post-processed EPFs, providing a reference for streamflow prediction over mountain river basins.
APA, Harvard, Vancouver, ISO, and other styles
5

Kupidura, P. "COMPARISON OF FILTERS DEDICATED TO SPECKLE SUPPRESSION IN SAR IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 269–76. http://dx.doi.org/10.5194/isprsarchives-xli-b7-269-2016.

Full text
Abstract:
This paper presents the results of research on the effectiveness of different filtering methods dedicated to speckle suppression in SAR images. The tests were performed on RadarSat-2 images and on an artificial image treated with simulated speckle noise. The research analysed the performance of particular filters related to the effectiveness of speckle suppression and to the ability to preserve image details and edges. Speckle is a phenomenon inherent to radar images – a deterministic noise connected with land cover type, but also causing significant changes in the digital numbers of pixels. As a result, it may affect interpretation, classification and other processes concerning radar images. Speckle, resembling "salt and pepper" noise, has the form of a set of relatively small groups of pixels with values markedly different from the values of other pixels representing the same type of land cover. Suppression of this noise may also cause suppression of small image details; therefore, the ability to preserve the important parts of an image was analysed as well. In the present study, selected filters were tested: methods dedicated particularly to speckle noise suppression (Frost, Gamma-MAP, Lee, Lee-Sigma, Local Region), general filtering methods which might be effective in this respect (Mean, Median), and morphological filters (alternate sequential filters with multiple structuring elements and by reconstruction). The analysis presented in this paper compared the effectiveness of the different filtering methods. It proved that some of the dedicated radar filters are efficient tools for speckle suppression, but also demonstrated a significant efficiency of the morphological approach, especially its ability to preserve image details.
APA, Harvard, Vancouver, ISO, and other styles
6

Kedar, Mayuri Manish. "Exploring the Effectiveness of SHAP over other Explainable AI Methods." International Journal of Scientific Research in Engineering and Management 08, no. 06 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem35556.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical domain to demystify the opaque decision-making processes of machine learning models, fostering trust and understanding among users. Among various XAI methods, SHAP (SHapley Additive exPlanations) has gained prominence for its theoretically grounded approach and practical applicability. The paper presents a comprehensive exploration of SHAP's effectiveness compared to other prominent XAI methods. Methods such as LIME (Local Interpretable Model-agnostic Explanations), permutation importance, Anchors and partial dependence plots are examined for their respective strengths and limitations. Through a detailed analysis of their principles, strengths, and limitations, reviewing different research papers based on some important factors of XAI, the paper aims to provide insights into the effectiveness and suitability of these methods. The study offers valuable guidance for researchers and practitioners seeking to incorporate XAI into their AI systems. Keywords: SHAP, XAI, LIME, permutation importance, Anchors, partial dependence plots
APA, Harvard, Vancouver, ISO, and other styles
7

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predictions and human understanding. Keywords: Explainable AI (XAI), Interpretable Machine Learning, Transparent AI, AI Transparency, Interpretability in AI, Ethical AI, Explainable Machine Learning Models, Model Transparency, AI Accountability, Trustworthy AI, AI Ethics, XAI Techniques, LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), Rule-based Explanation, Post-hoc Explanation, AI and Society, Human-AI Collaboration, AI Regulation, Trust in Artificial Intelligence.
APA, Harvard, Vancouver, ISO, and other styles
8

Mohseni, Sina, Niloofar Zarei, and Eric D. Ragan. "A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems." ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (2021): 1–45. http://dx.doi.org/10.1145/3387166.

Full text
Abstract:
The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. Aiming to support diverse design goals and evaluation methods in XAI research, after a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of XAI design goals and evaluation methods. Our categorization presents the mapping between design goals for different XAI user groups and their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
APA, Harvard, Vancouver, ISO, and other styles
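Several of the journal articles above evaluate SHAP-style attributions. For a handful of features, Shapley values can be computed exactly and deterministically by averaging marginal contributions over all feature orderings; the sketch below uses a hypothetical additive model (not any paper's code) so the result is easy to verify by hand:

```python
from itertools import permutations

def exact_shapley(model, x, baseline):
    """Exact Shapley values by averaging marginal contributions over every
    feature ordering. Exponential cost: only viable for a few features."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)            # start from the baseline input
        prev = model(z)
        for i in order:
            z[i] = x[i]               # reveal feature i
            cur = model(z)
            phi[i] += cur - prev      # marginal contribution of feature i
            prev = cur
    return [p / len(perms) for p in phi]

# Hypothetical model: a simple additive scorer, so the Shapley values are
# exactly the per-feature contributions.
model = lambda z: 2.0 * z[0] + 3.0 * z[1] - 1.0 * z[2]
phi = exact_shapley(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # for this linear model: [2.0, 3.0, -1.0]
```

By the efficiency property, the attributions sum to `model(x) - model(baseline)`; SHAP's sampling estimators approximate exactly this quantity when enumerating all orderings is infeasible.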
More sources

Dissertations / Theses on the topic "Deterministic methods for XAI"

1

Jonsson, Ewerbring Marcus. "Explainable Deep Learning Methods for Market Surveillance." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300156.

Full text
Abstract:
Deep learning methods have the ability to accurately predict and interpret what data represent. However, the decision making of a deep learning model is not comprehensible for humans. This is a problem for sectors like market surveillance, which need clarity in the decision making of the algorithms used. This thesis aimed to investigate how a deep learning model can be constructed to make its decision making humanly comprehensible, and to investigate the potential impact on classification performance. A literature study was performed and publicly available explanation methods were collected. The explanation methods LIME, SHAP, model distillation and SHAP TreeExplainer were implemented and evaluated on a ResNet trained on three different time-series datasets. A decision tree was used as the student model for model distillation, where it was trained with both soft and hard labels. A survey was conducted to evaluate whether the explanation methods could increase comprehensibility. The results were that all methods could improve comprehensibility for people with experience in machine learning. However, none of the methods could provide full comprehensibility and clarity of the decision making. The model distillation reduced the performance compared to the ResNet model and did not improve the performance of the student model.
APA, Harvard, Vancouver, ISO, and other styles
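One of the explanation methods evaluated in the thesis above is model distillation, where an interpretable student mimics a black-box teacher. A minimal, self-contained sketch of the idea (with a hypothetical one-dimensional teacher and a depth-1 decision stump, not the thesis code) looks like:

```python
# Minimal model-distillation sketch: fit a depth-1 decision stump "student"
# to the labels produced by a hypothetical black-box "teacher".
def teacher(x):
    return 1 if x > 0.37 else 0        # stand-in for a trained classifier

def fit_stump(xs, labels):
    """Pick the threshold minimising disagreement with the teacher's labels."""
    best_t, best_errs = None, float("inf")
    for t in xs:                        # candidate thresholds at data points
        errs = sum((1 if x > t else 0) != y for x, y in zip(xs, labels))
        if errs < best_errs:
            best_t, best_errs = t, errs
    return best_t

xs = [i / 100 for i in range(100)]
labels = [teacher(x) for x in xs]       # distillation dataset from the teacher
threshold = fit_stump(xs, labels)
student = lambda x: 1 if x > threshold else 0
agreement = sum(student(x) == teacher(x) for x in xs) / len(xs)
print(threshold, agreement)
```

The student's single threshold is directly readable, which is the point of distillation: fidelity (here, the agreement rate) measures how much of the teacher's behaviour the interpretable surrogate actually captures.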
2

Minoukadeh, Kimiya. "Deterministic and stochastic methods for molecular simulation." PhD thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00597694.

Full text
Abstract:
Molecular simulation is an essential tool in understanding complex chemical and biochemical processes, as real-life experiments prove increasingly costly or infeasible in practice. This thesis is devoted to methodological aspects of molecular simulation, with a particular focus on computing transition paths and their associated free energy profiles. The first part is dedicated to computational methods for reaction path and transition state searches on a potential energy surface. In Chapter 3 we propose an improvement to a widely-used transition state search method, the Activation Relaxation Technique (ART). We also present a local convergence study of a prototypical algorithm. The second part is dedicated to free energy computations. We focus in particular on an adaptive importance sampling technique, the Adaptive Biasing Force (ABF) method. The first contribution to this field, presented in Chapter 5, consists in showing the applicability of a new parallel implementation, named multiple-walker ABF (MW-ABF), to a large molecular system. Numerical experiments demonstrated the robustness of MW-ABF against artefacts arising due to poorly chosen or oversimplified reaction coordinates. These numerical findings inspired a new study of the long-time convergence of the ABF method, as presented in Chapter 6. By studying a slightly modified model, we back our numerical results by showing a faster theoretical rate of convergence of ABF than was previously shown.
APA, Harvard, Vancouver, ISO, and other styles
3

Bagheri, Mehdi. "Block stability analysis using deterministic and probabilistic methods." Doctoral thesis, KTH, Jord- och bergmekanik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-49447.

Full text
Abstract:
This thesis presents a discussion of design tools for analysing block stability around a tunnel. First, it was determined that joint length and field stress have a significant influence on estimating block stability. The results of calculations using methods based on kinematic limit equilibrium (KLE) were compared with the results of filtered DFN-DEM, which are closer to reality. The comparison shows that none of the KLE approaches – conventional, limited joint length, limited joint length with stress and probabilistic KLE – could provide results similar to DFN-DEM. This is due to KLE's unrealistic assumptions in estimating either volume or clamping forces. A simple mechanism for estimating clamping forces such as continuum mechanics or the solution proposed by Crawford-Bray leads to an overestimation of clamping forces, and thus unsafe design. The results of such approaches were compared to those of DEM, and it was determined that these simple mechanisms ignore a key stage of relaxation of clamping forces due to joint existence. The amount of relaxation is a function of many parameters, such as the stiffness of the joint and surrounding rock, the joint friction angle and the block half-apical angle. Based on a conceptual model, the key stage was considered in a new analytical solution for symmetric blocks, and the amount of joint relaxation was quantified. The results of the new analytical solution were compared to those of DEM, and the model uncertainty of the new solution was quantified. Further numerical investigations based on local and regional stress models were performed to study initial clamping forces. Numerical analyses reveal that local stresses, which are a product of regional stress and joint stiffness, govern block stability. Models with a block assembly show that the clamping forces in a block assembly are equal to the clamping forces in a regional stress model.
Therefore, considering a single block in massive rock results in lower clamping forces and thus safer design compared to a block assembly in the same condition of in-situ stress and properties. Furthermore, a sensitivity analysis was conducted to determine the most important parameter by assessing sensitivity factors and studying the applicability of the partial coefficient method for designing block stability. It was determined that the governing parameter is the dispersion of the half-apical angle. For a dip angle with a high dispersion, partial factors become very large and the design value for clamping forces is close to zero. This suggests that in cases with a high dispersion of the half-apical angle, the clamping forces could be ignored in a stability analysis, unlike in cases with a lower dispersion. The costs of gathering more information about the joint dip angle could be compared to the costs of overdesign. The use of partial factors is uncertain, at least without dividing the problem into sub-classes. The application of partial factors is possible in some circumstances but not always, and a FORM analysis is preferable.
APA, Harvard, Vancouver, ISO, and other styles
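The limit-equilibrium reasoning discussed above can be illustrated with the textbook planar-sliding case (a deliberate simplification with invented numbers, not the thesis model): a block resting on a joint is stable when frictional resistance exceeds the down-dip component of its weight:

```python
import math

# Textbook planar-sliding limit-equilibrium check for a cohesionless joint:
# factor of safety = resisting friction force / driving force.
def factor_of_safety(weight, dip_deg, phi_deg):
    dip, phi = math.radians(dip_deg), math.radians(phi_deg)
    normal = weight * math.cos(dip)      # force clamping the block to the joint
    driving = weight * math.sin(dip)     # force pulling the block down-dip
    return (normal * math.tan(phi)) / driving

# Hypothetical block: joint dipping 30 deg, friction angle 35 deg.
fos = factor_of_safety(weight=100.0, dip_deg=30.0, phi_deg=35.0)
print(fos)  # > 1, i.e. stable, since phi exceeds the dip angle
```

For this cohesionless case the weight cancels and the factor of safety reduces to tan(phi)/tan(dip), which is why the dispersion of the angle parameters, as the thesis finds for the half-apical angle, dominates the reliability of the design.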
4

Angstmann, Christopher N. "Deterministic and associated stochastic methods for dynamical systems." University of New South Wales, Physics, 2009. http://handle.unsw.edu.au/1959.4/44401.

Full text
Abstract:
An introduction to periodic orbit techniques for deterministic dynamical systems is presented. The Farey map is considered as an example of intermittency in one-dimensional maps, and the effect of intermittency on the Markov partition is considered. The Gauss map is shown to be related to the Farey map by a simple transformation of trajectories. A method of calculating periodic orbits in the thermostated Lorentz gas is derived. This method relies on minimising the action from the Hamiltonian description of the Lorentz gas, as well as the construction of a generating partition of the phase space. This method is employed to examine a range of bifurcation processes in the Lorentz gas. A novel construction of the Sinai billiard is performed by using symmetry arguments to reduce two particles in a hard-walled box to the square Sinai billiard. Infinite families of periodic orbits are found, even at the lowest order, due to the intermittency of the system. The contribution of these orbits is examined and found to be tractable at the lowest order, but the number of orbits grows too quickly for consideration of any other terms in the periodic orbit expansion. A simple stochastic model for the diffusion in the Lorentz gas was constructed. The model produced a diffusion coefficient that was a remarkably good fit to more precise numerical calculations, a significant improvement on the Machta-Zwanzig approximation for the diffusion coefficient. We outline a general approach to constructing stochastic models of deterministic dynamical systems, which should allow calculations to be performed in more complicated systems.
APA, Harvard, Vancouver, ISO, and other styles
5

Cormican, Kelly James. "Computational methods for deterministic and stochastic network interdiction problems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA297596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mosbach, Sebastian. "Explicit stochastic and deterministic simulation methods for combustion chemistry." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Díaz Cereceda, Cristina. "Efficient models for building acoustics: combining deterministic and statistical methods." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134507.

Full text
Abstract:
Modelling vibroacoustic problems in the field of building design is a challenging problem due to the large size of the domains and the wide frequency range required by regulations. Standard numerical techniques, for instance finite element methods (FEM), fail when trying to reach the highest frequencies. The required element size is too small compared to the problem dimensions and the computational cost becomes unaffordable for such an everyday calculation. Statistical energy analysis (SEA) is a framework of analysis for vibroacoustic problems, based on the wave behaviour at high frequencies. It works directly with averaged magnitudes, which is in fact what regulations require, and its computational cost is very low. However, this simplified approach presents several limitations when dealing with real-life structures. Experiments or other complementary data are often required to complete the definition of the SEA model. This thesis deals with the modelling of building acoustic problems with a reasonable computational cost. In this sense, two main research lines have been followed. In the first part of the thesis, the potential of numerical simulations for extending the SEA applicability is analysed. In particular, three main points are addressed: first, a systematic methodology for the estimation of coupling loss factors from numerical simulations is developed. These factors are estimated from small deterministic simulations, and then applied for solving larger problems with SEA. Then, an SEA-like model for non-conservative couplings is presented, and a strategy for obtaining conservative and non-conservative coupling loss factors from numerical simulations is developed. Finally, a methodology for identifying SEA subsystems with modal analysis is proposed. This technique consists in performing a cluster analysis based on the problem eigenmodes. 
It allows detecting optimal SEA subdivisions for complex domains, even when two subsystems coexist in the same region of the geometry. In the second part of the thesis, the sound transmission through double walls is analysed from different points of view, as a representative example of the complexities of vibroacoustic simulations. First, a compilation of classical approaches to this problem is presented. Then, the finite layer method is proposed as a new way of discretising the pressure field in the cavity inside double walls, especially when it is partially filled with an absorbing material. This method combines a FEM-like discretisation in the direction perpendicular to the wall with trigonometric functions in the two in-plane directions. This approach has less computational cost than FEM but allows the enforcement of continuity and equilibrium between fluid layers. It is compared with experimental data and also with other prediction models in order to check the influence of commonly assumed simplifications. Finally, a combination of deterministic and statistical methods is presented as a possible solution for dealing with vibroacoustic problems consisting of double walls and other elements. The global analysis is performed with SEA, and numerical simulations of small parts of the problem are used to obtain the required parameters. Combining these techniques, a realistic simulation of the vibroacoustic problem can be performed with a reasonable computational cost.
APA, Harvard, Vancouver, ISO, and other styles
8

LOUREIRO, GETULIO VARGAS. "APPLICATION OF DETERMINISTIC OPTIMIZATION METHODS IN THE DESIGN OF STATISTICAL CIRCUITS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1985. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14124@1.

Full text
Abstract:
The increasing complexity of electronic circuits has motivated the use of computer-aided design techniques. These techniques, mainly based on optimization methods, help designers solve problems in the electronic circuit design area. The large-scale production of circuits, in contrast with the development of prototypes, creates a design environment in which statistical fluctuations must be considered, giving rise to so-called Statistical Design. This work analyses the problems related to this design environment and their solution via optimization methods. It describes the techniques derived from formulating these problems through a minimax criterion, as well as the implementation details of the algorithms used in this work. To test the algorithms on real-world problems, a simulation and design package for linear circuits in the frequency domain was developed in Fortran. Without changing the circuit topology, this program acts on the values of the design parameters and their tolerances through algorithms that optimize circuit performance.
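The statistical-design setting described here, with performance fluctuations induced by component tolerances, can be illustrated with a Monte Carlo yield estimate. The circuit, nominal values, tolerance, and spec window below are hypothetical and not from the thesis:

```python
import math
import random

def rc_cutoff_hz(r_ohm, c_farad):
    """Cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def yield_estimate(n, r_nom=1e3, c_nom=159e-9, tol=0.05,
                   spec=(900.0, 1100.0), seed=42):
    """Fraction of manufactured circuits whose cutoff frequency lies
    inside the spec window when R and C vary uniformly within +/-tol."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        r = r_nom * (1.0 + rng.uniform(-tol, tol))
        c = c_nom * (1.0 + rng.uniform(-tol, tol))
        if spec[0] <= rc_cutoff_hz(r, c) <= spec[1]:
            ok += 1
    return ok / n

y = yield_estimate(10_000)
```

A minimax design step would then adjust nominal values (or tolerances) to maximize this yield in the worst case, which is the kind of optimization the thesis formulates.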
APA, Harvard, Vancouver, ISO, and other styles
9

Thomas, Nicolas. "Stochastic numerical methods for Piecewise Deterministic Markov Processes : applications in Neuroscience." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS385.

Full text
Abstract:
In this thesis, motivated by applications in neuroscience, we study efficient Monte Carlo (MC) and Multilevel Monte Carlo (MLMC) methods based on thinning for piecewise deterministic (Markov) processes (PDMP or PDP), which we apply to stochastic conductance-based models. On the one hand, when the deterministic motion of the PDMP is explicitly known, we obtain an exact simulation. On the other hand, when the deterministic motion is not explicit, we establish strong error estimates and a weak error expansion for the numerical scheme that we introduce. The thinning method is fundamental in this thesis: besides being intuitive, we use it both numerically (to simulate trajectories of PDMP/PDP) and theoretically (to construct the jump times and establish error estimates for PDMP/PDP).
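Thinning, the central tool of this thesis, simulates the jump times of a process with time-varying intensity by proposing candidates from a homogeneous Poisson process with a dominating rate and accepting each candidate with probability equal to the ratio of the rates. A minimal sketch (the sinusoidal intensity is an illustrative choice, not the thesis's conductance-based model):

```python
import math
import random

def thinning_jump_times(rate, rate_bound, t_end, seed=0):
    """Simulate the jump times of a point process with intensity rate(t)
    on [0, t_end] by thinning a homogeneous Poisson process of rate
    rate_bound.  Requires rate(t) <= rate_bound for all t."""
    rng = random.Random(seed)
    t, jumps = 0.0, []
    while True:
        t += rng.expovariate(rate_bound)          # candidate jump time
        if t > t_end:
            return jumps
        if rng.random() < rate(t) / rate_bound:   # accept with ratio of rates
            jumps.append(t)

# Sinusoidally modulated intensity, dominated by the constant rate 2.0
jumps = thinning_jump_times(lambda t: 1.0 + math.sin(t), 2.0, 100.0)
```

For a PDMP, `rate(t)` would be the jump intensity evaluated along the deterministic flow between jumps, which is why an explicit flow yields exact simulation.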
APA, Harvard, Vancouver, ISO, and other styles
10

Moryadee, Chanicha. "Optimisation models and heuristic methods for deterministic and stochastic inventory routing problems." Thesis, University of Portsmouth, 2017. https://researchportal.port.ac.uk/portal/en/theses/optimisation-models-and-heuristic-methods-for-deterministic-and-stochastic-inventory-routing-problems(182ea07e-ef7b-4b4c-85c9-7570e8e5a160).html.

Full text
Abstract:
The inventory routing problem (IRP) integrates two components of supply chain management: inventory management and vehicle routing. These two issues have traditionally been dealt with separately in the logistics literature, yet integrating them can reduce total costs and have a greater impact on overall system performance. The IRP is a well-known NP-hard problem in the optimisation literature. The supplier delivers directly to customers, with and without transhipments between customers (the Inventory Routing Problem with Transhipment, IRPT), in conjunction with multi-customer routes, in order to increase the flexibility of the system. The vehicle is located at a single depot and has a limited capacity for serving a number of customers. The thesis focuses on two main aspects: (1) development of optimisation models for the deterministic and stochastic demand IRP and IRPT under ML/OU replenishment policies. Under deterministic demand, the supplier delivers products to customers whose demands are known before the vehicle arrives at the customers' locations; under stochastic demand, the supplier serves customers whose actual demands become known only when the vehicle arrives. (2) Development of integrated heuristic, biased-probability and simulation approaches to solve these problems. The proposed approaches solve the optimisation models so as to minimise the total costs (transportation costs, transhipment costs, penalty costs and inventory holding costs). This thesis proposes five approaches: the CWS heuristic, the Randomised CWS heuristic, the Randomised CWS and IG with local search, the Sim-Randomised CWS, and the Sim-Randomised CWS and IG with local search.
Specifically, the proposed approaches are tested on the deterministic demand IRP and IRPT, namely the IRP-based CWS, the IRP-based Randomised CWS, and the IRP-based Randomised CWS and IG with local search; for the transhipment case they are called the IRPT-based CWS, the IRPT-based Randomised CWS, and the IRPT-based Randomised CWS and IG with local search. For stochastic demand, the proposed approaches are named the SIRP-based Sim-Randomised CWS, the SIRPT-based Sim-Randomised CWS, the SIRP-based Sim-Randomised CWS and IG with local search, and the SIRPT-based Sim-Randomised CWS and IG with local search. The aim of the sim-heuristic is to deal with the stochastic demand IRP and IRPT, since stochastic demand reflects realistic scenarios and is addressed using simulation. First, in the Sim-Randomised CWS approach, an initial solution is generated by the Randomised CWS heuristic, and Monte Carlo simulation (MCS) is then combined with it to further improve the final solution of the SIRP and the SIRPT. Second, the integration of Randomised CWS with MCS and IG with local search is applied to these problems; the IG algorithm with local search improves the solution generated by the Randomised CWS. The developed heuristic algorithms are tested on several benchmark instances. Local search has proven to be an effective technique for obtaining good solutions. In the experiments, this thesis considers the average over the five instances for each combination, and the algorithms are compared. The IG algorithm with local search outperformed the Sim-Randomised CWS heuristics and the best solutions in the literature, while also requiring shorter computing times than those reported in the literature.
To the best of the author's knowledge, this is the first study in which CWS, Randomised CWS, Sim-Randomised CWS and IG with local search algorithms are used to solve the deterministic and stochastic demand IRP and IRPT under ML/OU replenishment policies, contributing knowledge to the supply chain and logistics domain.
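The Clarke-Wright savings (CWS) heuristic at the core of these approaches ranks customer pairs by the detour saved when serving both on one route instead of two depot round trips, and the randomised variants bias selection towards, but not deterministically at, the top of that list. A minimal sketch with made-up coordinates; the geometric bias shown is one common biased-randomisation scheme, not necessarily the thesis's exact one:

```python
import math
import random

def savings_list(depot, customers):
    """Clarke-Wright savings s(i, j) = d(0, i) + d(0, j) - d(i, j):
    distance saved by serving customers i and j on one route instead
    of two separate depot round trips.  Sorted in decreasing order."""
    d = math.dist
    pairs = []
    for i in range(len(customers)):
        for j in range(i + 1, len(customers)):
            s = d(depot, customers[i]) + d(depot, customers[j]) \
                - d(customers[i], customers[j])
            pairs.append((s, i, j))
    return sorted(pairs, reverse=True)

def biased_pick(ranked, beta=0.3, rng=random.Random(1)):
    """Biased-randomised selection: position k in the ranked list is
    chosen with geometric probability beta * (1 - beta) ** k."""
    u = 1.0 - rng.random()                       # u in (0, 1]
    k = min(int(math.log(u) / math.log(1.0 - beta)), len(ranked) - 1)
    return ranked[k]

depot = (0.0, 0.0)
customers = [(2.0, 0.0), (2.0, 1.0), (-1.0, 3.0), (0.0, -2.0)]
ranked = savings_list(depot, customers)
pick = biased_pick(ranked)
```

Repeating the construction with different random draws, and scoring each resulting solution by simulation under stochastic demand, is the essence of the sim-heuristic layer described above.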
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Deterministic methods for XAI"

1

Dooge, James. Deterministic methods in systems hydrology. Balkema, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

James, Dooge, and O'Kane J. P. 1943-, eds. Deterministic methods in systems hydrology. Balkema, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kumar, Bose Deb, ed. Neural networks: Deterministic methods of analysis. International Thomson Computer Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Winkler, Joab, Mahesan Niranjan, and Neil Lawrence, eds. Deterministic and Statistical Methods in Machine Learning. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11559887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hendricks, Elbert. Linear systems control: Deterministic and stochastic methods. Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Floudas, Christodoulos A. Deterministic global optimization: Theory, methods, and applications. Kluwer Academic Publishers, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Floudas, Christodoulos A. Deterministic Global Optimization: Theory, Methods and Applications. Springer US, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

E, Jannerup O., and Sørensen Paul H, eds. Linear systems control: Deterministic and stochastic methods. Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cruse, Thomas A. Non-deterministic, non-traditional methods (NDNTM): [final report]. National Aeronautics and Space Administration, Glenn Research Center, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Center, NASA Glenn Research, ed. Non-deterministic, non-traditional methods (NDNTM): [final report]. National Aeronautics and Space Administration, Glenn Research Center, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Deterministic methods for XAI"

1

Heermann, Dieter W. "Deterministic Methods." In Computer Simulation Methods in Theoretical Physics. Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/978-3-642-96971-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Heermann, Dieter W. "Deterministic Methods." In Computer Simulation Methods in Theoretical Physics. Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-75448-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cavazzuti, Marco. "Deterministic Optimization." In Optimization Methods. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31187-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Karimi, Amir-Hossein, Julius von Kügelgen, Bernhard Schölkopf, and Isabel Valera. "Towards Causal Algorithmic Recourse." In xxAI - Beyond Explainable AI. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_8.

Full text
Abstract:
Algorithmic recourse is concerned with aiding individuals who are unfavorably treated by automated decision-making systems to overcome their hardship, by offering recommendations that would result in a more favorable prediction when acted upon. Such recourse actions are typically obtained by solving an optimization problem that minimizes changes to the individual's feature vector, subject to various plausibility, diversity, and sparsity constraints. Whereas previous works offer solutions to the optimization problem in a variety of settings, they critically overlook real-world considerations pertaining to the environment in which recourse actions are performed. The present work emphasizes that changes to a subset of the individual's attributes may have consequential downstream effects on other attributes, thus making recourse a fundamentally causal problem. Here, we model such considerations using the framework of structural causal models, and highlight pitfalls of not considering causal relations through examples and theory. Such insights allow us to reformulate the optimization problem to directly optimize for minimally-costly recourse over a space of feasible actions (in the form of causal interventions) rather than optimizing for minimally-distant "counterfactual explanations". We offer both the optimization formulations and solutions to deterministic and probabilistic recourse, on an individualized and sub-population level, overcoming the steep assumptive requirements of offering recourse in general settings. Finally, using synthetic and semi-synthetic experiments based on the German Credit dataset, we demonstrate how such methods can be applied in practice under minimal causal assumptions.
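The distinction the chapter draws, that intervening on a feature propagates through the causal graph rather than perturbing features independently, can be sketched on a two-variable linear structural causal model. Everything below (the SCM coefficients, the classifier, the grid search standing in for the optimization) is an invented toy, not the chapter's formulation:

```python
import numpy as np

# Toy linear SCM (invented): x1 := u1,  x2 := 0.5 * x1 + u2
def scm_forward(u1, u2, do_x1=None):
    x1 = u1 if do_x1 is None else do_x1   # an intervention overrides x1...
    x2 = 0.5 * x1 + u2                    # ...and propagates downstream to x2
    return np.array([x1, x2])

def predict(x):
    """Favourable decision iff the linear score reaches the threshold."""
    return x[0] + x[1] >= 2.0

u1, u2 = 0.5, 0.25                        # individual's exogenous variables
assert not predict(scm_forward(u1, u2))   # currently unfavourable

# Minimal-cost causal recourse: smallest upward intervention on x1,
# found by grid search over candidate action magnitudes
actions = np.linspace(0.0, 3.0, 301)
recourse = next(a for a in actions
                if predict(scm_forward(u1, u2, do_x1=u1 + a)))
```

Because the intervention on `x1` also raises `x2`, the required action is smaller than a counterfactual search that moves `x1` while holding `x2` fixed would suggest, which is the chapter's central point in miniature.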
APA, Harvard, Vancouver, ISO, and other styles
5

Gianfagna, Leonida, and Antonio Di Cecco. "Model-Agnostic Methods for XAI." In Explainable AI with Python. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68640-6_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Metcalfe, Andrew. "Deterministic Models." In Research Methods for Postgraduates: Third Edition. John Wiley & Sons, Ltd, 2016. http://dx.doi.org/10.1002/9781118763025.ch32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lu, Xiaotian, Arseny Tolmachev, Tatsuya Yamamoto, et al. "Crowdsourcing Evaluation of Saliency-Based XAI Methods." In Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86517-7_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cocozza-Thivent, Christiane. "Numerical Methods." In Markov Renewal and Piecewise Deterministic Processes. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70447-6_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lan, Guanghui. "Deterministic Convex Optimization." In First-order and Stochastic Optimization Methods for Machine Learning. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39568-1_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sen, Zekai. "Sampling and Deterministic Modeling Methods." In Spatial Modeling Principles in Earth Sciences. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41758-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Deterministic methods for XAI"

1

Djellikh, Soumia, Imane Youkana, Asma Amari, and Rachida Saouli. "Quantifying Explainability: Essential Guidelines for Evaluating XAI Methods." In 2025 International Symposium on iNnovative Informatics of Biskra (ISNIB). IEEE, 2025. https://doi.org/10.1109/isnib64820.2025.10983231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Venugopal, Arun, Mahdi Farnaghi, and Raúl Zurita-Milla. "Comparative Evaluation of XAI Methods for Transparent Crop Yield Estimation Using CNN." In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. http://dx.doi.org/10.1109/igarss53475.2024.10641426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dietz, Katharina, Mehrdad Hajizadeh, Johannes Schleicher, et al. "Agree to Disagree: Exploring Consensus of XAI Methods for ML-based NIDS." In 2024 20th International Conference on Network and Service Management (CNSM). IEEE, 2024. https://doi.org/10.23919/cnsm62983.2024.10814448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ozdemir, Gokcen, Umut Ozdemir, Murat Kuzlu, and Ferhat Ozgur Catak. "Leveraging Explainable Artificial Intelligence (XAI) Methods Supporting Local and Global Explainability for Smart Grids." In 2024 Global Energy Conference (GEC). IEEE, 2024. https://doi.org/10.1109/gec61857.2024.10881108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pasti, Clemente, Talu Karagöz, Franz Nowak, Anej Svete, Reda Boumasmoud, and Ryan Cotterell. "An L* Algorithm for Deterministic Weighted Regular Languages." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Jiateng, Pengfei Yu, Yuji Zhang, et al. "EVEDIT: Event-based Knowledge Editing for Deterministic Knowledge Propagation." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Izza, Yacine, Xuanxiang Huang, Antonio Morgado, Jordi Planes, Alexey Ignatiev, and Joao Marques-Silva. "Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation." In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/45.

Full text
Abstract:
The uses of machine learning (ML) have snowballed in recent years. In many cases, ML models are highly complex, and their operation is beyond the understanding of human decision-makers. Nevertheless, some uses of ML models involve high-stakes and safety-critical applications. Explainable artificial intelligence (XAI) aims to help human decision-makers understand the operation of such complex ML models, thus eliciting trust in their operation. Unfortunately, the majority of past XAI work is based on informal approaches that offer no guarantees of rigor. Unsurprisingly, there exists comprehensive experimental and theoretical evidence confirming that informal methods of XAI can provide human decision-makers with erroneous information. Logic-based XAI represents a rigorous approach to explainability; it is model-based and offers the strongest guarantees of rigor of computed explanations. However, a well-known drawback of logic-based XAI is the complexity of logic reasoning, especially for highly complex ML models. Recent work proposed distance-restricted explanations, i.e. explanations that are rigorous provided the distance to a given input is small enough. Distance-restricted explainability is tightly related to adversarial robustness, and it has been shown to scale for moderately complex ML models, but the number of inputs still represents a key limiting factor. This paper investigates novel algorithms for scaling up the performance of logic-based explainers when computing and enumerating ML model explanations with a large number of inputs.
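The rigorous, model-based explanations referred to above can be made concrete in miniature: for a classifier over a handful of boolean features, a cardinality-minimal sufficient reason can be found by exhaustive checking. Real logic-based explainers replace this enumeration with SAT/SMT reasoning (and the distance restriction bounds how far the free features may range); the toy classifier here is invented:

```python
from itertools import combinations, product

def is_sufficient(clf, instance, fixed):
    """`fixed` is a sufficient reason if the prediction cannot change
    while the remaining features range freely over {0, 1}."""
    free = [i for i in range(len(instance)) if i not in fixed]
    target = clf(instance)
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if clf(x) != target:
            return False
    return True

def minimal_explanation(clf, instance):
    """Smallest feature subset whose values alone entail the prediction
    (cardinality-minimal, found by brute-force enumeration)."""
    for size in range(len(instance) + 1):
        for subset in combinations(range(len(instance)), size):
            if is_sufficient(clf, instance, set(subset)):
                return set(subset)

clf = lambda x: int(x[0] and (x[1] or x[2]))   # invented boolean classifier
expl = minimal_explanation(clf, [1, 1, 0])     # explain the prediction 1
```

The exponential blow-up of the inner check as features grow is precisely the complexity obstacle that motivates the paper's algorithmic work.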
APA, Harvard, Vancouver, ISO, and other styles
8

Stephen, Gord, and Daniel Kirschen. "Endogenizing probabilistic resource adequacy risks in deterministic capacity expansion models." In 2024 18th International Conference on Probabilistic Methods Applied to Power Systems (PMAPS). IEEE, 2024. http://dx.doi.org/10.1109/pmaps61648.2024.10667201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guaragna, Gabriel Guerra, Rafael Augusto dos Reis Higashi, and Thiago Deeke Viek. "Monte Carlo SHALSTAB: A probabilistic-based SHALSTAB Analysis." In ENSUS2023 - XI Encontro de Sustentabilidade em Projeto. Grupo de Pesquisa Virtuhab/UFSC, 2023. http://dx.doi.org/10.29183/2596-237x.ensus2023.v11.n1.p461-476.

Full text
Abstract:
This paper proposes a method for assessing slope stability through probabilities, which can support sustainability based on an understanding of land use and land cover. The method uses the SHALSTAB mathematical model as a deterministic basis and, to account for uncertainties, applies the Monte Carlo method in conjunction with probability density functions. Deterministic methods alone treat the events and parameters as unique, as if no randomness existed; yet the events and combinations of soil parameters that generate instabilities are random, and for this reason the proposed method achieved good results. Deterministic modelling generally uses mean parameter values, but these means do not represent the continuous variation existing in the field, and there is also a great chance that they do not summarise the study area correctly. Monte Carlo relies on the law of large numbers, by which the estimate tends to the true probability after many simulations, so the stochastic result carries more information than the deterministic one. A total of 100,000 SHALSTAB simulations were run, varying in each iteration the geomechanical parameters of the soils, the soil depth and the saturated hydraulic conductivity; the resulting AUC (Area Under the ROC Curve) statistic, used to validate the method, was 0.887.
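The probabilistic layer the paper adds to SHALSTAB can be illustrated with a generic infinite-slope factor-of-safety model wrapped in a Monte Carlo loop. The factor-of-safety formula below is a textbook simplification and the parameter statistics are invented; SHALSTAB itself couples a similar limit-equilibrium model with a steady-state hydrological model:

```python
import math
import random

def factor_of_safety(c_kpa, phi_deg, gamma=18.0, gamma_w=9.81,
                     depth=2.0, slope_deg=30.0, m=0.5):
    """Infinite-slope factor of safety with relative saturation m:
    resisting strength (cohesion + frictional) over driving shear."""
    beta = math.radians(slope_deg)
    tau = gamma * depth * math.sin(beta) * math.cos(beta)        # driving stress
    sigma_eff = (gamma - m * gamma_w) * depth * math.cos(beta) ** 2
    return (c_kpa + sigma_eff * math.tan(math.radians(phi_deg))) / tau

def failure_probability(n=20_000, seed=7):
    """Monte Carlo wrapper: sample soil parameters, count FS < 1."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        c = max(rng.gauss(8.0, 3.0), 0.0)     # cohesion (kPa), invented stats
        phi = rng.gauss(28.0, 4.0)            # friction angle (deg)
        failures += factor_of_safety(c, phi) < 1.0
    return failures / n

p_fail = failure_probability()
```

The contrast the abstract draws is visible here: the mean parameters give a single "stable" factor of safety above 1, while the sampled ensemble still assigns a non-trivial probability of failure.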
APA, Harvard, Vancouver, ISO, and other styles
10

Erickson, Marjorie, and Mark Kirk. "Methods to Appropriately Account for Uncertainties in Structural Integrity Assessment Models." In ASME 2020 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/pvp2020-21187.

Full text
Abstract:
To ensure an appropriate and/or conservative assessment of structural integrity, it is essential to account for the uncertainties inherent to the various inputs and models that, collectively, contribute to a structural integrity assessment. While the methods used to account for uncertainties will differ, this applies equally to assessments performed using either deterministic or probabilistic approaches. Oftentimes the overall model used for a structural integrity assessment itself comprises multiple inputs and models, which may themselves be inter-related and/or correlated. In these circumstances, the quest to ensure that all uncertainties are addressed can result in the same uncertainty, or uncertainty source, being accounted for multiple times. Such "double-counting" of uncertainties introduces unneeded conservatism to the assessment and should be avoided. In this paper we use the linked fracture toughness models contained in the recently proposed Revision 1 to ASME Section XI Code Case N-830 to provide examples of uncertainty treatment in analyses using multiple models. Identification of the sources of uncertainty in each model used in a multi-model analysis can help to ensure that each source is accounted for appropriately and not multiple times. The CC N-830-1 models are used to demonstrate the effects of various uncertainty treatment strategies and the pitfalls that arise from treating sources of uncertainty twice.
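The double-counting pitfall warned about here can be demonstrated numerically: if one physical uncertainty source feeds two linked models and is sampled independently in each, the spread of the combined output inflates relative to the correct single-source treatment. A toy sketch (standard-normal source and linear combination are invented, not the Code Case models):

```python
import random
import statistics

def assessment_sd(double_count, n=50_000, seed=3):
    """Standard deviation of a toy two-model assessment whose linked
    sub-models share one physical uncertainty source."""
    rng = random.Random(3 if seed is None else seed)
    out = []
    for _ in range(n):
        t = rng.gauss(0.0, 1.0)              # the single shared source
        if double_count:
            # pitfall: the second model re-samples the "same" source
            # as if it were an independent uncertainty
            out.append(t + rng.gauss(0.0, 1.0))
        else:
            # correct: the source contributes once to the assessment
            out.append(t)
    return statistics.stdev(out)

sd_once = assessment_sd(double_count=False)
sd_twice = assessment_sd(double_count=True)
```

The double-counted spread is larger by roughly a factor of sqrt(2), which is exactly the unneeded conservatism the abstract describes.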
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Deterministic methods for XAI"

1

Dorning, J. J. Improved deterministic calculational methods for irregularly shaped shields. Office of Scientific and Technical Information (OSTI), 1992. http://dx.doi.org/10.2172/6891160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Urbatsch, T. J. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations. Office of Scientific and Technical Information (OSTI), 1995. http://dx.doi.org/10.2172/212566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Edward W. Larson. Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations. Office of Scientific and Technical Information (OSTI), 2004. http://dx.doi.org/10.2172/821706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Apostolatos, A., R. Rossi, and C. Soriano. D7.2 Finalization of "deterministic" verification and validation tests. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.006.

Full text
Abstract:
This deliverable focuses on the verification and validation of the Kratos Multiphysics solvers used within ExaQUte. These solvers comprise standard body-fitted approaches and novel embedded approaches for the Computational Fluid Dynamics (CFD) simulations carried out within ExaQUte. First, the standard body-fitted CFD solver is validated on a benchmark problem of a high-rise building (the CAARC benchmark), and subsequently the novel embedded CFD solver is verified against the solution of the body-fitted solver. For the novel embedded approach in particular, a workflow is presented in which the exact parameterized Computer-Aided Design (CAD) model is used efficiently for the underlying CFD simulations. The deliverable includes: a note on the space-time methods; verification results for the body-fitted solver based on the CAARC benchmark; a workflow consisting of importing an exact CAD model, tessellating it and performing embedded CFD on it; verification results for the embedded solver based on a high-rise building; and the API definition and usage.
APA, Harvard, Vancouver, ISO, and other styles
5

Dorning, J. J. Improved deterministic calculational methods for irregularly shaped shields. Final report, September 30, 1988--November 30, 1990. Office of Scientific and Technical Information (OSTI), 1992. http://dx.doi.org/10.2172/10106251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Roye, Thorsten. Unsettled Technology Areas in Deterministic Assembly Approaches for Industry 4.0. SAE International, 2021. http://dx.doi.org/10.4271/epr2021018.

Full text
Abstract:
Increased production rates and cost reduction are affecting manufacturing in all sectors of the mobility industry. One enabling methodology that could achieve these goals in the burgeoning “Industry 4.0” environment is the deterministic assembly (DA) approach. The DA approach is defined as an optimized assembly process; it always forms the same final structure and has a strong link to design-for-assembly and design-for-automation methodologies. It also looks at the whole supply chain, enabling drastic savings at the original equipment manufacturer (OEM) level by reducing recurring costs and lead time. Within Industry 4.0, DA will be required mainly for the aerospace and the space industry, but serves as an interesting approach for other industries assembling large and/or complex components. In its entirety, the DA approach connects an entire supply chain—from part manufacturing at an elementary level to an OEM’s final assembly line level. Addressing the whole process of aircraft design and manufacturing is necessary to develop further collaboration models between OEMs and the supply chain, including addressing the most pressing technology challenges. Since all parts aggregate at the OEM level, the OEM—as an integrator of all these single parts—needs special end-to-end methodologies to drastically decrease cost and lead time. This holistic approach can be considered in part design as well (in the design-for-automation and design-for-assembly philosophy). This allows for quicker assembly at the OEM level, such as “part-to-part” or “hole-to-hole” approaches, versus traditional, classical assembly methods like manual measurement or measurement-assisted assembly. In addition, it can increase flexibility regarding rate changes in production (such as those due to pandemic- or climate-related environmental challenges). The standardization and harmonization of these areas would help all industries and designers to have a deterministic approach with an end-to-end concept. 
Simulations can easily compare possible production and assembly steps with different impacts on local and global tolerances. Global measurement feedback needs high-accuracy turnkey solutions, which are very costly and inflexible. The goal of standardization would be to use Industry 4.0 feedback and features, as well as to define several building blocks of the DA approach as a one-way assembly (also known as one-up assembly, or “OUA”), false one-way assembly, “Jig-as-Master,” etc., up to the hole-to-hole assembly approach. The evolution of these assembly principles and the link to simulation approaches are undefined and unsolved domains; they are discussed in this report. They must be discussed in greater depth with aims of (first) clarifying the scope of the industry-wide alignment needs and (second) prioritizing the issues requiring standardization. NOTE: SAE EDGE™ Research Reports are intended to identify and illuminate key issues in emerging, but still unsettled, technologies of interest to the mobility industry. The goal of SAE EDGE™ Research Reports is to stimulate discussion and work in the hope of promoting and speeding resolution of identified issues. SAE EDGE™ Research Reports are not intended to resolve the challenges they identify or close any topic to further scrutiny.
APA, Harvard, Vancouver, ISO, and other styles
7

Peterson, Warren. PR663-18602-Z01 Guidance for Applying Revised AGA Report 8 Based on Measurement Uncertainty. Pipeline Research Council International, Inc. (PRCI), 2019. http://dx.doi.org/10.55274/r0011570.

Full text
Abstract:
The 2017 revision of AGA (American Gas Association) Report #8 encourages adoption of the new GERG-2008 equation of state (EOS) but leaves users with the decision of whether to upgrade. Due to the technical complexity of the subject and the potential financial impact, users seek additional technical guidance so that they may confidently exercise this discretion. This project investigates technical methods for arriving at choices that are technically defensible and financially responsible. To provide guidance and tools for users, the project constructed deterministic and probabilistic models that illustrate the potential impact of EOS upgrades, applying real-world gas composition, pressure, temperature and flow. A step-by-step sequence for evaluating upgrade potential was also created. Includes calculation spreadsheets.
APA, Harvard, Vancouver, ISO, and other styles
8

Holzenthal, Elizabeth, and Bradley Johnson. Comparison of run-up models with field data. Engineer Research and Development Center (U.S.), 2024. https://doi.org/10.21079/11681/49470.

Full text
Abstract:
Run-up predictions are inherently uncertain, owing to ambiguities in phase-averaged models and the inherent complexity of surf- and swash-zone hydrodynamics. As a result, different approaches, ranging from simple algebraic expressions to computationally intensive phase-resolving models, have been used in an attempt to capture the most relevant run-up processes. Studies quantifiably comparing these methods in terms of physical accuracy and computational speed are needed as new observation technologies and models become available. The current study tests the capability of the new swash formulation of the Coastal Modeling System (CMS) to predict 1D run-up statistics (R2%) collected during an energetic 3-week period on a sandy dune-backed beach in Duck, North Carolina. The accuracy and speed of the debut CMS swash formulation are compared with one algebraic model and three other numerical models. Of the four tested numerical models, the CSHORE model computed the results fastest, and the CMS model results had the greatest accuracy. All four numerical models, including XBeach in surfbeat and nonhydrostatic modes, yielded half the error of the algebraic model tested. These findings represent an encouraging advancement for phase-averaged coastal models and a critical step towards rapid prediction for near-time deterministic or long-term stochastic guidance.
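The R2% statistic used to compare the models is the run-up elevation exceeded by only 2% of the individual swash maxima. A sketch computing it from a synthetic record; the Rayleigh-distributed maxima and scale parameter are invented, not the Duck field data:

```python
import math
import random

def r2_percent(runup_maxima):
    """R2%: the run-up elevation exceeded by only 2% of the individual
    swash maxima, i.e. the 98th percentile (linear interpolation)."""
    xs = sorted(runup_maxima)
    rank = 0.98 * (len(xs) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (rank - lo) * (xs[hi] - xs[lo])

# Synthetic Rayleigh-distributed run-up maxima (invented scale parameter),
# drawn by inverse-transform sampling
rng = random.Random(11)
maxima = [1.5 * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
          for _ in range(500)]
r2 = r2_percent(maxima)
```

Comparing a model against observations then reduces to comparing each one's R2% over matched records, which is how the study's error figures are framed.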
9

Letcher, Theodore, Sandra LeGrand, and Christopher Polashenski. The Blowing Snow Hazard Assessment and Risk Prediction model: a Python based downscaling and risk prediction for snow surface erodibility and probability of blowing snow. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/43582.

Full text
Abstract:
Blowing snow is an extreme terrain hazard causing intermittent severe reductions in ground visibility and snow drifting. These hazards pose significant risk to operations in snow-covered regions. While many ingredients-based forecasting methods can be employed to predict where blowing snow is likely to occur, there are currently no physically based tools to predict blowing snow from a weather forecast. However, there are several different process models that simulate the transport of snow over short distances that can be adapted into a terrain forecasting tool. This report documents a downscaling and blowing-snow prediction tool that leverages existing frameworks for snow erodibility, lateral snow transport, and visibility, and applies these frameworks for terrain prediction. This tool is designed to work with standard numerical weather model output and user-specified geographic models to generate spatially variable forecasts of snow erodibility, blowing-snow probability, and deterministic blowing-snow visibility near the ground. Critically, this tool aims to account for the history of the snow surface as it relates to erodibility, which further refines the blowing-snow risk output. Qualitative evaluations of this tool suggest that it can provide more precise forecasts of blowing snow. This tool can also aid in mission planning by downscaling high-resolution gridded weather forecast data using an even higher-resolution terrain dataset to make physically based predictions of blowing snow.
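A spatially variable blowing-snow probability of the kind the abstract describes can be sketched generically as threshold exceedance across ensemble forecast members. This is not the report's actual formulation; the wind values, thresholds, and grid below are invented purely for illustration.

```python
import numpy as np

def blowing_snow_probability(u10_members, u_threshold):
    """Per-grid-cell probability of blowing snow, estimated as the
    fraction of ensemble members whose 10-m wind speed exceeds an
    erodibility-dependent threshold (supplied by a separate snow model).

    u10_members : (n_members, ny, nx) forecast wind speeds, m/s
    u_threshold : (ny, nx) threshold wind speed for snow transport, m/s
    """
    return np.mean(u10_members >= u_threshold, axis=0)

# Tiny synthetic example: 100 ensemble members on a 2x2 grid
rng = np.random.default_rng(7)
u10 = rng.normal(8.0, 2.0, size=(100, 2, 2))   # assumed winds
u_t = np.array([[7.0, 9.0],                    # assumed thresholds: fresh,
                [11.0, 15.0]])                 # aged, crusted, icy snow
prob = blowing_snow_probability(u10, u_t)
```

The snow-surface history the report emphasizes would enter through `u_threshold`: an aged or wind-packed surface gets a higher threshold, lowering its blowing-snow probability for the same forecast winds.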
10

Soloviev, Vladimir, Andrii Bielinskyi, Oleksandr Serdyuk, Victoria Solovieva, and Serhiy Semerikov. Lyapunov Exponents as Indicators of the Stock Market Crashes. [n.p.], 2020. http://dx.doi.org/10.31812/123456789/4131.

Full text
Abstract:
Financial critical states have occurred frequently throughout the centuries and have long attracted scientists from different areas. Such fluctuations continue to have a huge impact on the world economy, causing instability beyond normal and natural disturbances [1]. The anticipation, prediction, and identification of such phenomena remain a huge challenge. To be able to prevent such critical events, we focus our research on the chaotic properties of stock market indices. Reviewing recent papers devoted to chaotic behavior and complexity in the financial system, we find that the largest Lyapunov exponent and the spectrum of Lyapunov exponents can be evaluated to determine whether the system is completely deterministic or chaotic. Accordingly, we give a theoretical background on methods for Lyapunov exponent estimation; specifically, we follow the methods proposed by J. P. Eckmann and Sano-Sawada to compute the spectrum of Lyapunov exponents. With Rosenstein's algorithm, we compute only the largest (maximal) Lyapunov exponent from an experimental time series, and we consider one measure from recurrence quantification analysis that, in a similar way to the largest Lyapunov exponent, detects highly non-monotonic behavior. Along with the theoretical material, we present empirical results which show that chaos theory and the theory of complexity provide a powerful toolkit for the construction of indicator-precursors of crisis events in financial markets.
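Rosenstein's method, mentioned in the abstract, estimates the largest Lyapunov exponent as the slope of the mean log-divergence of initially nearby trajectory segments. A minimal sketch (not the authors' implementation; embedding parameters are illustrative) verified on the logistic map at r = 4, whose true exponent is ln 2 ≈ 0.693:

```python
import numpy as np

def rosenstein_lle(x, dim=1, tau=1, min_sep=10, n_steps=8):
    """Largest Lyapunov exponent of a scalar series via Rosenstein's
    nearest-neighbour divergence method (simplified sketch)."""
    # Delay embedding of the series
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    m = n - n_steps  # only points whose trajectories can be followed
    # Pairwise distances, excluding temporally close neighbours and self
    d = np.linalg.norm(emb[:m, None, :] - emb[None, :m, :], axis=2)
    for i in range(m):
        d[i, max(0, i - min_sep):min(m, i + min_sep + 1)] = np.inf
    nn = np.argmin(d, axis=1)
    # Mean log separation of each pair, followed for n_steps iterations
    div = np.empty(n_steps + 1)
    for k in range(n_steps + 1):
        sep = np.linalg.norm(emb[np.arange(m) + k] - emb[nn + k], axis=1)
        div[k] = np.mean(np.log(sep[sep > 0]))
    # Slope of the divergence curve is the exponent estimate
    slope, _ = np.polyfit(np.arange(n_steps + 1), div, 1)
    return slope

# Demo: logistic map x -> 4x(1-x), true exponent ln 2 ~ 0.693
x, xs = 0.31, np.empty(2000)
for i in range(2000):
    x = 4.0 * x * (1.0 - x)
    xs[i] = x
lle = rosenstein_lle(xs)
```

A positive estimate signals chaotic, exponentially diverging dynamics; `n_steps` must stay small enough that the divergence curve is fit before it saturates at the attractor's diameter.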
