
Journal articles on the topic 'Distribution of the attribute value probability'

Consult the top 50 journal articles for your research on the topic 'Distribution of the attribute value probability.'

1

Simanavičienė, Rūta, and Vaida Petraitytė. "Sensitivity Analysis of the TOPSIS Method in Respect of Initial Data Distributions." Lietuvos statistikos darbai 55, no. 1 (2016): 45–51. http://dx.doi.org/10.15388/ljs.2016.13866.

Abstract:
The present article investigates the sensitivity of the multiple criteria decision-making method TOPSIS in respect of attribute probability distributions. To carry out the research, initial data – attribute values – were generated according to normal, log-normal, uniform, and beta distributions. Decision matrices were constructed from the generated data. By applying the TOPSIS method to the generated matrices, result samples were received. A statistical analysis was conducted for the results obtained, which revealed that the distributions of the initial data comply with the distributions of the results …
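The sensitivity experiment this abstract describes can be sketched in a few lines of Python. The distribution parameters, criterion weights, and matrix size below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def topsis(matrix, weights):
    """Rank alternatives (rows) against benefit criteria (columns) with TOPSIS."""
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalise columns
    weighted = norm * weights
    ideal = weighted.max(axis=0)                     # positive-ideal solution
    anti_ideal = weighted.min(axis=0)                # negative-ideal solution
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness coefficient in [0, 1]

# Generate decision matrices (5 alternatives x 3 criteria) under different
# assumed attribute distributions and compare the resulting TOPSIS scores.
weights = np.array([0.5, 0.3, 0.2])
samplers = {
    "normal":     lambda: rng.normal(10.0, 2.0, size=(5, 3)),
    "log-normal": lambda: rng.lognormal(2.0, 0.5, size=(5, 3)),
    "uniform":    lambda: rng.uniform(5.0, 15.0, size=(5, 3)),
}
for name, sample in samplers.items():
    scores = topsis(np.abs(sample()), weights)
    print(name, scores.round(3))
```

Repeating the sampling many times per distribution and comparing the score samples statistically is the essence of the sensitivity analysis the abstract outlines.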
2

Khan, Muhammad Zahir, Muhammad Farid Khan, Muhammad Aslam, and Abdur Razzaque Mughal. "Design of Fuzzy Sampling Plan Using the Birnbaum-Saunders Distribution." Mathematics 7, no. 1 (2018): 9. http://dx.doi.org/10.3390/math7010009.

Abstract:
Acceptance sampling is one of the essential areas of quality control. In a conventional environment, probability theory is used to study acceptance sampling plans. In some situations, it is not possible to apply conventional techniques due to vagueness in the values emerging from the complexities of process or measurement methods. There are two types of acceptance sampling plans: attribute and variable. One of the important elements in attribute acceptance sampling is the proportion of defective items. In some situations, this proportion is not a precise value, but vague. In this case, it is …
3

Alenazi, Fahad S., Khalil El Hindi, and Basil AsSadhan. "Complement-Class Harmonized Naïve Bayes Classifier." Applied Sciences 13, no. 8 (2023): 4852. http://dx.doi.org/10.3390/app13084852.

Abstract:
Naïve Bayes (NB) classification performance degrades if the conditional independence assumption is not satisfied or if the conditional probability estimate is not realistic due to attribute correlation and scarce data, respectively. Many works address these two problems, but few tackle them simultaneously. Existing methods heuristically employ information theory or apply gradient optimization to enhance NB classification performance; however, to the best of our knowledge, the enhanced model's generalization capability deteriorates, especially on scant data. In this work, we propose …
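The two failure modes the abstract names (attribute correlation and scarce data) are easiest to see against the plain NB baseline. A minimal categorical naive Bayes with Laplace smoothing, given purely as background and not as the paper's harmonized classifier, might look like:

```python
import numpy as np

def train_nb(X, y, alpha=1.0):
    """Fit plain categorical naive Bayes with Laplace smoothing.

    On scarce data the raw frequency estimate of P(x_j | c) is
    unreliable; the alpha pseudo-count guards against zero counts.
    """
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    cond = {}
    for c in classes:
        Xc = X[y == c]
        for j in range(X.shape[1]):
            values = np.unique(X[:, j])          # value domain of attribute j
            for v in values:
                cond[(c, j, v)] = ((np.sum(Xc[:, j] == v) + alpha)
                                   / (len(Xc) + alpha * len(values)))
    return classes, priors, cond

def predict_nb(x, classes, priors, cond):
    """Pick the class maximising log P(c) + sum_j log P(x_j | c),
    i.e. assuming conditional independence of attributes given c."""
    def score(c):
        return np.log(priors[c]) + sum(np.log(cond[(c, j, v)])
                                       for j, v in enumerate(x))
    return max(classes, key=score)
```

When attributes are strongly correlated, the summed log-likelihoods double-count evidence, which is exactly the assumption violation the cited work targets.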
4

Augustyn, D. R. "Query-condition-aware V-optimal histogram in range query selectivity estimation." Bulletin of the Polish Academy of Sciences Technical Sciences 62, no. 2 (2014): 287–303. http://dx.doi.org/10.2478/bpasts-2014-0029.

Abstract:
Obtaining the optimal query execution plan requires a selectivity estimation. The selectivity value allows the optimizer to predict the size of a query result, which lets it choose the best method of query execution. There are many methods of obtaining selectivity that are based on different types of estimators of the attribute value distribution (commonly based on histograms). The adaptive method proposed in this paper uses either the distribution of attribute values or that of range query condition boundaries. The new type of histogram – the Query-Condition-Aware V-optimal one (QCA-V-optimal) – is proposed …
5

Yan, Ying, and Bin Suo. "Decision-Making with Risk under Interval Uncertainty Based on Area Metrics." Mathematical Problems in Engineering 2022 (April 14, 2022): 1–10. http://dx.doi.org/10.1155/2022/2793538.

Abstract:
From the perspective of D-S evidence theory and area measurement, a risk-based comprehensive decision-making method that considers both the expected utility and the uncertainty of each scheme is proposed for an interval-uncertainty environment of attribute values. The upper and lower bounds of the synthetic probability distribution of attribute values in different natural states are constructed based on the belief measure and plausibility measure. Based on the area measurement, a method for calculating the expected utility of each scheme is proposed. To reflect the influence of the uncertainty …
6

Yang, Yu, HongGuang Sun, and Zheng Xu. "A mixture transmuted generalized extreme value distribution: Definition and properties." EPJ Web of Conferences 332 (2025): 01003. https://doi.org/10.1051/epjconf/202533201003.

Abstract:
Extreme events are often described using generalized extreme value models, which are crucial for quantifying their impact. In prior studies, researchers have utilized the quadratic rank transmutation map to construct a comprehensive family of probability distributions, incorporating an additional parameter to significantly improve the flexibility of distribution modeling. To gain a deeper understanding of the statistical characteristics and patterns of extreme events, mixture distributions have been applied, such as the mixture normal distribution and the mixture of Gumbel distributions, among others …
7

Lin, Wei, Guangle Yan, and Yuwen Shi. "Dynamic Multi-Attribute Group Decision Making Model Based on Generalized Interval-Valued Trapezoidal Fuzzy Numbers." Cybernetics and Information Technologies 14, no. 4 (2015): 11–28. http://dx.doi.org/10.1515/cait-2014-0002.

Abstract:
In this paper we investigate dynamic multi-attribute group decision making problems, in which all the attribute values are provided by multiple decision makers at different periods. In order to increase the level of overall satisfaction with the final decision and deal with uncertainty, the attribute values are expressed as generalized interval-valued trapezoidal fuzzy numbers to cope with vagueness and indeterminacy. We first define the Dynamic Generalized Interval-valued Trapezoidal Fuzzy Numbers Weighted Geometric Aggregation (DGITFNWGA) operator and give an approach to determine …
8

Maddulapalli, K., S. Azarm, and A. Boyars. "Interactive Product Design Selection With an Implicit Value Function." Journal of Mechanical Design 127, no. 3 (2005): 367–77. http://dx.doi.org/10.1115/1.1829727.

Abstract:
We present a new method to aid a decision maker (DM) in selecting the “most preferred” from a set of design alternatives. The method is deterministic and assumes that the DM’s preferences reflect an implicit value function that is quasi-concave. The method is interactive, with the DM stating preferences in the form of attribute tradeoffs at a series of trial designs, each a specific design under consideration. The method is iterative and uses the gradient of the value function obtained from the preferences of the DM to eliminate lower-value designs at each trial design. We present an approach …
9

Xing, Yin, Yang Chen, Saipeng Huang, Wei Xie, Peng Wang, and Yunfei Xiang. "Research on the Uncertainty of Landslide Susceptibility Prediction Using Various Data-Driven Models and Attribute Interval Division." Remote Sensing 15, no. 8 (2023): 2149. http://dx.doi.org/10.3390/rs15082149.

Abstract:
Two significant uncertainties that are crucial for landslide susceptibility prediction modeling are the attribute interval number (AIN) division of continuous landslide impact factors in frequency ratio analysis and the choice among various susceptibility prediction models. Five attribute interval classifications of continuous landslide impact factors (4, 8, 12, 16, 20) and three data-driven models (deep belief networks (DBN), random forest (RF), and back-propagation neural network (BP)) were used in a total of fifteen different landslide susceptibility prediction scenarios in order to investigate the effects …
10

Liu, Peide, Dongyang Wang, Hui Zhang, Liang Yan, Ying Li, and Lili Rong. "Multi-attribute decision-making method based on normal T-spherical fuzzy aggregation operator." Journal of Intelligent & Fuzzy Systems 40, no. 5 (2021): 9543–65. http://dx.doi.org/10.3233/jifs-202000.

Abstract:
T-spherical fuzzy numbers (FNs), which add an abstinence degree based on membership and non-membership degrees, can express neutral information conveniently and have a considerably large range of information expression. Normal FNs (NFNs) are well suited to characterizing the normal distribution phenomena widely found in social life. In this paper, we first define normal T-SFNs (NT-SFNs), which combine the advantages of T-SFNs and NFNs. Then, we define their operational laws, score value, and accuracy value. By considering the interrelationship among multi-input parameters, we propose …
11

Azaïs, Romain, Bernard Delyon, and François Portier. "Integral estimation based on Markovian design." Advances in Applied Probability 50, no. 3 (2018): 833–57. http://dx.doi.org/10.1017/apr.2018.38.

Abstract:
Suppose that a mobile sensor describes a Markovian trajectory in the ambient space and at each time the sensor measures an attribute of interest, e.g. the temperature. Using only the location history of the sensor and the associated measurements, we estimate the average value of the attribute over the space. In contrast to classical probabilistic integration methods, e.g. Monte Carlo, the proposed approach does not require any knowledge of the distribution of the sensor trajectory. We establish probabilistic bounds on the convergence rates of the estimator. These rates are better than …
12

Nguyen, Hoa. "A probabilistic relational database model and algebra." Journal of Computer Science and Cybernetics 31, no. 4 (2015): 305. http://dx.doi.org/10.15625/1813-9663/31/4/5742.

Abstract:
This paper introduces a probabilistic relational database model, called PRDB, for representing and querying uncertain information about objects in practice. To develop the PRDB model, we first represent a relational attribute value as a pair of probability distributions on a set, modeling the possibility that the attribute takes one of the values of the set with a probability belonging to the interval inferred from the pair of distributions. Next, on the basis of such attribute values, we formally define notions such as the schema, relation, probabilistic …
13

Grana, Dario. "Probabilistic approach to rock physics modeling." GEOPHYSICS 79, no. 2 (2014): D123—D143. http://dx.doi.org/10.1190/geo2013-0333.1.

Abstract:
Rock physics modeling aims to provide a link between rock properties, such as porosity, lithology, and fluid saturation, and elastic attributes, such as velocities or impedances. These models are then used in quantitative seismic interpretation and reservoir characterization. However, most geophysical measurements are uncertain; therefore, rock physics equations must be combined with mathematical tools to account for the uncertainty in the data. We combined probability theory with rock physics modeling to make predictions of elastic properties using probability distributions rather than …
14

Xie, Jian, Wenan Tan, Bingwu Fang, and Zhiqiu Huang. "Towards a Statistical Model Checking Method for Safety-Critical Cyber-Physical System Verification." Security and Communication Networks 2021 (May 17, 2021): 1–12. http://dx.doi.org/10.1155/2021/5536722.

Abstract:
A Safety-Critical Cyber-Physical System (SCCPS) is a system whose failure, or the failure of whose key functions, will cause casualties, property damage, environmental damage, or other catastrophic consequences. It is therefore vital to verify the safety of safety-critical systems. In the community, SCCPS safety verification mainly relies on the statistical model checking methodology, but for SCCPSs with extremely high safety requirements, statistical model checking finds it difficult or infeasible to sample the extremely small probability event, since the probability of the system …
15

Ruan, Aiqing, and Yinao Wang. "The multiple attribute decision model of grey targets based on grey potential degree." Grey Systems: Theory and Application 7, no. 3 (2017): 365–75. http://dx.doi.org/10.1108/gs-07-2017-0021.

Abstract:
Purpose – Grey target decision making is one of the important problems of decision-making theory. It is critical to express uncertain information effectively and to process it in a reasonable and simple way. The purpose of this paper is to solve the grey target problem by the grey potential degree method, without whitened values and without distribution functions. Furthermore, this new approach has the advantage of performing both comparison and standardization during the processing of the data. Design/methodology/approach – First, this paper gives a brief overview of the existing methods for grey …
16

Takahashi, Chiharu, Yukiko Imada, Hiroaki Kawase, and Tomohiro Tanaka. "A new statistical method of rapid event attribution for probability of extreme events: applications to heatwave events in Japan." Environmental Research: Climate 4, no. 3 (2025): 035001. https://doi.org/10.1088/2752-5295/ade1f3.

Abstract:
We have developed a new statistical method of rapid event attribution (EA) that can immediately estimate the probability of a specific extreme event and attribute it to long-term climate change, including anthropogenic global warming, by using existing long-term large-ensemble (LE) climate simulations with an atmospheric general circulation model and observational data. This paper describes the new EA method with the example of summer heat waves that have occurred in Japan. The probability distribution functions of the temperature over Japan are well approximated with the Gaussian …
17

Shi, Meifeng, Peng Zhang, Xin Liao, and Zhijian Xue. "An Estimation of Distribution Based Algorithm for Continuous Distributed Constraint Optimization Problems." Information Technology and Control 53, no. 1 (2024): 80–97. http://dx.doi.org/10.5755/j01.itc.53.1.33343.

Abstract:
The Continuous Distributed Constraint Optimization Problem (C-DCOP) is a constraint-processing framework for continuous-variable problems in multi-agent systems. There is a constraint cost function between every two mutually restrictive agents in a C-DCOP, and each function is defined over a set of continuous variables. The goal of a C-DCOP solving algorithm is to bring the sum of the constraint cost functions to an extremum. At present, some C-DCOP solving algorithms have been proposed, but they share common problems such as restrictions on the form of the constraint cost function and a tendency to fall into …
18

Indriany, Sylvia, Ade Sjafruddin, Aine Kusumawati, and Widyarini Weningtyas. "Identification of cumulative prospect theory parameters for mode choice model." MATEC Web of Conferences 270 (2019): 03012. http://dx.doi.org/10.1051/matecconf/201927003012.

Abstract:
The use of Cumulative Prospect Theory (CPT) in decision making related to transportation risk is still much debated, mainly because the travel and socio-economic characteristics of the traveller make different responses possible to the specified Reference Point (RP) as well as to the loss aversion. This difference can be seen in the values of the Cumulative Prospect Theory parameters. Therefore, this paper discusses the determination of the CPT parameters that affect the public transportation mode choice model for work trips. The reference point, as an essential part of the …
19

Abdeen, Suad, Mohd Shareduwan Mohd Kasihmuddin, Nur Ezlin Zamri, Gaeithry Manoharam, Mohd Asyraf Mansor, and Nada Alshehri. "S-Type Random k Satisfiability Logic in Discrete Hopfield Neural Network Using Probability Distribution: Performance Optimization and Analysis." Mathematics 11, no. 4 (2023): 984. http://dx.doi.org/10.3390/math11040984.

Abstract:
Recently, a variety of non-systematic satisfiability studies on Discrete Hopfield Neural Networks have been introduced to overcome a lack of interpretation. Although a flexible structure was established to assist in the generation of a wide range of spatial solutions that converge on global minima, the fundamental problem is that the existing logic completely ignores the distribution and features of the probability dataset, as well as the literal status distribution. Thus, this study considers a new type of non-systematic logic termed S-type Random k Satisfiability, which employs a creative layer of …
20

Wang, Li-Min, Peng Chen, Musa Mammadov, Yang Liu, and Si-Yuan Wu. "Alleviating the independence assumptions of averaged one-dependence estimators by model weighting." Intelligent Data Analysis 25, no. 6 (2021): 1431–51. http://dx.doi.org/10.3233/ida-205400.

Abstract:
Of the numerous proposals to refine naive Bayes by weakening its attribute independence assumption, averaged one-dependence estimators (AODE) has been shown to achieve significantly higher classification accuracy at a moderate cost in classification efficiency. However, all one-dependence estimators (ODEs) in AODE have the same weights and are treated equally. To address this issue, model weighting, which assigns discriminative weights to ODEs and then linearly combines their probability estimates, has been proved to be an efficient and effective approach. Most information-theoretic weighting …
21

Kang, Qiangqiang, Jiagen Hou, Liqin Liu, Mingqiu Hou, and Yuming Liu. "Quantitative Prediction of Braided Sandbodies Based on Probability Fusion and Multi-Point Geostatistics." Energies 16, no. 6 (2023): 2796. http://dx.doi.org/10.3390/en16062796.

Abstract:
Predicting the spatial distribution of braided fluvial facies reservoirs is of paramount significance for oil and gas exploration and development. Given that seismic data enjoy the advantage of dense spatial sampling, many methods have been proposed to predict the reservoir distribution based on different seismic attributes. Nevertheless, different seismic attributes have different sensitivities to the reservoirs, and the informational redundancy between them makes it difficult to combine them effectively. Regarding reservoir modeling, multi-point geostatistics represents the distribution characteristics …
22

Yin, Tao, Xiaojuan Mao, Xingtan Wu, Hengrong Ju, Weiping Ding, and Xibei Yang. "An improved D-S evidence theory based neighborhood rough classification approach." Journal of Intelligent & Fuzzy Systems 41, no. 6 (2021): 6601–13. http://dx.doi.org/10.3233/jifs-210462.

Abstract:
The neighborhood classifier, a common classification method, is applied in pattern recognition and data mining. It mainly relies on the majority voting strategy to judge each category. This strategy only considers the number of samples in the neighborhood but ignores their distribution, which leads to decreased classification accuracy. To overcome these shortcomings and improve classification performance, D-S evidence theory is applied to represent the evidential support of the other samples in the neighborhood, and the distance between samples in the neighborhood …
23

Tang, Liangrui, Zhilin Lu, and Bing Fan. "Energy Efficient and Reliable Routing Algorithm for Wireless Sensors Networks." Applied Sciences 10, no. 5 (2020): 1885. http://dx.doi.org/10.3390/app10051885.

Abstract:
In energy-constrained wireless sensor networks, low energy utilization and unbalanced energy distribution seriously affect the operation of the network. Therefore, efficient and reasonable routing algorithms are needed to achieve higher Quality of Service (QoS). The Dempster–Shafer (DS) evidence theory can fuse multiple attributes of sensor nodes with reasonable theoretical deduction and has a low demand for prior knowledge. Based on the above, we propose an energy-efficient and reliable routing algorithm based on DS evidence theory (DS-EERA). First, DS-EERA establishes three attributes …
24

Dombi, József, Ana Vranković Lacković, and Jonatan Lerga. "A New Insight into Entropy Based on the Fuzzy Operators, Applied to Useful Information Extraction from Noisy Time-Frequency Distributions." Mathematics 11, no. 3 (2023): 505. http://dx.doi.org/10.3390/math11030505.

Abstract:
In this paper, we study the connections between generalized mean operators and entropies, where the mean value operators are related to the strictly monotone logical operators of fuzzy theory. Here, we propose a new entropy measure based on the family of generalized Dombi operators. Namely, this measure is obtained by using the Dombi operator as a generator function in the general solution of the bisymmetric functional equation. We show how the proposed entropy can be used in a fuzzy system where the performance is consistent in choosing the best alternative in Multiple Attribute Decision-Making …
25

Rutkauskas, Aleksandras Vytautas, Džiuljeta Ruškyte, and Vytas Navickas. "Evaluation of the Influence of the Structure And Rate of Taxes And Social Insurance Contributions on Labour Market Using the Stochastically Informative Expert System." Business: Theory and Practice 14, no. (2) (2013): 83–96. https://doi.org/10.3846/btp.2013.10.

Abstract:
The aim of this article is to make use of the possibilities of a stochastically informative expert system, adapting it for the analysis of interactions between the tax scale and structure and changes in the labour market. In this article, stochastically informative expertise is understood as a case of expert evaluation in which the expert estimates of the possibilities of the analysed characteristic are presented as probability distributions. The authors aim to constructively merge the idea of stochastically informative expertise with the possibilities of correlation and regression analysis methods.
26

Xu, Lvguan, Dazhong Wu, and Zhenying Chen. "Research on Web Visual Development Platform Based on Microservice." Mathematical Problems in Engineering 2022 (May 14, 2022): 1–6. http://dx.doi.org/10.1155/2022/9422601.

Abstract:
In order to meet the needs of diversified terminal networks, a Web visual development platform based on microservices is proposed. According to the characteristics of network data distribution and resource attributes in the big data environment, the platform selects a service-oriented architecture as the basic data collaborative management system, establishes a sample set containing various data types, calculates the density distribution of each type of data in a continuous function based on the sample set, and obtains the weight probability of the maximum- and minimum-density data of the sample set by …
27

Baikov, I. R., S. V. Kitaev, and O. V. Smorodova. "Set of indicators for dependability evaluation of gas compression units." Dependability 18, no. 4 (2018): 16–21. http://dx.doi.org/10.21683/1729-2646-2018-18-4-16-21.

Abstract:
The paper is dedicated to improving the evaluation methods of one of the most important operating characteristics of gas compression units (GCUs), i.e. dependability, under conditions of a decreasing pipeline utilization rate. Currently, the dependability of units is characterized by a set of parameters based on identifying the time spent by a unit in certain operational states. The paper presents the primary findings regarding the dependability coefficients of GPA-Ts-18 units, 41 of which are operated in multi-yard compressor stations (CSs) of one of Gazprom’s subsidiaries …
28

Supriyatin, Wahyu. "Application of Naive Bayes Algorithm to Analysis of Free Fatty Acid (FFA) Production Based on Fruit Freshness Level." Komputasi: Jurnal Ilmiah Ilmu Komputer dan Matematika 20, no. 1 (2022): 24–34. http://dx.doi.org/10.33751/komputasi.v20i1.6293.

Abstract:
Cooking oil is a basic need for everyone and is used to process food ingredients. Using cooking oil repeatedly and continuously, heating it at high temperatures, can increase the free fatty acid (FFA) levels in the oil; the more the oil is reused, the higher the FFA content. Testing the FFA levels in oil can be done using the FFA test, because FFA can affect the selling price of CPO when it is marketed. In addition, FFA affects the free fatty acid levels of the CPO. This study aims to analyse FFA production in palm oil products based on the level of freshness of the …
29

Appaia, Loganathan, and Shalini Kandaswamy. "Selection of single sampling plans by attributes under the conditions of zero-inflated Poisson distribution." International Journal of Quality & Reliability Management 31, no. 9 (2014): 1002–11. http://dx.doi.org/10.1108/ijqrm-06-2012-0093.

Abstract:
Purpose – The purpose of this paper is to determine single sampling plans (SSPs) by attributes when the number of nonconformities is distributed according to a zero-inflated Poisson (ZIP) distribution. Design/methodology/approach – Manufacturing processes have nowadays been aligned properly and are monitored well, so that the occurrence of nonconformities is a rare phenomenon. The information related to the number of nonconformities per product will contain a large number of zeros. Under such circumstances, the appropriate probability distribution of the number of nonconformities is a ZIP distribution …
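Under the ZIP model this abstract describes, the lot-acceptance probability of a single sampling plan with acceptance number c is simply the ZIP CDF evaluated at c. A minimal sketch (the plan parameters below are hypothetical, not taken from the paper's tables):

```python
import math

def zip_pmf(k, lam, pi):
    """P(X = k) under a zero-inflated Poisson: with probability pi the
    process emits a structural zero, otherwise a Poisson(lam) count."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1.0 - pi) * poisson if k == 0 else (1.0 - pi) * poisson

def accept_prob(c, lam, pi):
    """Lot-acceptance probability of a single sampling plan that accepts
    when the observed nonconformity count is at most c (the ZIP CDF at c)."""
    return sum(zip_pmf(k, lam, pi) for k in range(c + 1))

# Zero inflation raises P(X = 0), and hence the acceptance probability,
# relative to a plain Poisson model with the same lam.
print(accept_prob(2, lam=1.5, pi=0.4))
```

Designing the plan then amounts to searching for the sample size and acceptance number that meet the producer's and consumer's risk constraints under this acceptance probability.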
30

McCord, Mark R., Dario Hidalgo, Prem Goel, and Morton E. O’Kelly. "Value of Traffic Assignment and Flow Prediction in Multiattribute Network Design: Framework, Issues, and Preliminary Results." Transportation Research Record: Journal of the Transportation Research Board 1607, no. 1 (1997): 171–77. http://dx.doi.org/10.3141/1607-23.

Abstract:
The well-defined concept of value of perfect information (VOPI) was used to assess the value of improving flow prediction and the relative value of improving components of the prediction system. The concept is introduced with a simplified example of choosing whether to build a highway segment in a corridor; the example is then extended to a network, and more realistic components are incorporated. The examples are worked through to illustrate the general approach, the types of results that could be obtained, and the issues that arise. The probability distributions of the attributes – cost, time …
31

Liu, Jian, and Yong Jian Liu. "Reliability Analysis of Continuous Steel Truss Bridge Stiffened with Rigid Cables Based on Stochastic Finite Element Method." Advanced Materials Research 639-640 (January 2013): 1060–66. http://dx.doi.org/10.4028/www.scientific.net/amr.639-640.1060.

Abstract:
To analyze the reliability of Dongjiang Bridge, a steel truss bridge stiffened with rigid cables, a new analysis method combining the advantages of several common reliability computing methods was put forward, which can conveniently obtain the probability distribution and numerical attributes of a complex bridge structure's response, and from them the reliability index and failure probability. A stochastic finite element model was established to analyze this bridge, in which parameters such as material, geometric dimensions, and loads were simulated as stochastic variables. …
32

Wakui, Takashi, Yoichi Takagishi, and Masatoshi Futakawa. "Cavitation Damage Prediction in Mercury Target for Pulsed Spallation Neutron Source Using Monte Carlo Simulation." Materials 16, no. 17 (2023): 5830. http://dx.doi.org/10.3390/ma16175830.

Abstract:
Cavitation damage on a mercury target vessel for a pulsed spallation neutron source is induced by proton beam injection into mercury. Cavitation damage is one of the factors limiting the allowable beam power and the lifetime of a mercury target vessel. A method of predicting the cavitation damage using Monte Carlo simulations was proposed, taking into account the uncertainties of the core positions of cavitation bubbles and the impact pressure distributions. The distribution of impact pressure attributed to the collapse of an individual cavitation bubble was assumed to be Gaussian, and the probability …
33

Ferrero, Alessandro, Harsha Vardhana Jetti, Sina Ronaghi, and Simona Salicone. "A method to consider a maximum admissible risk in decision-making procedures based on measurement results." Acta IMEKO 12, no. 2 (2023): 1–9. http://dx.doi.org/10.21014/actaimeko.v12i2.1518.

Abstract:
Measurement uncertainty plays a very important role in ensuring the validity of decision-making procedures, since it is the main source of incorrect decisions in conformity assessment. The guidelines given by the current Standards allow one to take a decision of conformity or non-conformity according to the given limit and the measurement uncertainty associated with the measured value. Due to measurement uncertainty, a risk of a wrong decision is always present, and the Standards also give indications on how to evaluate this risk, although they mostly refer to a normal probability density function to represent …
34

Behrens, R. A., M. K. MacLeod, T. T. Tran, and A. C. Alimi. "Incorporating Seismic Attribute Maps in 3D Reservoir Models." SPE Reservoir Evaluation & Engineering 1, no. 02 (1998): 122–26. http://dx.doi.org/10.2118/36499-pa.

Full text
Abstract:
Summary We introduce a new geostatistical method to incorporate seismic attribute maps into a three-dimensional (3D) reservoir model. The method explicitly honors the difference in vertical resolution between seismic and well-log data. The method, called sequential Gaussian simulation with block Kriging (SGSBK), treats the seismic map as a soft estimate of the average reservoir property. With this method, the average of the cell values in anyone vertical column of grid cells is constrained by the value of the seismic map over that column. The result is a model that contains vertical variabilit
APA, Harvard, Vancouver, ISO, and other styles
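The column-average constraint that SGSBK enforces can be shown crudely — this is not the SGSBK algorithm itself, only a post-hoc shift illustrating what "the average of a vertical column honors the seismic map value" means (the porosity values are hypothetical):

```python
import random

def constrain_column_mean(cells, target_mean):
    """Shift a vertical column of simulated cell values so its average
    honours a seismic-map 'soft' estimate (crude sketch, not full SGSBK)."""
    shift = target_mean - sum(cells) / len(cells)
    return [c + shift for c in cells]

rng = random.Random(1)
column = [rng.gauss(0.20, 0.05) for _ in range(20)]   # simulated porosities
column = constrain_column_mean(column, target_mean=0.25)
mean = sum(column) / len(column)
```

The actual method builds the constraint into the kriging system, preserving the simulated vertical variability rather than shifting it afterwards.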
35

Morisawa, Junji, Takahiro Otani, Jo Nishino, Ryo Emoto, Kunihiko Takahashi, and Shigeyuki Matsui. "Semi-parametric empirical Bayes factor for genome-wide association studies." European Journal of Human Genetics 29, no. 5 (2021): 800–807. http://dx.doi.org/10.1038/s41431-020-00800-x.

Full text
Abstract:
Bayes factor analysis has the attractive property of accommodating the risks of both false negatives and false positives when identifying susceptibility gene variants in genome-wide association studies (GWASs). For a particular SNP, the critical aspect of this analysis is that it incorporates the probability of obtaining the observed value of a statistic on disease association under the alternative hypotheses of non-null association. An approximate Bayes factor (ABF) was proposed by Wakefield (Genetic Epidemiology 2009;33:79–86) based on a normal prior for the underlying effect-size di
APA, Harvard, Vancouver, ISO, and other styles
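The normal-prior approximate Bayes factor this abstract builds on can be sketched using the usual Wakefield (2009) form, in which small values favour association (variable names are illustrative):

```python
import math

def wakefield_abf(beta_hat, se, prior_var):
    """Approximate Bayes factor comparing H0 (no association) to H1 with a
    N(0, W) prior on the effect size, per Wakefield (2009):
    ABF = sqrt((V + W) / V) * exp(-z^2 * W / (2 * (V + W)))."""
    V = se ** 2                      # sampling variance of the estimate
    W = prior_var                    # prior variance of the effect size
    z2 = (beta_hat / se) ** 2
    return math.sqrt((V + W) / V) * math.exp(-z2 * W / (2.0 * (V + W)))

strong = wakefield_abf(beta_hat=0.5, se=0.1, prior_var=0.04)   # |z| = 5
null = wakefield_abf(beta_hat=0.0, se=0.1, prior_var=0.04)     # z = 0
```

The paper's semi-parametric variant replaces the fixed normal prior with an empirically estimated effect-size distribution.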
36

Pradana, Khaliq, and F. Budiman. "OPTIMIZATION ACCURACY VALUE OF AGRICULTURAL LAND FERTILITY CLASSIFICATION USING SOFT VOTING METHOD." Sinkron 9, no. 1 (2024): 152–64. http://dx.doi.org/10.33395/sinkron.v9i1.13159.

Full text
Abstract:
Soil fertility strongly influences agricultural yields: plants grow well when their nutrient intake is met. The purpose of this research is to improve the accuracy of predicting soil fertility with machine learning by combining two classification algorithms through the soft voting method in the classification of agricultural land fertility. In this research, one of the ensemble learning methods, called soft voting, is employed. Soft voting is used to enhance accuracy by optimizing the combination of algorithms based on the highest probability provide
APA, Harvard, Vancouver, ISO, and other styles
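Soft voting as described — averaging class probabilities across classifiers and taking the most probable class — can be sketched without any ML library (the two classifier outputs below are hypothetical):

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several classifiers and
    return (winning class index, mean probabilities) — i.e. soft voting."""
    n = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean = [sum(p[k] for p in prob_lists) / n for k in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__), mean

# Two hypothetical classifiers scoring the classes (infertile, fertile):
clf_a = [0.40, 0.60]
clf_b = [0.20, 0.80]
label, mean = soft_vote([clf_a, clf_b])
```

Weighted soft voting would simply replace the plain average with a weighted one, which is one way such combinations are "optimized".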
37

Hasan, Md Kamrul, and Lalit Kumar. "Determining Adequate Sample Size for Social Survey Research." Journal of the Bangladesh Agricultural University 22, no. 2 (2024): 146–57. http://dx.doi.org/10.3329/jbau.v22i2.74547.

Full text
Abstract:
Determination of a valid sample size is a fundamental step in research. This paper explains how existing formulas are tied together in a single thread by applying the concepts of standard error, margin of error, Z and t scores, confidence intervals and sampling distributions. Introducing the concept of the sample control ratio, we suggest a unified formula in which I is the sample size, N is the population, t is the t-value at a desired level of probability with df = (N-1) and  is the sample control ratio to be estimated by for continuous variables and for categorical variables where  is the proportion o
APA, Harvard, Vancouver, ISO, and other styles
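The paper's unified formula is truncated in this abstract, but the classical Cochran-style computation it relates to — a z score, a margin of error, and a finite-population correction — can be sketched as follows (parameter values are illustrative):

```python
import math

def cochran_sample_size(N, z, p, e):
    """Cochran's sample size for a categorical variable, with the
    finite-population correction:
    n0 = z^2 * p * (1 - p) / e^2,  n = n0 / (1 + (n0 - 1) / N)."""
    n0 = z ** 2 * p * (1.0 - p) / e ** 2
    return math.ceil(n0 / (1.0 + (n0 - 1.0) / N))

# 95 % confidence (z = 1.96), maximum variability p = 0.5, 5 % margin:
n = cochran_sample_size(N=10_000, z=1.96, p=0.5, e=0.05)
```

For small populations the correction matters; as N grows, n approaches the uncorrected n0 ≈ 385.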
38

Al-Hinai, F. S., L. N. Gunawardhana, and I. S. Al-Shaqsi. "A Comparison of the Design Peak-Flow Estimated by the Simulated and Observed Storm Hydrographs." International Journal of Innovative Science and Research Technology 7, no. 11 (2022): 1149–59. https://doi.org/10.5281/zenodo.7439896.

Full text
Abstract:
The estimation of the peak-flow corresponding to a given return period is a significant factor in designing hydraulic structures. There are many methods for estimating the peak-flow, and two were used in this study. Firstly, an appropriate probability distribution function is fitted to the recorded annual maximum (AM) wadi-flow series to determine the wadi-flow rate with a certain exceedance probability. Secondly, in the absence of long-term wadi-flow data, peak-flows simulated by a rainfall-runoff model are used. In this study, these two methods were used to estimate th
APA, Harvard, Vancouver, ISO, and other styles
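The first method — fitting a probability distribution to the annual-maximum series and reading off a return-period quantile — can be sketched with a method-of-moments Gumbel fit (the AM series below is hypothetical, not the paper's data):

```python
import math
import statistics

def gumbel_quantile(am_series, T):
    """Peak flow for return period T from a Gumbel fit to an annual-maximum
    series (method of moments): Q_T = mu - beta * ln(-ln(1 - 1/T))."""
    mean = statistics.mean(am_series)
    std = statistics.stdev(am_series)
    beta = std * math.sqrt(6.0) / math.pi    # scale parameter
    mu = mean - 0.5772 * beta                # location (Euler–Mascheroni const.)
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Hypothetical annual-maximum wadi flows (m^3/s):
am = [120.0, 95.0, 150.0, 80.0, 200.0, 110.0, 170.0, 90.0, 130.0, 160.0]
q100 = gumbel_quantile(am, T=100)
```

The 100-year quantile lies above every observed maximum, as expected for an extrapolated design flow.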
39

Narmontas, Martynas, Petras Rupšys, and Edmundas Petrauskas. "Construction of Reducible Stochastic Differential Equation Systems for Tree Height–Diameter Connections." Mathematics 8, no. 8 (2020): 1363. http://dx.doi.org/10.3390/math8081363.

Full text
Abstract:
This study proposes a general bivariate stochastic differential equation model of population growth which includes random forces governing the dynamics of the bivariate distribution of size variables. The dynamics of the bivariate probability density function of the size variables in a population are described by the mixed-effect-parameter Vasicek, Gompertz, Bertalanffy, and gamma-type bivariate stochastic differential equations (SDEs). The newly derived bivariate probability density function and its marginal univariate, as well as the conditional univariate function, can be applied for t
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Ya, and Qiang Xiong. "Color perception and recognition method for Guangdong embroidery image based on discrete mathematical model." Journal of Intelligent & Fuzzy Systems 40, no. 3 (2021): 3887–97. http://dx.doi.org/10.3233/jifs-191484.

Full text
Abstract:
The traditional method of color perception recognition for Guangdong embroidery images reproduces color with poor stereoscopic fidelity. Therefore, this paper introduces a discrete mathematical model to design a new method of color perception recognition for Guangdong embroidery images. Through histogram equalization, an input image with a relatively concentrated gray distribution is transformed into a histogram-equalized output image with an approximately uniform distribution, enhancing the dynamic range of pixel gray values. The Yuexiu image is smoothed and filtered by the median filtering method to remove the noise in the imag
APA, Harvard, Vancouver, ISO, and other styles
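The histogram-equalization step described here can be sketched in a few lines: map each grey level through the normalized cumulative histogram (the pixel values below are illustrative):

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization sketch: remap each grey level through the
    normalized cumulative histogram, spreading a concentrated distribution
    over the full dynamic range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first non-empty bin
    n = len(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# Grey values concentrated in 100..103 get stretched over 0..255:
out = equalize_histogram([100, 100, 101, 102, 103, 103, 103, 103])
```

The mapping is monotone, so pixel ordering is preserved while contrast increases.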
41

Rosli, Muhammad Raziq, Mohd Khairul Idlan Muhammad, and Zulfaqar Sa’adi. "Best Fit Distribution for Annual Maximum Rainfall Events: IDF Curve Construction for Sibu, Sarawak." IOP Conference Series: Earth and Environmental Science 1505, no. 1 (2025): 012016. https://doi.org/10.1088/1755-1315/1505/1/012016.

Full text
Abstract:
Intensity-duration-frequency (IDF) curves depict the relationship between rainfall intensity, duration, and frequency and are vital for designing flood mitigation infrastructure. This study aimed to develop IDF curves for Sibu, Malaysia, using 21 years (2000-2020) of sub-daily rainfall data. The performance of three probability distribution functions (PDFs) - generalized extreme value (GEV), Gumbel, and generalized Pareto (GP) - and two-parameter estimation methods - maximum likelihood estimation (MLE) and generalized maximum likelihood estimation (GMLE) - were evaluated using goodnes
APA, Harvard, Vancouver, ISO, and other styles
42

Liang, Xiao, Yanjin Wang, and Ruili Wang. "Uncertainty Analysis of Detonation Based on Probability Learning on Manifold." Acta Physica Sinica 74, no. 12 (2025): 0. https://doi.org/10.7498/aps.74.20241501.

Full text
Abstract:
Detonation tests suffer from small experimental datasets due to the high risk of implementation and the substantial cost of sample production and measurement. The major challenges of limited data lie in constructing the probability distribution of physical quantities and in the application of machine learning. Probability learning on manifold (PLoM) can generate a large number of realizations reconcilable with practical common sense, and the underlying physical mechanism is preserved in the generated samples, so PLoM is viewed as an efficient tool for tackling small samples. To begin with, experimental data
APA, Harvard, Vancouver, ISO, and other styles
43

Muhammad Saiful, Amri Muliawan Nur, Aswian Editri Sutriandi, Eka Puspita, and B. Nadila Nuzululnisa. "Implementasi Data Mining Menggunakan Algoritma Naïve Bayes Untuk Klasifikasi Penyaluran Dana Zakat." Infotek: Jurnal Informatika dan Teknologi 8, no. 1 (2025): 239–49. https://doi.org/10.29408/jit.v8i1.28624.

Full text
Abstract:
Zakat is one of the pillars in Islam that aims to reduce economic disparity and assist those in need. Effective distribution of zakat requires a system capable of accurately identifying and targeting mustahik (zakat recipients). This study aims to implement data mining techniques using the Naïve Bayes Algorithm for the classification of zakat fund distribution at the National Amil Zakat Agency (BAZNAS) in East Lombok Regency. The Naïve Bayes Algorithm was chosen for its ability to predict categories based on probability and historical data. The data used is private data obtained through the fi
APA, Harvard, Vancouver, ISO, and other styles
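A categorical Naïve Bayes classifier of the kind described — predicting a category from class-conditional probabilities with Laplace smoothing — can be sketched as follows (the features and records are hypothetical, not BAZNAS data):

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Tiny categorical Naive Bayes with Laplace smoothing; returns a
    predict() closure. The '+ 2' assumes two observed values per feature."""
    label_counts = Counter(labels)
    feat_counts = defaultdict(Counter)   # (feature idx, label) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            feat_counts[(i, y)][v] += 1

    def predict(row):
        best, best_p = None, -1.0
        for y, cy in label_counts.items():
            p = cy / len(labels)                                   # prior
            for i, v in enumerate(row):
                p *= (feat_counts[(i, y)][v] + 1) / (cy + 2)       # likelihood
            if p > best_p:
                best, best_p = y, p
        return best

    return predict

# Hypothetical (income level, dependants) -> eligible-for-zakat label:
X = [("low", "many"), ("low", "few"), ("high", "few"), ("high", "many")]
y = ["yes", "yes", "no", "no"]
predict = train_nb(X, y)
label = predict(("low", "many"))
```

Historical distribution records play the role of `X`/`y` in the study's pipeline.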
44

Li, Rui, Zhenqi Jing, Jingjing Ma, Long Qin, Kai Yan, and Chang Wen. "The Influence of Pore Distribution of Coal Char in the Char Fragmentation and Included Minerals Partitioning: A Percolation Modeling." Atmosphere 13, no. 4 (2022): 628. http://dx.doi.org/10.3390/atmos13040628.

Full text
Abstract:
The processes of char fragmentation, included-mineral partitioning and particulate matter (PM) formation during dense and porous char combustion were studied with a site percolation model. This model simulated the diffusion-controlled regime of char combustion; the size distributions of included minerals in typical bituminous coal were determined by a computer-controlled scanning electron microscope (CCSEM), and the data were placed into the char matrix randomly. The model presents the influence of the initial pore distribution on char oxidation and fragmentation, the impact of the char conver
APA, Harvard, Vancouver, ISO, and other styles
45

Sadkovyi, Volodymyr, Boris Pospelov, Evgenіy Rybka, et al. "Development of a method for assessing the reliability of fire detection in premises." Eastern-European Journal of Enterprise Technologies 3, no. 10 (117) (2022): 56–62. https://doi.org/10.15587/1729-4061.2022.259493.

Full text
Abstract:
The object of this study is the detection of fires in the premises. The problem that was solved is the development of tools to assess the reliability of detection of fires in the premises based on the recurrence of the vector of increases in dangerous parameters of the gas environment. The method includes the sequential implementation of five procedures related to the formation of the vector of current increases in dangerous parameters, determining the recurrence of the current vector and evaluating the empirical distribution function relative to the calculated current recurrence of the state
APA, Harvard, Vancouver, ISO, and other styles
46

Su, Yanfang, Kanglong Liu, and Andrew K. F. Cheung. "Epistemic modality in translated and non-translated English court judgments of Hong Kong: A corpus-based study." Journal of Specialised Translation, no. 40 (July 25, 2023): 56–80. https://doi.org/10.26034/cm.jostrans.2023.525.

Full text
Abstract:
Court judgments serve as important precedents for future judicial decision-making in common law systems. The legal meanings of judgments are conveyed by specific linguistic devices, among which epistemic modality plays an important role in indicating the probability of propositions to construct convincing arguments and recognise potential differing opinions (Abbuhl 2006). This study adopts a corpus-based approach to compare different categories of epistemic modality in translated and non-translated English court judgments in Hong Kong. Based on the framework put forward in Halliday and Matthie
APA, Harvard, Vancouver, ISO, and other styles
47

Louie, Kenway. "Asymmetric and adaptive reward coding via normalized reinforcement learning." PLOS Computational Biology 18, no. 7 (2022): e1010350. http://dx.doi.org/10.1371/journal.pcbi.1010350.

Full text
Abstract:
Learning is widely modeled in psychology, neuroscience, and computer science by prediction error-guided reinforcement learning (RL) algorithms. While standard RL assumes linear reward functions, reward-related neural activity is a saturating, nonlinear function of reward; however, the computational and behavioral implications of nonlinear RL are unknown. Here, we show that nonlinear RL incorporating the canonical divisive normalization computation introduces an intrinsic and tunable asymmetry in prediction error coding. At the behavioral level, this asymmetry explains empirical variability in
APA, Harvard, Vancouver, ISO, and other styles
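The core idea — a saturating, divisively normalized reward producing asymmetric prediction errors — can be illustrated with a minimal sketch (the functional form and parameters are illustrative, not the paper's exact model):

```python
def normalized_reward(r, sigma=1.0):
    """Divisively normalized (saturating) reward: r / (sigma + r)."""
    return r / (sigma + r)

def td_update(V, r, alpha=0.1, sigma=1.0):
    """One nonlinear-RL value update computed on the normalized reward."""
    delta = normalized_reward(r, sigma) - V
    return V + alpha * delta, delta

# Around a learned value, equal-sized reward increments and decrements
# produce unequal prediction errors — asymmetry from the saturating code:
V = normalized_reward(2.0)        # value learned for a reward of 2
_, d_up = td_update(V, 3.0)       # reward better than expected
_, d_dn = td_update(V, 1.0)       # reward worse than expected
asym = abs(d_dn) > abs(d_up)
```

The concavity of the normalized reward compresses gains more than losses, which is the kind of tunable asymmetry the paper analyzes.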
48

Lu, Peng, Zhijun Li, Liqiong Shi, and Wenfeng Huang. "Marine radar observations of iceberg distribution in the summer Southern Ocean." Annals of Glaciology 54, no. 62 (2013): 35–40. http://dx.doi.org/10.3189/2013aog62a086.

Full text
Abstract:
During the 19th Chinese National Antarctic Research Expedition from December 2002 to January 2003, 1085 icebergs were observed along the cruise track within the range 58–68° S in the Southern Ocean using the marine radar on the R/V Xuelong. These icebergs were located mainly in the Ross Sea, Weddell Sea and Prydz Bay with lengths ranging from 68 to 8169 m. Both power-law and Weibull functions are applied to the curve fitting of cumulative probability distribution of iceberg length in each region. The results reveal that the power-law function underestimates the measured data in the mid
APA, Harvard, Vancouver, ISO, and other styles
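The two candidate fits for the cumulative length distribution can be sketched as plain functions (the shape and scale parameters below are assumptions for illustration, not the fitted values from the cruise data):

```python
import math

def weibull_cdf(x, k, lam):
    """Weibull cumulative probability, one candidate fit for iceberg length."""
    return 1.0 - math.exp(-((x / lam) ** k))

def powerlaw_ccdf(x, xmin, alpha):
    """Pareto-type power-law exceedance probability P(L > x) for x >= xmin,
    the competing fit."""
    return (x / xmin) ** (1.0 - alpha)

# Illustrative parameters only (k, lam, alpha are not the paper's fits):
p_small = weibull_cdf(200.0, k=1.2, lam=600.0)
p_large = weibull_cdf(2000.0, k=1.2, lam=600.0)
```

Comparing these curves against the empirical cumulative distribution, region by region, is the fitting exercise the abstract describes.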
49

Sin, So-Yun, Seok-Jin Hong, Ik Kim, Noh-hyun Lim, and Tak Hur. "A Study on Uncertainty analysis of LCI with dividing industrial categories apply in Input-Output Analysis." Korean Journal of Life Cycle Assessment 8, no. 1 (2007): 53–57. http://dx.doi.org/10.62765/kjlca.2007.8.1.53.

Full text
Abstract:
Building an LCI requires the prerequisite step of collecting and calculating a large amount of process data, and unpredicted circumstances may be involved in this kind of operation. In the IOA case alone, several uncertainties have been found in handling data, in dividing industry categories into classes, and in regional and timing differences, mostly attributable to IOA's own inherent traits. It is therefore high time to consider seriously the need for research on analysing and evaluating those uncertainties quantitatively and properly. Here in this stud
APA, Harvard, Vancouver, ISO, and other styles
50

Williams, Christopher R., V. N. Bringi, Lawrence D. Carey, et al. "Describing the Shape of Raindrop Size Distributions Using Uncorrelated Raindrop Mass Spectrum Parameters." Journal of Applied Meteorology and Climatology 53, no. 5 (2014): 1282–96. http://dx.doi.org/10.1175/jamc-d-13-076.1.

Full text
Abstract:
Rainfall retrieval algorithms often assume a gamma-shaped raindrop size distribution (DSD) with three mathematical parameters Nw, Dm, and μ. If only two independent measurements are available, as with the dual-frequency precipitation radar on the Global Precipitation Measurement (GPM) mission core satellite, then retrieval algorithms are underconstrained and require assumptions about DSD parameters. To reduce the number of free parameters, algorithms can assume that μ is either a constant or a function of Dm. Previous studies have suggested μ–Λ constraints [where Λ = (4 + μ)/Dm], but c
APA, Harvard, Vancouver, ISO, and other styles
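The normalized gamma DSD with parameters Nw, Dm and μ, and Λ = (4 + μ)/Dm as in the abstract, can be sketched and sanity-checked numerically — the ratio of the distribution's 4th to 3rd moment should recover Dm (parameter values are illustrative):

```python
import math

def normalized_gamma_dsd(D, Nw, Dm, mu):
    """Normalized gamma raindrop size distribution N(D) with Lambda =
    (4 + mu) / Dm, in the usual normalized form (Nw in mm^-1 m^-3, D in mm)."""
    f_mu = (6.0 / 4.0 ** 4) * (4.0 + mu) ** (mu + 4.0) / math.gamma(mu + 4.0)
    lam = (4.0 + mu) / Dm
    return Nw * f_mu * (D / Dm) ** mu * math.exp(-lam * D)

# Check the defining property Dm = (4th moment) / (3rd moment) by crude
# numerical integration over D (mm); exact for this parameterization:
dD = 0.001
Ds = [i * dD for i in range(1, 20000)]
m3 = sum(normalized_gamma_dsd(D, 8000.0, 1.5, 3.0) * D ** 3 * dD for D in Ds)
m4 = sum(normalized_gamma_dsd(D, 8000.0, 1.5, 3.0) * D ** 4 * dD for D in Ds)
Dm_check = m4 / m3
```

Because Dm and Nw absorb two degrees of freedom, μ is the remaining shape parameter that retrieval algorithms must constrain.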