Academic literature on the topic 'Noisy-OR model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Noisy-OR model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Noisy-OR model"

1

Quintanar-Gago, David A., and Pamela F. Nelson. "The extended Recursive Noisy OR model: Static and dynamic considerations." International Journal of Approximate Reasoning 139 (December 2021): 185–200. http://dx.doi.org/10.1016/j.ijar.2021.09.013.

2

Zhou, Kuang, Arnaud Martin, and Quan Pan. "The Belief Noisy-OR Model Applied to Network Reliability Analysis." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 24, no. 06 (2016): 937–60. http://dx.doi.org/10.1142/s0218488516500434.

Abstract:
One difficulty faced in knowledge engineering for Bayesian Network (BN) is the quantification step where the Conditional Probability Tables (CPTs) are determined. The number of parameters included in CPTs increases exponentially with the number of parent variables. The most common solution is the application of the so-called canonical gates. The Noisy-OR (NOR) gate, which takes advantage of the independence of causal interactions, provides a logarithmic reduction of the number of parameters required to specify a CPT. In this paper, an extension of NOR model based on the theory of belief functi…
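As the abstract above notes, the Noisy-OR gate replaces an exponentially large CPT with one "link" probability per parent cause: the effect is absent only if every active cause independently fails to produce it. A minimal sketch of how the full table is recovered from those per-parent probabilities (function and variable names are illustrative, not taken from the paper):

```python
from itertools import product

def noisy_or_cpt(link_probs, leak=0.0):
    """Expand Noisy-OR parameters into a full CPT P(Y=1 | x1..xn).

    link_probs: probability that each parent, when active, causes Y on its own.
    leak:       probability that Y occurs with no active parent (leaky variant).
    Only n (+1) numbers are stored, yet the table has 2**n rows.
    """
    n = len(link_probs)
    cpt = {}
    for assignment in product([0, 1], repeat=n):
        # Y stays off only if the leak and every active cause all fail.
        q = 1.0 - leak
        for xi, pi in zip(assignment, link_probs):
            if xi:
                q *= 1.0 - pi
        cpt[assignment] = 1.0 - q
    return cpt

# Two causes with link probabilities 0.8 and 0.6, no leak:
cpt = noisy_or_cpt([0.8, 0.6])
```

Here `cpt[(1, 1)]` is 1 − (1 − 0.8)(1 − 0.6) = 0.92, while an unconstrained CPT over the same parents would need all four entries specified independently.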
3

Li, W., P. Poupart, and P. Van Beek. "Exploiting Structure in Weighted Model Counting Approaches to Probabilistic Inference." Journal of Artificial Intelligence Research 40 (April 19, 2011): 729–65. http://dx.doi.org/10.1613/jair.3232.

Abstract:
Previous studies have demonstrated that encoding a Bayesian network into a SAT formula and then performing weighted model counting using a backtracking search algorithm can be an effective method for exact inference. In this paper, we present techniques for improving this approach for Bayesian networks with noisy-OR and noisy-MAX relations---two relations that are widely used in practice as they can dramatically reduce the number of probabilities one needs to specify. In particular, we present two SAT encodings for noisy-OR and two encodings for noisy-MAX that exploit the structure or semantic…
4

Büttner, Martha, Lisa Schneider, Aleksander Krasowski, Joachim Krois, Ben Feldberg, and Falk Schwendicke. "Impact of Noisy Labels on Dental Deep Learning—Calculus Detection on Bitewing Radiographs." Journal of Clinical Medicine 12, no. 9 (2023): 3058. http://dx.doi.org/10.3390/jcm12093058.

Abstract:
Supervised deep learning requires labelled data. On medical images, data is often labelled inconsistently (e.g., too large) with varying accuracies. We aimed to assess the impact of such label noise on dental calculus detection on bitewing radiographs. On 2584 bitewings calculus was accurately labeled using bounding boxes (BBs) and artificially increased and decreased stepwise, resulting in 30 consistently and 9 inconsistently noisy datasets. An object detection network (YOLOv5) was trained on each dataset and evaluated on noisy and accurate test data. Training on accurately labeled data yield…
5

Shang, Yuming, He-Yan Huang, Xian-Ling Mao, Xin Sun, and Wei Wei. "Are Noisy Sentences Useless for Distant Supervised Relation Extraction?" Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8799–806. http://dx.doi.org/10.1609/aaai.v34i05.6407.

Abstract:
The noisy labeling problem has been one of the major obstacles for distant supervised relation extraction. Existing approaches usually consider that the noisy sentences are useless and will harm the model's performance. Therefore, they mainly alleviate this problem by reducing the influence of noisy sentences, such as applying bag-level selective attention or removing noisy sentences from sentence-bags. However, the underlying cause of the noisy labeling problem is not the lack of useful information, but the missing relation labels. Intuitively, if we can allocate credible labels for noisy sen…
6

Zheng, Guoqing, Ahmed Hassan Awadallah, and Susan Dumais. "Meta Label Correction for Noisy Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 11053–61. http://dx.doi.org/10.1609/aaai.v35i12.17319.

Abstract:
Leveraging weak or noisy supervision for building effective machine learning models has long been an important research problem. Its importance has further increased recently due to the growing need for large-scale datasets to train deep learning models. Weak or noisy supervision could originate from multiple sources including non-expert annotators or automatic labeling based on heuristics or user interaction signals. There is an extensive amount of previous work focusing on leveraging noisy labels. Most notably, recent work has shown impressive gains by using a meta-learned instance re-weight…
7

Maeda, Shin-ichi, Wen-Jie Song, and Shin Ishii. "Nonlinear and Noisy Extension of Independent Component Analysis: Theory and Its Application to a Pitch Sensation Model." Neural Computation 17, no. 1 (2005): 115–44. http://dx.doi.org/10.1162/0899766052530866.

Abstract:
In this letter, we propose a noisy nonlinear version of independent component analysis (ICA). Assuming that the probability density function (p.d.f.) of sources is known, a learning rule is derived based on maximum likelihood estimation (MLE). Our model involves some algorithms of noisy linear ICA (e.g., Bermond & Cardoso, 1999) or noise-free nonlinear ICA (e.g., Lee, Koehler, & Orglmeister, 1997) as special cases. Especially when the nonlinear function is linear, the learning rule derived as a generalized expectation-maximization algorithm has a similar form to the noisy ICA algorithm…
8

Zhan, Peida, Hong Jiao, Kaiwen Man, and Lijun Wang. "Using JAGS for Bayesian Cognitive Diagnosis Modeling: A Tutorial." Journal of Educational and Behavioral Statistics 44, no. 4 (2019): 473–503. http://dx.doi.org/10.3102/1076998619826040.

Abstract:
In this article, we systematically introduce the just another Gibbs sampler (JAGS) software program to fit common Bayesian cognitive diagnosis models (CDMs) including the deterministic inputs, noisy “and” gate model; the deterministic inputs, noisy “or” gate model; the linear logistic model; the reduced reparameterized unified model; and the log-linear CDM (LCDM). Further, we introduce the unstructured latent structural model and the higher order latent structural model. We also show how to extend these models to consider polytomous attributes, the testlet effect, and longitudinal diagnosis. …
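The deterministic inputs, noisy "or" gate (DINO) model mentioned in this abstract combines a deterministic OR over an item's required attributes with guess and slip noise: mastering any one required attribute suffices, but slipping can hide mastery and guessing can mimic it. A minimal sketch of the item response probability (names and parameterization are illustrative, not taken from the tutorial):

```python
def dino_response_prob(alpha, q, guess, slip):
    """DINO item response probability.

    alpha: examinee's binary attribute-mastery vector
    q:     item's binary attribute-requirement vector (a Q-matrix row)
    guess: probability of answering correctly despite mastering none
    slip:  probability of answering incorrectly despite mastery
    """
    # undetected == 1 only if every required attribute is unmastered
    undetected = 1
    for a, qk in zip(alpha, q):
        undetected *= (1 - a) ** qk
    eta = 1 - undetected  # deterministic OR over required attributes
    # Noisy gate: mastery answers correctly unless a slip occurs;
    # non-mastery answers correctly only by guessing.
    return (1 - slip) if eta else guess

# Item requires either of two attributes; examinee masters the first:
p = dino_response_prob([1, 0], [1, 1], guess=0.2, slip=0.1)
```

With these illustrative numbers `p` is 0.9 (mastery of one required attribute triggers the OR gate), while an examinee mastering neither attribute would answer correctly only with the guessing probability 0.2.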
9

Hong, Zhiwei, Xiaocheng Fan, Tao Jiang, and Jianxing Feng. "End-to-End Unpaired Image Denoising with Conditional Adversarial Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4140–49. http://dx.doi.org/10.1609/aaai.v34i04.5834.

Abstract:
Image denoising is a classic low level vision problem that attempts to recover a noise-free image from a noisy observation. Recent advances in deep neural networks have outperformed traditional prior based methods for image denoising. However, the existing methods either require paired noisy and clean images for training or impose certain assumptions on the noise distribution and data types. In this paper, we present an end-to-end unpaired image denoising framework (UIDNet) that denoises images with only unpaired clean and noisy training images. The critical component of our model is a noise l…
10

Akkaya, Emre Kağan, and Burcu Can. "Transfer learning for Turkish named entity recognition on noisy text." Natural Language Engineering 27, no. 1 (2020): 35–64. http://dx.doi.org/10.1017/s1351324919000627.

Abstract:
In this article, we investigate using deep neural networks with different word representation techniques for named entity recognition (NER) on Turkish noisy text. We argue that valuable latent features for NER can, in fact, be learned without using any hand-crafted features and/or domain-specific resources such as gazetteers and lexicons. In this regard, we utilize character-level, character n-gram-level, morpheme-level, and orthographic character-level word representations. Since noisy data with NER annotation are scarce for Turkish, we introduce a transfer learning model in order to…