Academic literature on the topic 'Deep Discriminative Probabilistic Models'

Generate an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep Discriminative Probabilistic Models.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deep Discriminative Probabilistic Models"

1. Kamran, Fahad, and Jenna Wiens. "Estimating Calibrated Individualized Survival Curves with Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (2021): 240–48. http://dx.doi.org/10.1609/aaai.v35i1.16098.

Abstract:
In survival analysis, deep learning approaches have been proposed for estimating an individual's probability of survival over some time horizon. Such approaches can capture complex non-linear relationships, without relying on restrictive assumptions regarding the relationship between an individual's characteristics and their underlying survival process. To date, however, these methods have focused primarily on optimizing discriminative performance and have ignored model calibration. Well-calibrated survival curves present realistic and meaningful probabilistic estimates of the true underlying …
2. Al Moubayed, Noura, Stephen McGough, and Bashar Awwad Shiekh Hasan. "Beyond the topics: how deep learning can improve the discriminability of probabilistic topic modelling." PeerJ Computer Science 6 (January 27, 2020): e252. http://dx.doi.org/10.7717/peerj-cs.252.

Abstract:
The article presents a discriminative approach to complement the unsupervised probabilistic nature of topic modelling. The framework transforms the probabilities of the topics per document into class-dependent deep learning models that extract highly discriminatory features suitable for classification. The framework is then used for sentiment analysis with minimum feature engineering. The approach transforms the sentiment analysis problem from the word/document domain to the topics domain, making it more robust to noise and incorporating complex contextual information that is not represented …
3. Bhattacharya, Debswapna. "refineD: improved protein structure refinement using machine learning based restrained relaxation." Bioinformatics 35, no. 18 (2019): 3320–28. http://dx.doi.org/10.1093/bioinformatics/btz101.

Abstract:
Motivation: Protein structure refinement aims to bring moderately accurate template-based protein models closer to the native state through conformational sampling. However, guiding the sampling towards the native state by effectively using restraints remains a major issue in structure refinement. Results: Here, we develop a machine learning based restrained relaxation protocol that uses deep discriminative learning based binary classifiers to predict multi-resolution probabilistic restraints from the starting structure and subsequently converts these restraints to be integrated into Rosett…
4. Wu, Boxi, Jie Jiang, Haidong Ren, et al. "Towards In-Distribution Compatible Out-of-Distribution Detection." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10333–41. http://dx.doi.org/10.1609/aaai.v37i9.26230.

Abstract:
Deep neural networks, despite their remarkable capability of discriminating targeted in-distribution samples, show poor performance on detecting anomalous out-of-distribution data. To address this defect, state-of-the-art solutions choose to train deep networks on an auxiliary dataset of outliers. Various training criteria for these auxiliary outliers are proposed based on heuristic intuitions. However, we find that these intuitively designed outlier training criteria can hurt in-distribution learning and eventually lead to inferior performance. To this end, we identify three causes of the in-distribution …
5. Roy, Debaditya, Sarunas Girdzijauskas, and Serghei Socolovschi. "Confidence-Calibrated Human Activity Recognition." Sensors 21, no. 19 (2021): 6566. http://dx.doi.org/10.3390/s21196566.

Abstract:
Wearable sensors are widely used in activity recognition (AR) tasks with broad applicability in health and well-being, sports, geriatric care, etc. Deep learning (DL) has been at the forefront of progress in activity classification with wearable sensors. However, most state-of-the-art DL models used for AR are trained to discriminate different activity classes at high accuracy, not considering the confidence calibration of predictive output of those models. This results in probabilistic estimates that might not capture the true likelihood and are thus unreliable. In practice, it tends to produce …
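The calibration gap this abstract describes is commonly quantified with the expected calibration error (ECE): bin predictions by confidence and average the gap between each bin's confidence and its empirical accuracy. The sketch below is illustrative only and is not taken from the cited paper; all function and variable names are our own.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by bin occupancy
    return ece

# Toy example: 9 of 10 predictions correct at 0.9 confidence -> well calibrated.
conf = np.full(10, 0.9)
hits = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])
print(round(expected_calibration_error(conf, hits), 3))
```

A model that is always fully confident but right only half the time would instead score an ECE of 0.5.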
6. Tsuda, Koji, Motoaki Kawanabe, Gunnar Rätsch, Sören Sonnenburg, and Klaus-Robert Müller. "A New Discriminative Kernel from Probabilistic Models." Neural Computation 14, no. 10 (2002): 2397–414. http://dx.doi.org/10.1162/08997660260293274.

Abstract:
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiment …
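The construction this abstract builds on compares two samples through the gradient of a probabilistic model's log-likelihood (the Fisher score). A minimal sketch for a one-dimensional Gaussian follows; it uses the identity matrix in place of the inverse Fisher information that the full kernel prescribes, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def fisher_score(x, mu, sigma):
    """Gradient of log N(x | mu, sigma^2) with respect to (mu, sigma)."""
    d_mu = (x - mu) / sigma**2
    d_sigma = (x - mu) ** 2 / sigma**3 - 1.0 / sigma
    return np.array([d_mu, d_sigma])

def fisher_kernel(x1, x2, mu, sigma):
    """Inner product of Fisher scores (identity in place of the
    inverse Fisher information matrix, for simplicity)."""
    return fisher_score(x1, mu, sigma) @ fisher_score(x2, mu, sigma)

# Two samples on the same side of the mean yield a positive kernel value.
print(fisher_kernel(1.0, 2.0, mu=0.0, sigma=1.0))  # 2.0
```

The TOP kernel discussed in the entry replaces the marginal log-likelihood gradient with tangent vectors of posterior log-odds, but the inner-product structure is analogous.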
7. Ahmed, Nisar, and Mark Campbell. "On estimating simple probabilistic discriminative models with subclasses." Expert Systems with Applications 39, no. 7 (2012): 6659–64. http://dx.doi.org/10.1016/j.eswa.2011.12.042.

8. Du, Fang, Jiangshe Zhang, Junying Hu, and Rongrong Fei. "Discriminative multi-modal deep generative models." Knowledge-Based Systems 173 (June 2019): 74–82. http://dx.doi.org/10.1016/j.knosys.2019.02.023.

9. Che, Tong, Xiaofeng Liu, Site Li, et al. "Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 7002–10. http://dx.doi.org/10.1609/aaai.v35i8.16862.

Abstract:
AI Safety is a major concern in many deep learning applications such as autonomous driving. Given a trained deep learning model, an important natural problem is how to reliably verify the model's prediction. In this paper, we propose a novel framework --- deep verifier networks (DVN) to detect unreliable inputs or predictions of deep discriminative models, using separately trained deep generative models. Our proposed model is based on conditional variational auto-encoders with disentanglement constraints to separate the label information from the latent representation. We give both intuitive …
10. Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen, and Antonio Salmerón. "Probabilistic Models with Deep Neural Networks." Entropy 23, no. 1 (2021): 117. http://dx.doi.org/10.3390/e23010117.

Abstract:
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of …