A ready-made bibliography on the topic "Deep Discriminative Probabilistic Models"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Browse lists of up-to-date articles, books, dissertations, abstracts, and other scholarly sources on the topic "Deep Discriminative Probabilistic Models".

An "Add to bibliography" button is available next to every work in the bibliography. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a ".pdf" file and read its abstract online, provided the relevant details are available in the work's metadata.

Journal articles on the topic "Deep Discriminative Probabilistic Models"

1. Kamran, Fahad, and Jenna Wiens. "Estimating Calibrated Individualized Survival Curves with Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (2021): 240–48. http://dx.doi.org/10.1609/aaai.v35i1.16098.

Abstract:
In survival analysis, deep learning approaches have been proposed for estimating an individual's probability of survival over some time horizon. Such approaches can capture complex non-linear relationships, without relying on restrictive assumptions regarding the relationship between an individual's characteristics and their underlying survival process. To date, however, these methods have focused primarily on optimizing discriminative performance and have ignored model calibration. Well-calibrated survival curves present realistic and meaningful probabilistic estimates of the true underlying …
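
A note for orientation: the survival curves such models output are commonly constructed from discrete-time conditional hazards. The following is a minimal sketch of that standard formulation (generic notation, not taken from the paper above):

h_k(x) = P(T = t_k \mid T \ge t_k, x), \qquad \hat{S}(t_k \mid x) = \prod_{j=1}^{k} \bigl(1 - h_j(x)\bigr)

Calibration then requires that, among individuals with predicted survival \hat{S}(t \mid x) \approx s at horizon t, a fraction close to s actually survives beyond t.
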
2. Al Moubayed, Noura, Stephen McGough, and Bashar Awwad Shiekh Hasan. "Beyond the topics: how deep learning can improve the discriminability of probabilistic topic modelling." PeerJ Computer Science 6 (January 27, 2020): e252. http://dx.doi.org/10.7717/peerj-cs.252.

Abstract:
The article presents a discriminative approach to complement the unsupervised probabilistic nature of topic modelling. The framework transforms the probabilities of the topics per document into class-dependent deep learning models that extract highly discriminatory features suitable for classification. The framework is then used for sentiment analysis with minimum feature engineering. The approach transforms the sentiment analysis problem from the word/document domain to the topics domain making it more robust to noise and incorporating complex contextual information that are not represented …
3. Bhattacharya, Debswapna. "refineD: improved protein structure refinement using machine learning based restrained relaxation." Bioinformatics 35, no. 18 (2019): 3320–28. http://dx.doi.org/10.1093/bioinformatics/btz101.

Abstract:
Motivation: Protein structure refinement aims to bring moderately accurate template-based protein models closer to the native state through conformational sampling. However, guiding the sampling towards the native state by effectively using restraints remains a major issue in structure refinement. Results: Here, we develop a machine learning based restrained relaxation protocol that uses deep discriminative learning based binary classifiers to predict multi-resolution probabilistic restraints from the starting structure and subsequently converts these restraints to be integrated into Rosetta …
4. Wu, Boxi, Jie Jiang, Haidong Ren, et al. "Towards In-Distribution Compatible Out-of-Distribution Detection." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10333–41. http://dx.doi.org/10.1609/aaai.v37i9.26230.

Abstract:
Deep neural network, despite its remarkable capability of discriminating targeted in-distribution samples, shows poor performance on detecting anomalous out-of-distribution data. To address this defect, state-of-the-art solutions choose to train deep networks on an auxiliary dataset of outliers. Various training criteria for these auxiliary outliers are proposed based on heuristic intuitions. However, we find that these intuitively designed outlier training criteria can hurt in-distribution learning and eventually lead to inferior performance. To this end, we identify three causes of the in-distribution …
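
For context on the "training criteria for auxiliary outliers" discussed above, one widely used criterion (outlier exposure; given here as a common baseline, not necessarily the one analyzed in the paper) adds a term that pushes predictions on auxiliary outliers toward the uniform distribution:

\min_f \; \mathbb{E}_{(x,y) \sim D_{\mathrm{in}}} \bigl[ \ell_{\mathrm{CE}}(f(x), y) \bigr] + \lambda \, \mathbb{E}_{x' \sim D_{\mathrm{out}}} \bigl[ \mathrm{KL}\bigl( \mathcal{U} \,\|\, p_f(\cdot \mid x') \bigr) \bigr]

where \mathcal{U} is the uniform distribution over the in-distribution classes and \lambda balances the two terms.
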
5. Roy, Debaditya, Sarunas Girdzijauskas, and Serghei Socolovschi. "Confidence-Calibrated Human Activity Recognition." Sensors 21, no. 19 (2021): 6566. http://dx.doi.org/10.3390/s21196566.

Abstract:
Wearable sensors are widely used in activity recognition (AR) tasks with broad applicability in health and well-being, sports, geriatric care, etc. Deep learning (DL) has been at the forefront of progress in activity classification with wearable sensors. However, most state-of-the-art DL models used for AR are trained to discriminate different activity classes at high accuracy, not considering the confidence calibration of predictive output of those models. This results in probabilistic estimates that might not capture the true likelihood and is thus unreliable. In practice, it tends to produce …
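
Confidence calibration, as used in this abstract, is typically quantified with the expected calibration error (ECE). A standard binned estimate (independent of the specific models in the paper) is:

\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \, \bigl| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \bigr|

where the n predictions are grouped by confidence into bins B_1, \dots, B_M, \mathrm{acc}(B_m) is the accuracy within a bin, and \mathrm{conf}(B_m) is the average predicted confidence in that bin.
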
6. Tsuda, Koji, Motoaki Kawanabe, Gunnar Rätsch, Sören Sonnenburg, and Klaus-Robert Müller. "A New Discriminative Kernel from Probabilistic Models." Neural Computation 14, no. 10 (2002): 2397–414. http://dx.doi.org/10.1162/08997660260293274.

Abstract:
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments …
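
To make the contrast in this abstract concrete: the Fisher kernel is built from gradients of the marginal log-likelihood, while the TOP kernel is built from the posterior log-odds and its tangent (gradient) vectors. A sketch in standard notation (model parameters \theta, binary label y; scaling and normalization details follow the paper itself):

s(x) = \nabla_\theta \log p(x \mid \theta), \qquad K_{\mathrm{Fisher}}(x, x') = s(x)^\top I(\theta)^{-1} s(x')

v(x, \theta) = \log P(y = +1 \mid x, \theta) - \log P(y = -1 \mid x, \theta), \qquad f_\theta(x) = \bigl( v(x, \theta), \, \partial_{\theta_1} v(x, \theta), \dots, \partial_{\theta_p} v(x, \theta) \bigr), \qquad K_{\mathrm{TOP}}(x, x') = f_\theta(x)^\top f_\theta(x')

Here I(\theta) denotes the Fisher information matrix.
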
7. Ahmed, Nisar, and Mark Campbell. "On estimating simple probabilistic discriminative models with subclasses." Expert Systems with Applications 39, no. 7 (2012): 6659–64. http://dx.doi.org/10.1016/j.eswa.2011.12.042.

8. Du, Fang, Jiangshe Zhang, Junying Hu, and Rongrong Fei. "Discriminative multi-modal deep generative models." Knowledge-Based Systems 173 (June 2019): 74–82. http://dx.doi.org/10.1016/j.knosys.2019.02.023.

9. Che, Tong, Xiaofeng Liu, Site Li, et al. "Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 7002–10. http://dx.doi.org/10.1609/aaai.v35i8.16862.

Abstract:
AI Safety is a major concern in many deep learning applications such as autonomous driving. Given a trained deep learning model, an important natural problem is how to reliably verify the model's prediction. In this paper, we propose a novel framework, deep verifier networks (DVN), to detect unreliable inputs or predictions of deep discriminative models, using separately trained deep generative models. Our proposed model is based on conditional variational auto-encoders with disentanglement constraints to separate the label information from the latent representation. We give both intuitive and …
10. Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen, and Antonio Salmerón. "Probabilistic Models with Deep Neural Networks." Entropy 23, no. 1 (2021): 117. http://dx.doi.org/10.3390/e23010117.

Abstract:
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of …
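
The variational inference referred to in this abstract optimizes a tractable lower bound on the marginal log-likelihood (the ELBO); in its standard form (not specific to this survey):

\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)} \bigl[ \log p_\theta(x \mid z) \bigr] - \mathrm{KL}\bigl( q_\phi(z \mid x) \,\|\, p(z) \bigr)

where q_\phi is an approximate posterior, typically parameterized by a deep neural network, and the bound is maximized jointly over \theta and \phi.
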
More sources