Academic literature on the topic 'Bayesian surprise'

Below are lists of relevant journal articles, theses, book chapters, and conference papers on the topic 'Bayesian surprise.'

Journal articles on the topic "Bayesian surprise"

1. Itti, Laurent, and Pierre Baldi. "Bayesian surprise attracts human attention." Vision Research 49, no. 10 (2009): 1295–306. http://dx.doi.org/10.1016/j.visres.2008.09.007.

2. Gijsen, Sam, Miro Grundei, Robert T. Lange, Dirk Ostwald, and Felix Blankenburg. "Neural surprise in somatosensory Bayesian learning." PLOS Computational Biology 17, no. 2 (2021): e1008068. http://dx.doi.org/10.1371/journal.pcbi.1008068.

Abstract:
Tracking statistical regularities of the environment is important for shaping human behavior and perception. Evidence suggests that the brain learns environmental dependencies using Bayesian principles. However, much remains unknown about the employed algorithms, for somesthesis in particular. Here, we describe the cortical dynamics of the somatosensory learning system to investigate both the form of the generative model as well as its neural surprise signatures. Specifically, we recorded EEG data from 40 participants subjected to a somatosensory roving-stimulus paradigm and performed single-trial modeling across peri-stimulus time in both sensor and source space. Our Bayesian model selection procedure indicates that evoked potentials are best described by a non-hierarchical learning model that tracks transitions between observations using leaky integration. From around 70ms post-stimulus onset, secondary somatosensory cortices are found to represent confidence-corrected surprise as a measure of model inadequacy. Indications of Bayesian surprise encoding, reflecting model updating, are found in primary somatosensory cortex from around 140ms. This dissociation is compatible with the idea that early surprise signals may control subsequent model update rates. In sum, our findings support the hypothesis that early somatosensory processing reflects Bayesian perceptual learning and contribute to an understanding of its underlying mechanisms.
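Several of the entries in these lists use 'Bayesian surprise' in the sense of Itti and Baldi: the Kullback–Leibler divergence between an observer's posterior and prior beliefs after an observation. A minimal sketch for a Beta–Bernoulli observer; the midpoint-rule integration and the illustrative parameter values are this summary's own choices, not code from any cited paper:

```python
import math

def beta_pdf(x, a, b):
    """Density of a Beta(a, b) distribution at x in (0, 1); assumes a, b >= 1."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def kl_beta(a1, b1, a2, b2, n=50_000):
    """KL(Beta(a1, b1) || Beta(a2, b2)) in nats, by midpoint-rule integration."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        p = beta_pdf(x, a1, b1)
        q = beta_pdf(x, a2, b2)
        total += p * math.log(p / q)
    return total / n

def bayesian_surprise(prior_a, prior_b, obs):
    """Surprise of one Bernoulli observation (0 or 1) under a Beta prior:
    the KL divergence from the prior to the updated posterior."""
    post_a, post_b = prior_a + obs, prior_b + (1 - obs)
    return kl_beta(post_a, post_b, prior_a, prior_b)

# After a run of successes, another success barely moves beliefs, while a
# failure forces a larger belief update and hence more Bayesian surprise:
print(bayesian_surprise(10, 1, 1))  # expected event: small
print(bayesian_surprise(10, 1, 0))  # unexpected event: larger
```

Note that the measure depends on the belief update, not on the raw probability of the event alone, which is what distinguishes it from Shannon surprise.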
3. Bayarri, M. J., and J. Morales. "Bayesian measures of surprise for outlier detection." Journal of Statistical Planning and Inference 111, no. 1-2 (2003): 3–22. http://dx.doi.org/10.1016/s0378-3758(02)00282-3.

4. Stern, J. M., and C. A. De Braganca Pereira. "Bayesian epistemic values: focus on surprise, measure probability!" Logic Journal of IGPL 22, no. 2 (2013): 236–54. http://dx.doi.org/10.1093/jigpal/jzt023.

5. Correll, Michael, and Jeffrey Heer. "Surprise! Bayesian Weighting for De-Biasing Thematic Maps." IEEE Transactions on Visualization and Computer Graphics 23, no. 1 (2017): 651–60. http://dx.doi.org/10.1109/tvcg.2016.2598618.

6. Burns, Kevin. "Computing the creativeness of amusing advertisements: A Bayesian model of Burma-Shave's muse." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 29, no. 1 (2014): 109–28. http://dx.doi.org/10.1017/s0890060414000699.

Abstract:
How do humans judge the creativeness of an artwork or other artifact? This article suggests that such judgments are based on the pleasures of an aesthetic experience, which can be modeled as a mathematical product of psychological arousal and appraisal. The arousal stems from surprise, and is computed as a marginal entropy using information theory. The appraisal assigns meaning, by which the surprise is resolved, and is computed as a posterior probability using Bayesian theory. This model is tested by obtaining human ratings of surprise, meaning, and creativeness for artifacts in a domain of advertising design. The empirical results show that humans do judge creativeness as a product of surprise and meaning, consistent with the computational model of arousal and appraisal. Implications of the model are discussed with respect to advancing artificial intelligence in the arts as well as improving the computational evaluation of creativity in engineering and design.
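The abstract above describes creativeness as a product of arousal (surprise, from information theory) and appraisal (posterior meaning, from Bayes). A toy numeric sketch of that product; the prior and likelihood table here are invented for illustration and are not Burns's actual model or data:

```python
import math

# Toy prior over possible interpretations of an ad's punch line, and the
# likelihood of the observed wording under each interpretation (invented values).
prior = {"literal": 0.7, "pun": 0.2, "absurd": 0.1}
likelihood = {"literal": 0.05, "pun": 0.6, "absurd": 0.35}  # P(wording | interp)

# Arousal: surprise of the observed wording, -log2 of its marginal probability.
p_wording = sum(prior[i] * likelihood[i] for i in prior)
surprise = -math.log2(p_wording)

# Appraisal: posterior probability of the interpretation that resolves the pun.
posterior_pun = prior["pun"] * likelihood["pun"] / p_wording

# Judged creativeness modeled as the product of arousal and appraisal.
creativeness = surprise * posterior_pun
print(round(surprise, 3), round(posterior_pun, 3), round(creativeness, 3))
```

A wording that is improbable a priori but well explained a posteriori scores high on both factors, matching the abstract's claim that surprise must be resolved by meaning to count as creative.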
7. Visalli, Antonino, Mariagrazia Capizzi, Ettore Ambrosini, Bruno Kopp, and Antonino Vallesi. "Electroencephalographic correlates of temporal Bayesian belief updating and surprise." NeuroImage 231 (May 2021): 117867. http://dx.doi.org/10.1016/j.neuroimage.2021.117867.

8. Ostwald, Dirk, Bernhard Spitzer, Matthias Guggenmos, Timo T. Schmidt, Stefan J. Kiebel, and Felix Blankenburg. "Evidence for neural encoding of Bayesian surprise in human somatosensation." NeuroImage 62, no. 1 (2012): 177–88. http://dx.doi.org/10.1016/j.neuroimage.2012.04.050.

9. Evans, Michael. "Bayesian inference procedures derived via the concept of relative surprise." Communications in Statistics - Theory and Methods 26, no. 5 (1997): 1125–43. http://dx.doi.org/10.1080/03610929708831972.

10. Lee, J., Y. Fan, and S. A. Sisson. "Bayesian threshold selection for extremal models using measures of surprise." Computational Statistics & Data Analysis 85 (May 2015): 84–99. http://dx.doi.org/10.1016/j.csda.2014.12.004.


Dissertations / Theses on the topic "Bayesian surprise"

1. Tertel, Kathrin. "Models of Bayesian Learning and Neural Surprise in Somesthesis." Doctoral thesis, Freie Universität Berlin, 2018. http://d-nb.info/1176707507/34.

2. Hörberg, Thomas. "Probabilistic and Prominence-driven Incremental Argument Interpretation in Swedish." Doctoral thesis, Stockholms universitet, Institutionen för lingvistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-129763.

Abstract:
This dissertation investigates how grammatical functions in transitive sentences (i.e., 'subject' and 'direct object') are distributed in written Swedish discourse with respect to morphosyntactic as well as semantic and referential (i.e., prominence-based) information. It also investigates how assignment of grammatical functions during on-line comprehension of transitive sentences in Swedish is influenced by interactions between morphosyntactic and prominence-based information. In the dissertation, grammatical functions are assumed to express role-semantic (e.g., Actor and Undergoer) and discourse-pragmatic (e.g., Topic and Focus) functions of NP arguments. Grammatical functions correlate with prominence-based information that is associated with these functions (e.g., animacy and definiteness). Because of these correlations, both prominence-based and morphosyntactic information are assumed to serve as argument interpretation cues during on-line comprehension. These cues are utilized in a probabilistic fashion. The weightings, interplay and availability of them are reflected in their distribution in language use, as shown in corpus data. The dissertation investigates these assumptions by using various methods in a triangulating fashion. The first contribution of the dissertation is an ERP (event-related brain potentials) experiment that investigates the ERP response to grammatical function reanalysis, i.e., a revision of a tentative grammatical function assignment, during on-line comprehension of transitive sentences. Grammatical function reanalysis engenders a response that correlates with the (re-)assignment of thematic roles to the NP arguments. This suggests that the comprehension of grammatical functions involves assigning role-semantic functions to the NPs. The second contribution is a corpus study that investigates the distribution of prominence-based, verb-semantic and morphosyntactic features in transitive sentences in written discourse.
The study finds that overt morphosyntactic information about grammatical functions is used more frequently when the grammatical functions cannot be determined on the basis of word order or animacy. This suggests that writers are inclined to accommodate the understanding of their recipients by more often providing formal markers of grammatical functions in potentially ambiguous sentences. The study also finds that prominence features and their interactions with verb-semantic features are systematically distributed across grammatical functions and therefore can predict these functions with a high degree of confidence. The third contribution consists of three computational models of incremental grammatical function assignment. These models are based upon the distribution of argument interpretation cues in written discourse. They predict processing difficulties during grammatical function assignment in terms of on-line change in the expectation of different grammatical function assignments over the presentation of sentence constituents. The most prominent model predictions are qualitatively consistent with reading times in a self-paced reading experiment of Swedish transitive sentences. These findings indicate that grammatical function assignment draws upon statistical regularities in the distribution of morphosyntactic and prominence-based information in language use. Processing difficulties in the comprehension of Swedish transitive sentences can therefore be predicted on the basis of corpus distributions.
3. Jang, Gun Ho. "Invariant Procedures for Model Checking, Checking for Prior-Data Conflict and Bayesian Inference." Thesis, 2010. http://hdl.handle.net/1807/24771.

Abstract:
We consider a statistical theory as being invariant when the results of two statisticians' independent data analyses, based upon the same statistical theory and using effectively the same statistical ingredients, are the same. We discuss three aspects of invariant statistical theories. Both model checking and checking for prior-data conflict are assessments of a single null hypothesis without any specific alternative hypothesis. Hence, we conduct these assessments using a measure of surprise based on a discrepancy statistic. For the discrete case, it is natural to use the probability of obtaining a data point that is less probable than the observed data. For the continuous case, the natural analog of this is not invariant under equivalent choices of discrepancies. A new method is developed to obtain an invariant assessment. This approach also allows several discrepancies to be combined into one discrepancy via a single P-value. Second, Bayesians have developed many noninformative priors that are supposed to contain no information concerning the true parameter value. Many of these are data dependent or improper, which can lead to a variety of difficulties. Gelman (2006) introduced the notion of weak informativity as a compromise between informative and noninformative priors, without giving a precise definition. We give a precise definition of weak informativity using a measure of prior-data conflict that assesses whether or not a prior places its mass around the parameter values having relatively high likelihood. In particular, we say a prior Pi_2 is weakly informative relative to another prior Pi_1 whenever Pi_2 leads to fewer prior-data conflicts a priori than Pi_1. This leads to a precise quantitative measure of how much less informative a weakly informative prior is. In Bayesian data analysis, highest posterior density inference is a commonly used method. This approach is not invariant to the choice of dominating measure or reparametrizations.
We explore properties of relative surprise inferences suggested by Evans (1997). Relative surprise inferences which compare the belief changes from a priori to a posteriori are invariant under reparametrizations. We mainly focus on the connection of relative surprise inferences to classical Bayesian decision theory as well as important optimalities.
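Thesis entry 3 builds on the relative surprise inferences of Evans (1997) (entry 9 in the journal list): parameter values are ranked by the change in belief from a priori to a posteriori, i.e. the posterior-to-prior ratio. A minimal discrete sketch; the grid, prior, and data are invented for illustration:

```python
# Rank parameter values by how much the data increased belief in them
# (the posterior/prior ratio), per Evans's relative surprise idea.
thetas = [i / 10 for i in range(1, 10)]        # candidate coin biases 0.1..0.9
prior = {t: 1 / len(thetas) for t in thetas}   # uniform prior over the grid

heads, tosses = 7, 10                          # illustrative data

def likelihood(t):
    """Binomial kernel for the observed counts (constant factor dropped)."""
    return t ** heads * (1 - t) ** (tosses - heads)

evidence = sum(prior[t] * likelihood(t) for t in thetas)
posterior = {t: prior[t] * likelihood(t) / evidence for t in thetas}

# Relative surprise: belief change from prior to posterior for each value.
ratio = {t: posterior[t] / prior[t] for t in thetas}
best = max(ratio, key=ratio.get)
print(best)  # prints 0.7, the value whose belief the data boosted most
```

With a uniform prior the ratio reduces to a rescaled likelihood; the point of the construction, as the abstract notes, is that unlike highest posterior density inference the ranking is invariant under reparametrization, since prior and posterior densities transform by the same Jacobian, which cancels in the ratio.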

Book chapters on the topic "Bayesian surprise"

1. Strauss, Charlie E. M., and Paul L. Houston. "The Inference of Physical Phenomena in Chemistry: Abstract Tomography, Gedanken Experiments, and Surprisal Analysis." In Maximum Entropy and Bayesian Methods. Springer Netherlands, 1992. http://dx.doi.org/10.1007/978-94-017-2219-3_26.

2. Sun, Yi, Faustino Gomez, and Jürgen Schmidhuber. "Planning to Be Surprised: Optimal Bayesian Exploration in Dynamic Environments." In Artificial General Intelligence. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22887-2_5.


Conference papers on the topic "Bayesian surprise"

1. Ranganathan, A., and F. Dellaert. "Bayesian surprise and landmark detection." In 2009 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2009. http://dx.doi.org/10.1109/robot.2009.5152376.

2. Feld, Sebastian, Andreas Sedlmeier, Markus Friedrich, Jan Franz, and Lenz Belzner. "Bayesian Surprise in Indoor Environments." In SIGSPATIAL '19: 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. ACM, 2019. http://dx.doi.org/10.1145/3347146.3359358.

3. Hasanbelliu, Erion, Kittipat Kampa, Jose C. Principe, and James T. Cobb. "Online learning using a Bayesian surprise metric." In 2012 International Joint Conference on Neural Networks (IJCNN 2012 - Brisbane). IEEE, 2012. http://dx.doi.org/10.1109/ijcnn.2012.6252734.

4. Bencomo, Nelly, and Amel Belaggoun. "A world full of surprises: Bayesian theory of surprise to quantify degrees of uncertainty." In ICSE '14: 36th International Conference on Software Engineering. ACM, 2014. http://dx.doi.org/10.1145/2591062.2591118.

5. Voorhies, Randolph C., Lior Elazary, and Laurent Itti. "Neuromorphic Bayesian Surprise for Far-Range Event Detection." In 2012 9th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2012. http://dx.doi.org/10.1109/avss.2012.49.

6. Wu, Zhuofeng, Cheng Li, Zhe Zhao, Fei Wu, and Qiaozhu Mei. "Identify Shifts of Word Semantics through Bayesian Surprise." In SIGIR '18: The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2018. http://dx.doi.org/10.1145/3209978.3210040.

7. Domingues, Ines, and Jaime S. Cardoso. "Using Bayesian surprise to detect calcifications in mammogram images." In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2014. http://dx.doi.org/10.1109/embc.2014.6943784.

8. Gkioulekas, Ioannis, Georgios Evangelopoulos, and Petros Maragos. "Spatial Bayesian surprise for image saliency and quality assessment." In 2010 17th IEEE International Conference on Image Processing (ICIP 2010). IEEE, 2010. http://dx.doi.org/10.1109/icip.2010.5650991.

9. Visalli, Antonino, Mariagrazia Capizzi, Ettore Ambrosini, Bruno Kopp, and Antonino Vallesi. "Electroencephalographic Correlates of Temporal Bayesian Belief Updating and Surprise." In 2019 Conference on Cognitive Computational Neuroscience. Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1103-0.

10. Musiolek, Lea, Felix Blankenburg, Dirk Ostwald, and Milena Rabovsky. "Modeling the N400 brain potential as Semantic Bayesian Surprise." In 2019 Conference on Cognitive Computational Neuroscience. Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1184-0.
