Academic literature on the topic 'Bayes-Lernen'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bayes-Lernen.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Bayes-Lernen"

1

Krüger, Dirk, and Moritz Krell. "Maschinelles Lernen mit Aussagen zur Modellkompetenz." Zeitschrift für Didaktik der Naturwissenschaften 26, no. 1 (2020): 157–72. http://dx.doi.org/10.1007/s40573-020-00118-7.

Full text
Abstract:
Machine learning methods can help analyze responses to open-format tasks in large samples. Using statements by biology teachers, pre-service biology teachers, and biology education researchers on the five sub-competencies of modeling competence (N_Training = 456; N_Classification = 260) as an example, the quality of machine learning with four algorithms (naïve Bayes, logistic regression, support vector machines, and decision trees) is examined. Satisfactory to good agreement between human and computer-based coding during training (345–607 statements, depending on the sub-competency) provides evidence for the validity of interpreting the codings of individual algorithms; for classification (157–260 statements, depending on the sub-competency), this drops to moderate agreement. Positive correlations between the coded level and the external criterion of response length indicate that coding with naïve Bayes does not yield valid results. Important attributes that the algorithms use for classification correspond to relevant terms in the level definitions of the underlying coding manual. Finally, it is discussed to what extent machine learning with the algorithms used reaches the quality of human coding for statements on modeling competence and could thus be used for second coding or in teaching situations.
APA, Harvard, Vancouver, ISO, and other styles
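The abstract above compares four text classifiers for coding open-format statements. As a rough illustration of that setup only (not the study's data, coding manual, or pipeline), the following sketch trains the same four algorithm families on a few invented example sentences with scikit-learn; all texts, labels, and parameters are hypothetical.

```python
# Hedged sketch: the four classifier families named in the abstract
# (naïve Bayes, logistic regression, SVM, decision trees) applied to a
# toy text-coding task. Texts and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical statements about models, standing in for the coded answers.
texts = [
    "a model is a copy of the original",
    "models help to derive testable hypotheses",
    "a model simply depicts the phenomenon",
    "models are tools for predicting new observations",
    "the model matches the original exactly",
    "with the model we can test predictions experimentally",
]
labels = [0, 1, 0, 1, 0, 1]  # 0 = lower level, 1 = higher level (invented)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

classifiers = {
    "naive Bayes": MultinomialNB(),
    "logistic regression": LogisticRegression(),
    "SVM": LinearSVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X, labels)  # fit on the sparse tf-idf features
    print(f"{name}: training accuracy {clf.score(X, labels):.2f}")
```

In a real application the training/classification split and the agreement statistics (human vs. machine coding) would be the critical part, as the abstract emphasizes.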

Dissertations / Theses on the topic "Bayes-Lernen"

1

Jähnichen, Patrick. "Time Dynamic Topic Models." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-200796.

Full text
Abstract:
Information extraction from large corpora can be a useful tool for many applications in industry and academia. For instance, political communication science has just recently begun to use the opportunities that come with the availability of massive amounts of information available through the Internet and the computational tools that natural language processing can provide. We give a linguistically motivated interpretation of topic modeling, a state-of-the-art algorithm for extracting latent semantic sets of words from large text corpora, and extend this interpretation to cover issues and issue-cycles as theoretical constructs coming from political communication science. We build on a dynamic topic model, a model whose semantic sets of words are allowed to evolve over time governed by a Brownian motion stochastic process and apply a new form of analysis to its result. Generally this analysis is based on the notion of volatility as in the rate of change of stocks or derivatives known from econometrics. We claim that the rate of change of sets of semantically related words can be interpreted as issue-cycles, the word sets as describing the underlying issue. Generalizing over the existing work, we introduce dynamic topic models that are driven by general (Brownian motion is a special case of our model) Gaussian processes, a family of stochastic processes defined by the function that determines their covariance structure. We use the above assumption and apply a certain class of covariance functions to allow for an appropriate rate of change in word sets while preserving the semantic relatedness among words. Applying our findings to a large newspaper data set, the New York Times Annotated corpus (all articles between 1987 and 2007), we are able to identify sub-topics in time, time-localized topics, and find patterns in their behavior over time. However, we have to drop the assumption of semantic relatedness over all available time for any one topic.
Time-localized topics are consistent in themselves but do not necessarily share semantic meaning between each other. They can, however, be interpreted to capture the notion of issues and their behavior that of issue-cycles.
APA, Harvard, Vancouver, ISO, and other styles
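The thesis abstract above describes dynamic topic models driven by Gaussian processes whose covariance function sets how quickly word sets may change, with Brownian motion as a special case. A minimal sketch of that covariance idea, with purely illustrative time points and parameters (not the thesis's settings):

```python
# Hedged sketch: two GP covariance functions governing how a single
# topic-word weight may drift over time. Brownian motion, k(s, t) = min(s, t),
# is the special case; a squared-exponential kernel yields smoother paths
# whose rate of change is set by the lengthscale. Values are illustrative.
import numpy as np

def brownian_cov(times):
    # Brownian motion covariance: k(s, t) = min(s, t)
    return np.minimum.outer(times, times)

def sq_exp_cov(times, lengthscale=2.0, variance=1.0):
    # Squared-exponential covariance: k(s, t) = v * exp(-(s - t)^2 / (2 l^2))
    d = np.subtract.outer(times, times)
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
times = np.linspace(0.1, 10.0, 50)
jitter = 1e-8 * np.eye(len(times))  # numerical stabilizer for sampling

# One sample path per kernel: the evolving (unnormalized) weight of a
# single word in a single topic across time slices.
bm_path = rng.multivariate_normal(np.zeros(len(times)), brownian_cov(times) + jitter)
se_path = rng.multivariate_normal(np.zeros(len(times)), sq_exp_cov(times) + jitter)

print("Brownian path range:", bm_path.min(), bm_path.max())
print("Smooth GP path range:", se_path.min(), se_path.max())
```

Swapping the covariance function while keeping the rest of the model fixed is exactly the generalization the abstract describes.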
2

Müller, Christoph. "Bayesian learning in financial markets: price adjustments, fundamentals, and risk /." Münster : Verl.-Haus Monsenstein und Vannerdat, 2009. http://d-nb.info/995684332/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gönner, Lorenz, Julien Vitay, and Fred Hamker. "Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model." Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-230378.

Full text
Abstract:
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions.
APA, Harvard, Vancouver, ISO, and other styles
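The abstract above mentions applying Bayesian decoding to the model's simulated spike trains for comparison with experimental data. A minimal sketch of standard Poisson-likelihood position decoding from place-cell spike counts, with synthetic tuning curves and invented parameters (not the model's actual network):

```python
# Hedged sketch: Bayesian decoding of position from place-cell spike
# counts under a Poisson likelihood and a flat prior. All tuning curves,
# rates, and bin widths here are hypothetical illustration values.
import numpy as np

positions = np.linspace(0.0, 1.0, 100)   # candidate positions on a track
centers = np.linspace(0.0, 1.0, 8)       # assumed place-field centers
width, peak_rate, dt = 0.1, 20.0, 0.1    # field width, peak rate (Hz), bin (s)

# Gaussian tuning curves: expected firing rate of each cell at each position.
rates = peak_rate * np.exp(
    -0.5 * ((positions[None, :] - centers[:, None]) / width) ** 2
)

def decode(spike_counts):
    # Poisson log-likelihood of the observed counts at each candidate
    # position; exponentiate and normalize to get the posterior.
    expected = rates * dt
    loglik = (spike_counts[:, None] * np.log(expected + 1e-12) - expected).sum(axis=0)
    post = np.exp(loglik - loglik.max())
    return post / post.sum()

# Simulate one time bin with the animal at position index 30 (x = 0.3).
rng = np.random.default_rng(1)
counts = rng.poisson(rates[:, 30] * dt)
posterior = decode(counts)
print("decoded position:", positions[np.argmax(posterior)])
```

Applied bin by bin to a simulated sequence, the decoded positions trace out the trajectory the sequence represents, which is how such decoding permits direct comparison with experimental recordings.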
4

Kariv, Shachar. "Theoretical and experimental essays in social learning /." 2003. http://www.gbv.de/dms/zbw/557893747.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jähnichen, Patrick. "Time Dynamic Topic Models." Doctoral thesis, 2015. https://ul.qucosa.de/id/qucosa%3A14614.

Full text
Abstract:
Information extraction from large corpora can be a useful tool for many applications in industry and academia. For instance, political communication science has just recently begun to use the opportunities that come with the availability of massive amounts of information available through the Internet and the computational tools that natural language processing can provide. We give a linguistically motivated interpretation of topic modeling, a state-of-the-art algorithm for extracting latent semantic sets of words from large text corpora, and extend this interpretation to cover issues and issue-cycles as theoretical constructs coming from political communication science. We build on a dynamic topic model, a model whose semantic sets of words are allowed to evolve over time governed by a Brownian motion stochastic process and apply a new form of analysis to its result. Generally this analysis is based on the notion of volatility as in the rate of change of stocks or derivatives known from econometrics. We claim that the rate of change of sets of semantically related words can be interpreted as issue-cycles, the word sets as describing the underlying issue. Generalizing over the existing work, we introduce dynamic topic models that are driven by general (Brownian motion is a special case of our model) Gaussian processes, a family of stochastic processes defined by the function that determines their covariance structure. We use the above assumption and apply a certain class of covariance functions to allow for an appropriate rate of change in word sets while preserving the semantic relatedness among words. Applying our findings to a large newspaper data set, the New York Times Annotated corpus (all articles between 1987 and 2007), we are able to identify sub-topics in time, time-localized topics, and find patterns in their behavior over time. However, we have to drop the assumption of semantic relatedness over all available time for any one topic.
Time-localized topics are consistent in themselves but do not necessarily share semantic meaning between each other. They can, however, be interpreted to capture the notion of issues and their behavior that of issue-cycles.
APA, Harvard, Vancouver, ISO, and other styles
6

Meder, Björn. "Seeing versus Doing: Causal Bayes Nets as Psychological Models of Causal Reasoning." Doctoral thesis, 2006. http://hdl.handle.net/11858/00-1735-0000-0006-AC65-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Bayes-Lernen"

1

Duda, Richard O., Peter E. Hart, and David G. Stork. Pattern Classification. 2nd ed. Wiley, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Duda, Richard O., David G. Stork, and Peter E. Hart. Pattern Classification. John Wiley & Sons, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Duda, Richard O., David G. Stork, and Peter E. Hart. Pattern Classification. John Wiley & Sons, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Bayes-Lernen"

1

Rödder, W., and H. P. Reidmacher. "Bayes-Lernen in Inferenznetzwerken." In Operations Research Proceedings. Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-75639-9_71.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Frochte, Jörg. "Statistische Grundlagen und Bayes-Klassifikator." In Maschinelles Lernen. Carl Hanser Verlag GmbH & Co. KG, 2018. http://dx.doi.org/10.3139/9783446457058.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Frochte, Jörg. "Statistische Grundlagen und Bayes-Klassifikator." In Maschinelles Lernen. Carl Hanser Verlag GmbH & Co. KG, 2019. http://dx.doi.org/10.3139/9783446459977.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Frochte, Jörg. "Statistische Grundlagen und Bayes-Klassifikator." In Maschinelles Lernen. Carl Hanser Verlag GmbH & Co. KG, 2020. http://dx.doi.org/10.3139/9783446463554.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Richter, Stefan. "Nichtparametrische Methoden und der naive Bayes-Klassifizierer." In Statistisches und maschinelles Lernen. Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-662-59354-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles