
Journal articles on the topic 'Distribution learning theory'

Consult the top 50 journal articles for your research on the topic 'Distribution learning theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Vidyasagar, M., and S. R. Kulkarni. "Some contributions to fixed-distribution learning theory." IEEE Transactions on Automatic Control 45, no. 2 (2000): 217–34. http://dx.doi.org/10.1109/9.839945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tsutsumi, Emiko, Ryo Kinoshita, and Maomi Ueno. "Deep Item Response Theory as a Novel Test Theory Based on Deep Learning." Electronics 10, no. 9 (2021): 1020. http://dx.doi.org/10.3390/electronics10091020.

Abstract:
Item Response Theory (IRT) evaluates, on the same scale, examinees who take different tests. It requires the linkage of examinees’ ability scores as estimated from different tests. However, the IRT linkage techniques assume independently random sampling of examinees’ abilities from a standard normal distribution. Because of this assumption, the linkage not only requires much labor to design, but it also has no guarantee of optimality. To resolve that shortcoming, this study proposes a novel IRT based on deep learning, Deep-IRT, which requires no assumption of randomly sampled examinees’ abilit
3

Najarian, Kayvan. "A Fixed-Distribution PAC Learning Theory for Neural FIR Models." Journal of Intelligent Information Systems 25, no. 3 (2005): 275–91. http://dx.doi.org/10.1007/s10844-005-0194-y.

4

Wang, Jing, and Xin Geng. "Theoretical Analysis of Label Distribution Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5256–63. http://dx.doi.org/10.1609/aaai.v33i01.33015256.

Abstract:
As a novel learning paradigm, label distribution learning (LDL) explicitly models label ambiguity with the definition of label description degree. Although lots of work has been done to deal with real-world applications, theoretical results on LDL remain unexplored. In this paper, we rethink LDL from theoretical aspects, towards analyzing learnability of LDL. Firstly, risk bounds for three representative LDL algorithms (AA-kNN, AA-BP and SA-ME) are provided. For AA-kNN, Lipschitzness of the label distribution function is assumed to bound the risk, and for AA-BP and SA-ME, Rademacher complexity
5

Cohen, William W. "Using Distribution-Free Learning Theory to Analyze Solution-Path Caching Mechanisms." Computational Intelligence 8, no. 2 (1992): 336–75. http://dx.doi.org/10.1111/j.1467-8640.1992.tb00370.x.

6

Juba, Brendan. "On learning finite-state quantum sources." Quantum Information and Computation 12, no. 1&2 (2012): 105–18. http://dx.doi.org/10.26421/qic12.1-2-7.

Abstract:
We examine the complexity of learning the distributions produced by finite-state quantum sources. We show how prior techniques for learning hidden Markov models can be adapted to the quantum generator model to find that the analogous state of affairs holds: information-theoretically, a polynomial number of samples suffice to approximately identify the distribution, but computationally, the problem is as hard as learning parities with noise, a notorious open question in computational learning theory.
7

Haghtalab, Nika, Matthew O. Jackson, and Ariel D. Procaccia. "Belief polarization in a complex world: A learning theory perspective." Proceedings of the National Academy of Sciences 118, no. 19 (2021): e2010144118. http://dx.doi.org/10.1073/pnas.2010144118.

Abstract:
We present two models of how people form beliefs that are based on machine learning theory. We illustrate how these models give insight into observed human phenomena by showing how polarized beliefs can arise even when people are exposed to almost identical sources of information. In our first model, people form beliefs that are deterministic functions that best fit their past data (training sets). In that model, their inability to form probabilistic beliefs can lead people to have opposing views even if their data are drawn from distributions that only slightly disagree. In the second model,
8

Caicedo, Santiago, Robert E. Lucas, and Esteban Rossi-Hansberg. "Learning, Career Paths, and the Distribution of Wages." American Economic Journal: Macroeconomics 11, no. 1 (2019): 49–88. http://dx.doi.org/10.1257/mac.20170390.

Abstract:
We develop a theory of career paths and earnings where agents organize in production hierarchies. Agents climb these hierarchies as they learn stochastically from others. Earnings grow as agents acquire knowledge and occupy positions with more subordinates. We contrast these and other implications with US census data for the period 1990 to 2010, matching the Lorenz curve of earnings and the observed mean experience-earnings profiles. We show the increase in wage inequality over this period can be rationalized with a shift in the level of the complexity and profitability of technologies relativ
9

Ghosh, Himadri, and Prajneshu. "Statistical learning theory for fitting multimodal distribution to rainfall data: an application." Journal of Applied Statistics 38, no. 11 (2011): 2533–45. http://dx.doi.org/10.1080/02664763.2011.559210.

10

Powell, Nathan, and Andrew J. Kurdila. "Distribution-free learning theory for approximating submanifolds from reptile motion capture data." Computational Mechanics 68, no. 2 (2021): 337–56. http://dx.doi.org/10.1007/s00466-021-02034-0.

11

Fang, Zhiying, Zheng-Chu Guo, and Ding-Xuan Zhou. "Optimal learning rates for distribution regression." Journal of Complexity 56 (February 2020): 101426. http://dx.doi.org/10.1016/j.jco.2019.101426.

12

Zhou, Baohua, David Hofmann, Itai Pinkoviezky, Samuel J. Sober, and Ilya Nemenman. "Chance, long tails, and inference in a non-Gaussian, Bayesian theory of vocal learning in songbirds." Proceedings of the National Academy of Sciences 115, no. 36 (2018): E8538–E8546. http://dx.doi.org/10.1073/pnas.1713020115.

Abstract:
Traditional theories of sensorimotor learning posit that animals use sensory error signals to find the optimal motor command in the face of Gaussian sensory and motor noise. However, most such theories cannot explain common behavioral observations, for example, that smaller sensory errors are more readily corrected than larger errors and large abrupt (but not gradually introduced) errors lead to weak learning. Here, we propose a theory of sensorimotor learning that explains these observations. The theory posits that the animal controls an entire probability distribution of motor commands rathe
13

Kearns, Michael J., and Robert E. Schapire. "Efficient distribution-free learning of probabilistic concepts." Journal of Computer and System Sciences 48, no. 3 (1994): 464–97. http://dx.doi.org/10.1016/s0022-0000(05)80062-5.

14

Fuhs, Mark C., and David S. Touretzky. "Context Learning in the Rodent Hippocampus." Neural Computation 19, no. 12 (2007): 3173–215. http://dx.doi.org/10.1162/neco.2007.19.12.3173.

Abstract:
We present a Bayesian statistical theory of context learning in the rodent hippocampus. While context is often defined in an experimental setting in relation to specific background cues or task demands, we advance a single, more general notion of context that suffices for a variety of learning phenomena. Specifically, a context is defined as a statistically stationary distribution of experiences, and context learning is defined as the problem of how to form contexts out of groups of experiences that cluster together in time. The challenge of context learning is solving the model selection prob
15

González, Carlos R., and Yaser S. Abu-Mostafa. "Mismatched Training and Test Distributions Can Outperform Matched Ones." Neural Computation 27, no. 2 (2015): 365–87. http://dx.doi.org/10.1162/neco_a_00697.

Abstract:
In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test distributions in supervised learning can in fact outperform matched distributions in terms of the bottom line, the out-of-sample performance, independent of the target function in question. This surprising result has theoretical and algorithmic ramifications that we discuss.
16

Rezek, I., D. S. Leslie, S. Reece, et al. "On Similarities between Inference in Game Theory and Machine Learning." Journal of Artificial Intelligence Research 33 (October 23, 2008): 259–83. http://dx.doi.org/10.1613/jair.2523.

Abstract:
In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learnin
17

Dwiyanto, Dwiyanto, Catra Indra, Ahmad Faisal, Iswandi Idris, and Rizaldy Khair. "Rancang Bangun Media Pembelajaran Avionic - Radio Theory II Berbasis Multimedia Animasi pada ATKP Medan." Jurnal Sistem Komputer dan Informatika (JSON) 1, no. 3 (2020): 247. http://dx.doi.org/10.30865/json.v1i3.2185.

Abstract:
Learning media are important for ATKP Medan to continuously improve the quality of learning. The problem often encountered in avionic learning is the limited resources available, because to access avionic learning the cadets must do so from the CBT LAB and cannot do so from elsewhere. This is because the Avionic software is only installed inside the lab and cannot be studied from outside the lab. The purpose of this research is to improve the digital avionic-Radio Theory II learning process, which is packaged in multimedia animation to make it easier for cadets to learn avionic Radi
18

Amari, Shun-ichi, and Noboru Murata. "Statistical Theory of Learning Curves under Entropic Loss Criterion." Neural Computation 5, no. 1 (1993): 140–53. http://dx.doi.org/10.1162/neco.1993.5.1.140.

Abstract:
The present paper elucidates a universal property of learning curves, which shows how the generalization error, training error, and the complexity of the underlying stochastic machine are related and how the behavior of a stochastic machine is improved as the number of training examples increases. The error is measured by the entropic loss. It is proved that the generalization error converges to H0, the entropy of the conditional distribution of the true machine, as H0 + m*/(2t), while the training error converges as H0 - m*/(2t), where t is the number of examples and m* shows the complexity o
19

Sakai, Y., and A. Maruoka. "Learning Monotone Log-Term DNF Formulas under the Uniform Distribution." Theory of Computing Systems 33, no. 1 (2000): 17–33. http://dx.doi.org/10.1007/s002249910002.

20

Danju, İpek, Burak Demir, Birce Birsel Çağlar, Cagla Deniz Özçelik, Elif Karaagac Coruhlu, and Seral Özturan. "Comparative content analysis of studies on new approaches in education." Laplage em Revista 6, Extra-C (2020): 128–42. http://dx.doi.org/10.24115/s2446-622020206extra-c635p.128-142.

Abstract:
The aim of this article is to analyze the content of the articles published in Google Scholar on new approaches in education. These approaches are multiple intelligence theory, constructivism, social learning theory, and style-oriented and scenario-based learning. The distribution of the analyzed studies according to the years of publication, the languages they were published in, the disciplines, the methods, data collection tools, the distribution of the participants according to their gender, the distribution of the participants according to their occupational characteristics, the distribution by
21

Lu. "Semantic Information G Theory and Logical Bayesian Inference for Machine Learning." Information 10, no. 8 (2019): 261. http://dx.doi.org/10.3390/info10080261.

Abstract:
An important problem in machine learning is that, when using more than two labels, it is very difficult to construct and optimize a group of learning functions that are still useful when the prior distribution of instances is changed. To resolve this problem, semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms are combined to form a systematic solution. A semantic channel in G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and Logistic functions
22

Campeau, Anthony G. "Distribution of Learning Styles and Preferences for Learning Environment Characteristics Among Emergency Medical Care Assistants (EMCAs) in Ontario, Canada." Prehospital and Disaster Medicine 13, no. 1 (1998): 47–54. http://dx.doi.org/10.1017/s1049023x00033033.

Abstract:
Introduction: In Ontario, Canada, Emergency Medical Care Assistants (EMCAs) have many opportunities for continuing education. However, little is known about how EMCAs learn. Objectives: The intent of this study was to explore the distribution of learning styles, preferences for major learning environment characteristics, and the associations between these two factors among the EMCA population in Ontario, Canada. Methods: Following review of the literature, a 32-item survey of learning environment characteristics was constructed to measure the respondents' preferences. Using a random number
23

Vos, Hans J. "Applications of Bayesian Decision Theory to Sequential Mastery Testing." Journal of Educational and Behavioral Statistics 24, no. 3 (1999): 271–92. http://dx.doi.org/10.3102/10769986024003271.

Abstract:
The purpose of this paper is to formulate optimal sequential rules for mastery tests. The framework for the approach is derived from Bayesian sequential decision theory. Both a threshold and linear loss structure are considered. The binomial probability distribution is adopted as the psychometric model involved. Conditions sufficient for sequentially setting optimal cutting scores are presented. Optimal sequential rules will be derived for the case of a subjective beta distribution representing prior true level of functioning. An empirical example of sequential mastery testing for concept-learn
24

Notsu, Akira, Seiki Ubukata, and Katsuhiro Honda. "Beta Distribution Propagating Reinforcement Learning Based on Prospect Theory for the Efficient Exploration and Exploitation." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 29, no. 1 (2017): 507–16. http://dx.doi.org/10.3156/jsoft.29.1_507.

25

Zhou, Yu-Hang, and Zhi-Hua Zhou. "Large Margin Distribution Learning with Cost Interval and Unlabeled Data." IEEE Transactions on Knowledge and Data Engineering 28, no. 7 (2016): 1749–63. http://dx.doi.org/10.1109/tkde.2016.2535283.

26

Wang, Zengmao, Bo Du, Weiping Tu, Lefei Zhang, and Dacheng Tao. "Incorporating Distribution Matching into Uncertainty for Multiple Kernel Active Learning." IEEE Transactions on Knowledge and Data Engineering 33, no. 1 (2021): 128–42. http://dx.doi.org/10.1109/tkde.2019.2923211.

27

Galvani, Marta, Chiara Bardelli, Silvia Figini, and Pietro Muliere. "A Bayesian Nonparametric Learning Approach to Ensemble Models Using the Proper Bayesian Bootstrap." Algorithms 14, no. 1 (2021): 11. http://dx.doi.org/10.3390/a14010011.

Abstract:
Bootstrap resampling techniques, introduced by Efron and Rubin, can be presented in a general Bayesian framework, approximating the statistical distribution of a statistical functional ϕ(F), where F is a random distribution function. Efron’s and Rubin’s bootstrap procedures can be extended, introducing an informative prior through the Proper Bayesian bootstrap. In this paper different bootstrap techniques are used and compared in predictive classification and regression models based on ensemble approaches, i.e., bagging models involving decision trees. Proper Bayesian bootstrap, proposed by Mu
28

Chen, Fen, Bin Zou, and Na Chen. "The consistency of least-square regularized regression with negative association sequence." International Journal of Wavelets, Multiresolution and Information Processing 16, no. 03 (2018): 1850019. http://dx.doi.org/10.1142/s0219691318500194.

Abstract:
In the last few years, many works in learning theory have stepped beyond the classical assumption that samples are independent and identically distributed, investigating learning performance based on non-independent samples, such as mixing sequences (e.g., [Formula: see text]-mixing, [Formula: see text]-mixing, [Formula: see text]-mixing, etc.), and have derived results similar to those based on the classical sampling assumption. A negative association (NA) sequence is a kind of significantly dependent random variables and plays an important role in non-independent sequences. It is widely applied to
29

Wang, Wei, Hao Wang, Chen Zhang, and Yang Gao. "Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory." Neural Computation 30, no. 3 (2018): 820–55. http://dx.doi.org/10.1162/neco_a_01053.

Abstract:
Learning an appropriate distance metric plays a substantial role in the success of many learning machines. Conventional metric learning algorithms have limited utility when the training and test samples are drawn from related but different domains (i.e., source domain and target domain). In this letter, we propose two novel metric learning algorithms for domain adaptation in an information-theoretic setting, allowing for discriminating power transfer and standard learning machine propagation across two domains. In the first one, a cross-domain Mahalanobis distance is learned by combining three
30

Krause, Paul J. "Learning probabilistic networks." Knowledge Engineering Review 13, no. 4 (1999): 321–51. http://dx.doi.org/10.1017/s0269888998004019.

Abstract:
A probabilistic network is a graphical model that encodes probabilistic relationships between variables of interest. Such a model records qualitative influences between variables in addition to the numerical parameters of the probability distribution. As such it provides an ideal form for combining prior knowledge, which might be limited solely to experience of the influences between some of the variables of interest, and data. In this paper, we first show how data can be used to revise initial estimates of the parameters of a model. We then progress to showing how the structure of the model c
31

Lee, Jonathan N., Michael Laskey, Ajay Kumar Tanwani, Anil Aswani, and Ken Goldberg. "Dynamic regret convergence analysis and an adaptive regularization algorithm for on-policy robot imitation learning." International Journal of Robotics Research 40, no. 10-11 (2021): 1284–305. http://dx.doi.org/10.1177/0278364920985879.

Abstract:
On-policy imitation learning algorithms such as DAgger evolve a robot control policy by executing it, measuring performance (loss), obtaining corrective feedback from a supervisor, and generating the next policy. As the loss between iterations can vary unpredictably, a fundamental question is under what conditions this process will eventually achieve a converged policy. If one assumes the underlying trajectory distribution is static (stationary), it is possible to prove convergence for DAgger. However, in more realistic models for robotics, the underlying trajectory distribution is dynamic bec
32

Wang, Shangfei, Guozhu Peng, and Zhuangqiang Zheng. "Capturing Joint Label Distribution for Multi-Label Classification Through Adversarial Learning." IEEE Transactions on Knowledge and Data Engineering 32, no. 12 (2020): 2310–21. http://dx.doi.org/10.1109/tkde.2019.2922603.

33

Beauchamp, Guy. "Learning Rules for Social Foragers: Implications for the Producer–Scrounger Game and Ideal Free Distribution Theory." Journal of Theoretical Biology 207, no. 1 (2000): 21–35. http://dx.doi.org/10.1006/jtbi.2000.2153.

34

Fan, Ying, Shun Kun Wang, Feng Zhou, Zhi Cheng Tian, and Guang Shuai Ding. "Parameter Estimation for Small Sample Censored Data Based on SVM." Advanced Materials Research 145 (October 2010): 31–36. http://dx.doi.org/10.4028/www.scientific.net/amr.145.31.

Abstract:
It is difficult to identify distribution types and to estimate parameters of the distribution for small sample censored data when you deal with mechanical equipment reliability analysis. Here, an intelligent distribution identification model was established based on statistical learning theory and the algorithm of multi-element classifier of Support Vector Machine (SVM), and also applied to parameter estimation of small sample censored data, in order to improve the precision of traditional method. Firstly, the algorithm of training based on SVM and the RBF kernel function was selected; secondl
35

Puteh, Nurnasran, et al. "Sentiment Analysis with Deep Learning: A Bibliometric Review." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (2021): 1509–19. http://dx.doi.org/10.17762/turcomat.v12i3.952.

Abstract:
Sentiment analysis is an active area of research in the natural language processing field. Prior research indicates numerous techniques have been used to perform sentiment classification tasks, including machine learning approaches. Deep learning is a specific type of machine learning that has been successfully applied in various fields such as computer vision and various NLP tasks including sentiment analysis. This paper attempts to provide a bibliometric analysis of academic literature related to sentiment analysis with deep learning methods which were retrieved from Scopus until the t
36

Balasubramanian, Vijay. "Statistical Inference, Occam's Razor, and Statistical Mechanics on the Space of Probability Distributions." Neural Computation 9, no. 2 (1997): 349–68. http://dx.doi.org/10.1162/neco.1997.9.2.349.

Abstract:
The task of parametric model selection is cast in terms of a statistical mechanics on the space of probability distributions. Using the techniques of low-temperature expansions, I arrive at a systematic series for the Bayesian posterior probability of a model family that significantly extends known results in the literature. In particular, I arrive at a precise understanding of how Occam's razor, the principle that simpler models should be preferred until the data justify more complex models, is automatically embodied by probability theory. These results require a measure on the space of model
37

Febrilia, Baiq Rika Ayu. "Pembelajaran Distribusi Poisson dan Penerapannya dalam Kehidupan Sehari-hari." Jurnal Didaktik Matematika 4, no. 1 (2017): 1–14. http://dx.doi.org/10.24815/jdm.v4i1.7610.

Abstract:
The Mathematical Statistics learning process that integrates theoretical understanding with practice can enhance students' insight into the application of what they have learned (theoretically) in the classroom into everyday life. Theoretical learning will create a boring learning environment. Students have no idea of the use of the theory they have acquired. Based on these reasons, it is necessary to design a learning activity that can help students to apply their understanding of the theory they have acquired. This research describes the learning process of Poisson distribution which is desi
38

Wallis, Guy, and Roland Baddeley. "Optimal, Unsupervised Learning in Invariant Object Recognition." Neural Computation 9, no. 4 (1997): 883–94. http://dx.doi.org/10.1162/neco.1997.9.4.883.

Abstract:
A means for establishing transformation-invariant representations of objects is proposed and analyzed, in which different views are associated on the basis of the temporal order of the presentation of these views, as well as their spatial similarity. Assuming knowledge of the distribution of presentation times, an optimal linear learning rule is derived. Simulations of a competitive network trained on a character recognition task are then used to highlight the success of this learning rule in relation to simple Hebbian learning and to show that the theory can give accurate quantitative predict
39

Quintián, Héctor, and Emilio Corchado. "Beta Hebbian Learning as a New Method for Exploratory Projection Pursuit." International Journal of Neural Systems 27, no. 06 (2017): 1750024. http://dx.doi.org/10.1142/s0129065717500241.

Abstract:
In this research, a novel family of learning rules called Beta Hebbian Learning (BHL) is thoroughly investigated to extract information from high-dimensional datasets by projecting the data onto low-dimensional (typically two dimensional) subspaces, improving the existing exploratory methods by providing a clear representation of data’s internal structure. BHL applies a family of learning rules derived from the Probability Density Function (PDF) of the residual based on the beta distribution. This family of rules may be called Hebbian in that all use a simple multiplication of the output of th
40

Sampson, Thomas. "Dynamic Selection: An Idea Flows Theory of Entry, Trade, and Growth *." Quarterly Journal of Economics 131, no. 1 (2015): 315–80. http://dx.doi.org/10.1093/qje/qjv032.

Abstract:
This article develops an idea flows theory of trade and growth with heterogeneous firms. Entrants learn from incumbent firms, and the diffusion technology is such that learning depends not on the frontier technology, but on the entire distribution of productivity. By shifting the productivity distribution upward, selection causes technology diffusion, and in equilibrium this dynamic selection process leads to endogenous growth without scale effects. On the balanced growth path, the productivity distribution is a traveling wave with a lower bound that increases over time. The free entr
41

Zhao, Qianying, and Jingyang Jiang. "Verb valency in interlanguage: An extension to valency theory and new perspective on L2 learning." Poznan Studies in Contemporary Linguistics 56, no. 2 (2020): 339–63. http://dx.doi.org/10.1515/psicl-2020-0010.

Abstract:
Valency theory has been applied to investigate various languages, such as German, Chinese and English. However, most studies in this field were based on the linguistic materials produced by native speakers. The current research aimed to examine the valency structures in the interlanguage. Based on the English writing produced by L2 Chinese learners, we adopted the quantitative approach, trying to find out whether the distributional features of verb valency in the interlanguage also had regular probability distributions as those in the native languages, and whether there was a relations
42

Shin, Kyulee, and Jin Seo Cho. "Testing for Neglected Nonlinearity Using Extreme Learning Machines." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, supp02 (2013): 117–29. http://dx.doi.org/10.1142/s0218488513400205.

Abstract:
We introduce a statistic testing for neglected nonlinearity using extreme learning machines and call it ELMNN test. The ELMNN test is very convenient and can be widely applied because it is obtained as a by-product of estimating linear models. For the proposed test statistic, we provide a set of regularity conditions under which it asymptotically follows a chi-squared distribution under the null. We conduct Monte Carlo experiments and examine how it behaves when the sample size is finite. Our experiment shows that the test exhibits the properties desired by our theory.
43

De Franco, Carmine, Johann Nicolle, and Huyên Pham. "Bayesian Learning for the Markowitz Portfolio Selection Problem." International Journal of Theoretical and Applied Finance 22, no. 07 (2019): 1950037. http://dx.doi.org/10.1142/s0219024919500377.

Abstract:
We study the Markowitz portfolio selection problem with unknown drift vector in the multi-dimensional framework. The prior belief on the uncertain expected rate of return is modeled by an arbitrary probability law, and a Bayesian approach from filtering theory is used to learn the posterior distribution about the drift given the observed market data of the assets. The Bayesian Markowitz problem is then embedded into an auxiliary standard control problem that we characterize by a dynamic programming method and prove the existence and uniqueness of a smooth solution to the related semi-linear pa
44

Ueda, Yoshihiro, Hitoshi Narita, Naotaka Kato, Katsuaki Hayashi, Hidetaka Nambo, and Haruhiko Kimura. "An automatic email distribution by using text mining and reinforcement learning." Systems and Computers in Japan 37, no. 12 (2006): 82–95. http://dx.doi.org/10.1002/scj.20387.

45

Sinaga, Juster Donal, and Kristina Betty Artati. "Experiential learning theory (ELT)-based classical guidance model to improve responsible character." SCHOULID: Indonesian Journal of School Counseling 2, no. 1 (2017): 14. http://dx.doi.org/10.23916/008621833-00-0.

Full text
Abstract:
This research aims to: (1) determine the improvement in students' responsible character before and after character education delivered through a classical guidance service using the Experiential Learning Theory (ELT) approach; (2) determine the effectiveness of that ELT-based character education. This is a quantitative study using a pre-experimental one-group pretest-posttest design. Data were collected with a Responsible Character Questionnaire with a Cronbach's alpha reliability of 0.788. The subjects were 30 students of class VII A, batch 2014-2015, in Kani
APA, Harvard, Vancouver, ISO, and other styles
46

Shao, X. Y., Jun Wu, Ya Qiong Lv, and Chao Deng. "Reliability Assessment Methods of Complicated Mechanical Product Based on Statistical Learning Theory." Advanced Materials Research 44-46 (June 2008): 575–80. http://dx.doi.org/10.4028/www.scientific.net/amr.44-46.575.

Full text
Abstract:
Because system-level reliability test data for complicated mechanical products are scarce, and the exact composition of the life-distribution units is difficult to determine, the traditional reliability evaluation method based on evolutionary theory is of little use. Statistical Learning Theory has begun to attract wide attention as a novel small-sample statistical method, having mostly been applied to pattern recognition, fault detection, time-series prediction, and so on. This paper creates a new method for reliability evaluation derived from Statistical Learning
APA, Harvard, Vancouver, ISO, and other styles
47

Mishra, Akshansh, and Tarushi Pathak. "Estimation of Grain Size Distribution of Friction Stir Welded Joint by using Machine Learning Approach." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 10, no. 1 (2020): 99–110. http://dx.doi.org/10.14201/adcaij202110199110.

Full text
Abstract:
Machine learning has spread widely in the areas of pattern recognition, prediction and forecasting, cognitive game theory, and bioinformatics. More recently, machine learning has been introduced into the manufacturing and materials industries for developing new materials and simulating the manufacturing of required products. In the present paper, a machine learning algorithm is developed in Python to determine the grain size distribution in the microstructure of the stir-zone seam of a friction-stir-welded magnesium AZ31B alloy plate. The grain size parameters such as
APA, Harvard, Vancouver, ISO, and other styles
48

Boersma, Paul, and Bruce Hayes. "Empirical Tests of the Gradual Learning Algorithm." Linguistic Inquiry 32, no. 1 (2001): 45–86. http://dx.doi.org/10.1162/002438901554586.

Full text
Abstract:
The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky (1993, 1996, 1998, 2000), which initiated the learnability research program for Optimality Theory. We argue that the Gradual Learning Algorithm has a number of special advantages: it can learn free variation, deal effectively with noisy learning data, and account for gradient well-formedness j
APA, Harvard, Vancouver, ISO, and other styles
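The Gradual Learning Algorithm summarized above maintains a real-valued ranking value per constraint, adds noise at evaluation time (which yields free variation), and nudges rankings by a small plasticity step when the learner's output mismatches the observed form. A minimal sketch of that update rule, assuming a symmetric plasticity step; the constraint names and data are invented for illustration, not drawn from the article.

```python
import random

def gla_update(ranking, winner_viols, loser_viols, plasticity=0.1):
    """One GLA step: winner = observed correct form, loser = learner's error.

    Constraints violated more by the winner (they favour the wrong output)
    are demoted; constraints violated more by the loser are promoted.
    """
    for c in ranking:
        if winner_viols.get(c, 0) > loser_viols.get(c, 0):
            ranking[c] -= plasticity   # constraint penalises the correct form
        elif winner_viols.get(c, 0) < loser_viols.get(c, 0):
            ranking[c] += plasticity   # constraint penalises the error
    return ranking

def evaluate(ranking, noise=2.0):
    """Noisy copy of the ranking used to pick a winner (stochastic evaluation)."""
    return {c: v + random.gauss(0.0, noise) for c, v in ranking.items()}

ranking = {"*COMPLEX": 100.0, "MAX": 100.0}
# The learner keeps hearing forms showing that MAX must outrank *COMPLEX.
for _ in range(200):
    ranking = gla_update(ranking, winner_viols={"*COMPLEX": 1},
                         loser_viols={"MAX": 1})
noisy = evaluate(ranking)  # noisy ranking for one evaluation
```

Repeated small steps gradually separate the two ranking values, while the evaluation noise lets nearly tied constraints produce variable outputs.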
49

SMALE, STEVE, and DING-XUAN ZHOU. "ONLINE LEARNING WITH MARKOV SAMPLING." Analysis and Applications 07, no. 01 (2009): 87–113. http://dx.doi.org/10.1142/s0219530509001293.

Full text
Abstract:
This paper attempts to give an extension of learning theory to a setting where the assumption of i.i.d. data is weakened by keeping the independence but abandoning the identical restriction. We hypothesize that a sequence of examples (xt, yt) in X × Y for t = 1, 2, 3,… is drawn from a probability distribution ρt on X × Y. The marginal probabilities on X are supposed to converge to a limit probability on X. Two main examples for this time process are discussed. The first is a stochastic one which in the special case of a finite space X is defined by a stochastic matrix and more generally by a s
APA, Harvard, Vancouver, ISO, and other styles
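The Markov-sampling setting described above can be illustrated with a toy online learner: examples come from a two-state stochastic matrix rather than i.i.d. draws, and a least-squares hypothesis is updated one example at a time with a decaying step size. This is our own illustrative sketch under those assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stochastic matrix driving the (dependent, non-i.i.d.) sampling of states.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
xs = np.array([0.0, 1.0])   # finite state space X

def f_true(x):
    return 2.0 * x + 1.0    # target regression function

w, b = 0.0, 0.0             # linear hypothesis f(x) = w * x + b
state = 0
for t in range(1, 5001):
    state = rng.choice(2, p=P[state])   # Markov sampling step
    x = xs[state]
    y = f_true(x) + rng.normal(scale=0.05)
    eta = 1.0 / t ** 0.6                # decaying learning rate
    err = (w * x + b) - y
    w -= eta * err * x                  # online least-squares update
    b -= eta * err
```

Even though consecutive examples are correlated through the chain, the chain mixes, so the online iterates still approach the regression function, which is the kind of guarantee the paper establishes.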
50

Jing, Yun, Si-Ye Guo, Xuan Wang, and Fang-Qiu Chen. "Research on Coordinated Development of a Railway Freight Collection and Distribution System Based on an “Entropy-TOPSIS Coupling Development Degree Model” Integrated with Machine Learning." Journal of Advanced Transportation 2020 (September 15, 2020): 1–14. http://dx.doi.org/10.1155/2020/8885808.

Full text
Abstract:
In recent years, with the gradual networking of high-speed railways in China, existing railway transportation capacity has been released. To improve transportation capacity, railway freight transportation enterprises have gradually shifted the transportation of goods from dedicated freight lines to passenger-cargo lines. In terms of the organization of collection and distribution, China has a complete research system for heavy-haul railway collection and distribution, but research on the integration of collection and distribution of the ordinary-speed railway fr
APA, Harvard, Vancouver, ISO, and other styles