Journal articles on the topic "Neural Language Model"

Below are the top 50 journal articles for research on the topic "Neural Language Model".


1

Emami, Ahmad, and Frederick Jelinek. "A Neural Syntactic Language Model". Machine Learning 60, no. 1-3 (June 2, 2005): 195–227. http://dx.doi.org/10.1007/s10994-005-0916-y.
2

Buckman, Jacob, and Graham Neubig. "Neural Lattice Language Models". Transactions of the Association for Computational Linguistics 6 (December 2018): 529–41. http://dx.doi.org/10.1162/tacl_a_00036.

Abstract:
In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models. These models construct a lattice of possible paths through a sentence and marginalize across this lattice to calculate sequence probabilities or optimize parameters. This approach allows us to seamlessly incorporate linguistic intuitions — including polysemy and the existence of multiword lexical items — into our language model. Experiments on multiple language modeling tasks show that English neural lattice language models that utilize polysemous embeddings are able to improve perplexity by 9.95% relative to a word-level baseline, and that a Chinese model that handles multi-character tokens is able to improve perplexity by 20.94% relative to a character-level baseline.
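To make the marginalization step concrete, here is a minimal sketch (not the authors' code) of summing over all segmentation paths of a toy lattice with a forward pass in log space; the edge probabilities are placeholders, whereas a real lattice language model conditions each edge on its left context.

```python
import math
from collections import defaultdict

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def lattice_log_prob(edges, n_positions):
    """Marginalize over all paths through the lattice (forward algorithm).
    edges: (start, end, log_prob) spans between token positions; a path
    from position 0 to n_positions is one segmentation of the sentence."""
    alpha = defaultdict(list)   # log-scores of partial paths reaching a node
    alpha[0] = [0.0]            # the empty prefix has log-probability 0
    for start in range(n_positions):
        if not alpha[start]:
            continue
        a = logsumexp(alpha[start])   # total mass arriving at this node
        for s, e, lp in edges:
            if s == start:
                alpha[e].append(a + lp)
    return logsumexp(alpha[n_positions])

# Toy lattice for "new york times": word-by-word path vs. a multiword unit.
edges = [
    (0, 1, math.log(0.2)),    # "new"
    (1, 2, math.log(0.1)),    # "york"
    (0, 2, math.log(0.05)),   # "new york" as one lexical item
    (2, 3, math.log(0.3)),    # "times"
]
print(lattice_log_prob(edges, 3))   # log(0.2*0.1*0.3 + 0.05*0.3) = log(0.021)
```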
3

Zhang, Yike, Pengyuan Zhang, and Yonghong Yan. "Tailoring an Interpretable Neural Language Model". IEEE/ACM Transactions on Audio, Speech, and Language Processing 27, no. 7 (July 2019): 1164–78. http://dx.doi.org/10.1109/taslp.2019.2913087.
4

Kunchukuttan, Anoop, Mitesh Khapra, Gurneet Singh, and Pushpak Bhattacharyya. "Leveraging Orthographic Similarity for Multilingual Neural Transliteration". Transactions of the Association for Computational Linguistics 6 (December 2018): 303–16. http://dx.doi.org/10.1162/tacl_a_00022.

Abstract:
We address the task of joint training of transliteration models for multiple language pairs (multilingual transliteration). This is an instance of multitask learning, where individual tasks (language pairs) benefit from sharing knowledge with related tasks. We focus on transliteration involving related tasks, i.e., languages sharing writing systems and phonetic properties (orthographically similar languages). We propose a modified neural encoder-decoder model that maximizes parameter sharing across language pairs in order to effectively leverage orthographic similarity. We show that multilingual transliteration significantly outperforms bilingual transliteration in different scenarios (average increase of 58% across a variety of languages we experimented with). We also show that multilingual transliteration models can generalize well to languages/language pairs not encountered during training and hence perform well on the zero-shot transliteration task. We show that further improvements can be achieved by using phonetic feature input.
5

Tang, Zhiyuan, Dong Wang, Yixiang Chen, Lantian Li, and Andrew Abel. "Phonetic Temporal Neural Model for Language Identification". IEEE/ACM Transactions on Audio, Speech, and Language Processing 26, no. 1 (January 2018): 134–44. http://dx.doi.org/10.1109/taslp.2017.2764271.
6

Souri, Adnan, Mohammed Al Achhab, Badr Eddine Elmohajir, and Abdelali Zbakh. "Neural network dealing with Arabic language". International Journal of Informatics and Communication Technology (IJ-ICT) 9, no. 2 (August 1, 2020): 73. http://dx.doi.org/10.11591/ijict.v9i2.pp73-82.

Abstract:
Artificial Neural Networks have proved their efficiency in a large number of research domains. In this paper, we have applied Artificial Neural Networks to Arabic text to prove correct language modeling, text generation, and missing text prediction. On the one hand, we have adapted Recurrent Neural Network architectures to model the Arabic language in order to generate correct Arabic sequences. On the other hand, Convolutional Neural Networks have been parameterized, based on some specific features of Arabic, to predict missing text in Arabic documents. We have demonstrated the power of our adapted models in generating and predicting correct Arabic text compared to the standard model. The models were trained and tested on known free Arabic datasets. Results have been promising, with sufficient accuracy.
7

Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.

Abstract:
Cross-lingual natural language inference is a fundamental task in cross-lingual natural language understanding, widely addressed by neural models recently. Existing neural model based methods either align sentence embeddings between source and target languages, heavily relying on annotated parallel corpora, or exploit pre-trained cross-lingual language models that are fine-tuned on a single language and hard to transfer knowledge to another language. To resolve these limitations in existing methods, this paper proposes an adversarial training framework to enhance both pre-trained models and classical neural models for cross-lingual natural language inference. It trains on the union of data in the source language and data in the target language, learning language-invariant features to improve the inference performance. Experimental results on the XNLI benchmark demonstrate that three popular neural models enhanced by the proposed framework significantly outperform the original models.
8

Ferreira, Pedro M., Diogo Pernes, Ana Rebelo, and Jaime S. Cardoso. "Signer-Independent Sign Language Recognition with Adversarial Neural Networks". International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 121–29. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1024.

Abstract:
Sign Language Recognition (SLR) has become an appealing topic in modern societies because such technology can ideally be used to bridge the gap between deaf and hearing people. Although important steps have been made towards the development of real-world SLR systems, signer-independent SLR is still one of the bottleneck problems of this research field. In this regard, we propose a deep neural network along with an adversarial training objective, specifically designed to address the signer-independent problem. Specifically, the proposed model consists of an encoder, mapping from input images to latent representations, and two classifiers operating on these underlying representations: (i) the sign-classifier, for predicting the class/sign labels, and (ii) the signer-classifier, for predicting their signer identities. During the learning stage, the encoder is simultaneously trained to help the sign-classifier as much as possible while trying to fool the signer-classifier. This adversarial training procedure allows learning signer-invariant latent representations that are in fact highly discriminative for sign recognition. Experimental results demonstrate the effectiveness of the proposed model and its capability of dealing with the large inter-signer variations.
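The min-max training described above, an encoder that helps the sign classifier while fooling the signer classifier, is commonly implemented with a gradient-reversal layer as in domain-adversarial networks; the PyTorch sketch below shows one training step of that construction with made-up layer sizes and dummy data, and the paper may realize the objective differently (e.g., with alternating updates).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign going backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Made-up sizes: 64x64 grayscale crops, 10 signs, 5 training signers.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
sign_clf = nn.Linear(128, 10)    # task head: which sign
signer_clf = nn.Linear(128, 5)   # adversary head: which signer (to be fooled)
opt = torch.optim.Adam([*encoder.parameters(), *sign_clf.parameters(),
                        *signer_clf.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 64, 64)            # dummy batch of sign images
sign_y = torch.randint(0, 10, (8,))
signer_y = torch.randint(0, 5, (8,))

z = encoder(x)
# Minimizing this loss trains both heads normally, but the reversed gradient
# pushes the encoder to make signer identity unpredictable from z.
loss = ce(sign_clf(z), sign_y) + ce(signer_clf(GradReverse.apply(z, 1.0)), signer_y)
opt.zero_grad()
loss.backward()
opt.step()
```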
9

Takahashi, Shuntaro, and Kumiko Tanaka-Ishii. "Evaluating Computational Language Models with Scaling Properties of Natural Language". Computational Linguistics 45, no. 3 (September 2019): 481–513. http://dx.doi.org/10.1162/coli_a_00355.

Abstract:
In this article, we evaluate computational models of natural language with respect to the universal statistical behaviors of natural language. Statistical mechanical analyses have revealed that natural language text is characterized by scaling properties, which quantify the global structure in the vocabulary population and the long memory of a text. We study whether five scaling properties (given by Zipf’s law, Heaps’ law, Ebeling’s method, Taylor’s law, and long-range correlation analysis) can serve for evaluation of computational models. Specifically, we test n-gram language models, a probabilistic context-free grammar, language models based on Simon/Pitman-Yor processes, neural language models, and generative adversarial networks for text generation. Our analysis reveals that language models based on recurrent neural networks with a gating mechanism (i.e., long short-term memory; a gated recurrent unit; and quasi-recurrent neural networks) are the only computational models that can reproduce the long memory behavior of natural language. Furthermore, through comparison with recently proposed model-based evaluation methods, we find that the exponent of Taylor’s law is a good indicator of model quality.
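Two of these scaling properties can be measured on any token sequence. The sketch below estimates the Zipf rank-frequency exponent (about -1 for natural text) and the Taylor's-law exponent (0.5 for shuffled text, larger for natural text) with plain least-squares fits in log-log space; the windowing and fitting details are simplifications, not the authors' exact procedure.

```python
import numpy as np
from collections import Counter

def zipf_exponent(tokens):
    """Slope of log(frequency) vs. log(rank) over the vocabulary."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope

def taylor_exponent(tokens, window=1000):
    """Fit sigma ~ mu^alpha over per-word counts in fixed-size windows.
    Needs a corpus long enough to yield many windows."""
    windows = [tokens[i:i + window]
               for i in range(0, len(tokens) - window + 1, window)]
    mu, sigma = [], []
    for w in set(tokens):
        c = np.array([win.count(w) for win in windows], dtype=float)
        mu.append(c.mean())
        sigma.append(c.std())
    mu, sigma = np.array(mu), np.array(sigma)
    keep = (mu > 0) & (sigma > 0)   # drop words with no spread to fit
    alpha, _ = np.polyfit(np.log(mu[keep]), np.log(sigma[keep]), 1)
    return alpha

tokens = open("corpus.txt").read().split()   # placeholder: any long plain text
print(zipf_exponent(tokens), taylor_exponent(tokens))
```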
10

Karrupusamy, P. "Analysis of Neural Network Based Language Modeling". Journal of Artificial Intelligence and Capsule Networks 2, no. 1 (March 30, 2020): 53–63. http://dx.doi.org/10.36548/jaicn.2020.1.006.

Abstract:
Language modelling, usually referred to as statistical language modelling, is a fundamental and core process of natural language processing. It is also vital to tasks such as sentence completion, automatic speech recognition, statistical machine translation, and text generation. The success of viable natural language processing relies on the quality of the language modelling. Over the years, research fields such as linguistics, psychology, speech recognition, data compression, neuroscience, and machine translation have contributed to it. As neural networks are a very good choice for quality language modelling, the paper presents an analysis of neural networks for modelling language. Using datasets such as the Penn Treebank, the Billion Word Benchmark, and WikiText, the neural network models are evaluated on word error rate, perplexity, and bilingual evaluation understudy (BLEU) scores to identify the optimal model.
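Since perplexity is the paper's main comparison metric, a compact definition in code: the exponential of the average negative log-probability a model assigns to each token.

```python
import math

def perplexity(token_log_probs):
    """exp of the average negative natural-log probability per token."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# A model that gives each of 4 tokens probability 0.25 has perplexity 4.
print(perplexity([math.log(0.25)] * 4))   # -> 4.0
```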
11

Karrupusamy, P. "Analysis of Neural Network Based Language Modeling". Journal of Artificial Intelligence and Capsule Networks 2, no. 1 (March 30, 2020): 53–63. http://dx.doi.org/10.36548/jaicn.2020.3.006.

Abstract:
Language modelling, usually referred to as statistical language modelling, is a fundamental and core process of natural language processing. It is also vital to tasks such as sentence completion, automatic speech recognition, statistical machine translation, and text generation. The success of viable natural language processing relies on the quality of the language modelling. Over the years, research fields such as linguistics, psychology, speech recognition, data compression, neuroscience, and machine translation have contributed to it. As neural networks are a very good choice for quality language modelling, the paper presents an analysis of neural networks for modelling language. Using datasets such as the Penn Treebank, the Billion Word Benchmark, and WikiText, the neural network models are evaluated on word error rate, perplexity, and bilingual evaluation understudy (BLEU) scores to identify the optimal model.
12

Ananthanarayana, Tejaswini, Priyanshu Srivastava, Akash Chintha, Akhil Santha, Brian Landy, Joseph Panaro, Andre Webster, et al. "Deep Learning Methods for Sign Language Translation". ACM Transactions on Accessible Computing 14, no. 4 (December 31, 2021): 1–30. http://dx.doi.org/10.1145/3477498.

Abstract:
Many sign languages are bona fide natural languages with grammatical rules and lexicons hence can benefit from machine translation methods. Similarly, since sign language is a visual-spatial language, it can also benefit from computer vision methods for encoding it. With the advent of deep learning methods in recent years, significant advances have been made in natural language processing (specifically neural machine translation) and in computer vision methods (specifically image and video captioning). Researchers have therefore begun expanding these learning methods to sign language understanding. Sign language interpretation is especially challenging, because it involves a continuous visual-spatial modality where meaning is often derived based on context. The focus of this article, therefore, is to examine various deep learning–based methods for encoding sign language as inputs, and to analyze the efficacy of several machine translation methods, over three different sign language datasets. The goal is to determine which combinations are sufficiently robust for sign language translation without any gloss-based information. To understand the role of the different input features, we perform ablation studies over the model architectures (input features + neural translation models) for improved continuous sign language translation. These input features include body and finger joints, facial points, as well as vector representations/embeddings from convolutional neural networks. The machine translation models explored include several baseline sequence-to-sequence approaches, more complex and challenging networks using attention, reinforcement learning, and the transformer model. We implement the translation methods over multiple sign languages—German (GSL), American (ASL), and Chinese sign languages (CSL). From our analysis, the transformer model combined with input embeddings from ResNet50 or pose-based landmark features outperformed all the other sequence-to-sequence models by achieving higher BLEU2-BLEU4 scores when applied to the controlled and constrained GSL benchmark dataset. These combinations also showed significant promise on the other less controlled ASL and CSL datasets.
13

Johnson, Melvin, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, et al. "Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation". Transactions of the Association for Computational Linguistics 5 (December 2017): 339–51. http://dx.doi.org/10.1162/tacl_a_00065.

Abstract:
We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hint at a universal interlingua representation in our models and also show some interesting examples when mixing languages.
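The paper's central change is input preprocessing rather than architecture; a sketch of the idea, using the <2xx> target-token convention the paper reports:

```python
def add_target_token(source_sentence, target_lang):
    """Prefix the source with an artificial token naming the target language,
    so one shared model can be asked for any output language."""
    return f"<2{target_lang}> {source_sentence}"

for tgt in ("es", "ja"):
    print(add_target_token("Hello, how are you?", tgt))
# <2es> Hello, how are you?
# <2ja> Hello, how are you?
```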
14

Demeter, David, and Doug Downey. "Just Add Functions: A Neural-Symbolic Language Model". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7634–42. http://dx.doi.org/10.1609/aaai.v34i05.6264.

Abstract:
Neural network language models (NNLMs) have achieved ever-improving accuracy due to more sophisticated architectures and increasing amounts of training data. However, the inductive bias of these models (formed by the distributional hypothesis of language), while ideally suited to modeling most running text, results in key limitations for today's models. In particular, the models often struggle to learn certain spatial, temporal, or quantitative relationships, which are commonplace in text and are second-nature for human readers. Yet, in many cases, these relationships can be encoded with simple mathematical or logical expressions. How can we augment today's neural models with such encodings? In this paper, we propose a general methodology to enhance the inductive bias of NNLMs by incorporating simple functions into a neural architecture to form a hierarchical neural-symbolic language model (NSLM). These functions explicitly encode symbolic deterministic relationships to form probability distributions over words. We explore the effectiveness of this approach on numbers and geographic locations, and show that NSLMs significantly reduce perplexity in small-corpus language modeling, and that the performance improvement persists for rare tokens even on much larger corpora. The approach is simple and general, and we discuss how it can be applied to other word classes beyond numbers and geography.
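The hierarchical combination can be pictured as a gated mixture over the vocabulary: a neural distribution for ordinary words plus a rule-induced distribution whose support is restricted to a word class such as numbers. The numbers below are placeholders; the actual model learns the gate and uses richer functions.

```python
import numpy as np

vocab = ["the", "in", "1989", "1990", "2020"]
number_idx = [2, 3, 4]

# Neural distribution over the full vocabulary (placeholder values).
p_neural = np.array([0.4, 0.3, 0.1, 0.1, 0.1])

# Symbolic component: a rule-based distribution over number tokens only,
# e.g. peaked around a year mentioned earlier in the document.
p_rule = np.zeros(len(vocab))
p_rule[number_idx] = [0.6, 0.3, 0.1]

gate = 0.25   # P(next word is a number | context); learned in the real model
p = (1 - gate) * p_neural + gate * p_rule
print(p, p.sum())   # still a proper distribution over the vocabulary
```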
15

Tsuji, Masayuki, Teijiro Isokawa, Takayuki Yumoto, Nobuyuki Matsui, and Naotake Kamiura. "Heterogeneous recurrent neural networks for natural language model". Artificial Life and Robotics 24, no. 2 (November 23, 2018): 245–49. http://dx.doi.org/10.1007/s10015-018-0507-1.
16

Pan, Yirong, Xiao Li, Yating Yang, and Rui Dong. "Multi-Source Neural Model for Machine Translation of Agglutinative Language". Future Internet 12, no. 6 (June 3, 2020): 96. http://dx.doi.org/10.3390/fi12060096.

Abstract:
Benefitting from the rapid development of artificial intelligence (AI) and deep learning, the machine translation task based on neural networks has achieved impressive performance in many high-resource language pairs. However, the neural machine translation (NMT) models still struggle in the translation task on agglutinative languages with complex morphology and limited resources. Inspired by the finding that utilizing the source-side linguistic knowledge can further improve the NMT performance, we propose a multi-source neural model that employs two separate encoders to encode the source word sequence and the linguistic feature sequences. Compared with the standard NMT model, we utilize an additional encoder to incorporate the linguistic features of lemma, part-of-speech (POS) tag, and morphological tag by extending the input embedding layer of the encoder. Moreover, we use a serial combination method to integrate the conditional information from the encoders with the outputs of the decoder, which aims to enhance the neural model to learn a high-quality context representation of the source sentence. Experimental results show that our approach is effective for the agglutinative language translation, which achieves the highest improvements of +2.4 BLEU points on Turkish–English translation task and +0.6 BLEU points on Uyghur–Chinese translation task.
17

Perlovsky, Leonid. "Language and Cognition Interaction Neural Mechanisms". Computational Intelligence and Neuroscience 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/454587.

Abstract:
How do language and cognition interact in thinking? Is language just used for communication of completed thoughts, or is it fundamental for thinking? Existing approaches have not led to a computational theory. We develop a hypothesis that language and cognition are two separate but closely interacting mechanisms. Language accumulates cultural wisdom; cognition develops mental representations modeling the surrounding world and adapts cultural knowledge to concrete circumstances of life. Language is acquired from the surrounding language “ready-made” and therefore can be acquired early in life. This early acquisition of language in childhood encompasses the entire hierarchy from sounds to words, to phrases, and to the highest concepts existing in culture. Cognition is developed from experience. Yet cognition cannot be acquired from experience alone; language is a necessary intermediary, a “teacher.” A mathematical model is developed; it overcomes previous difficulties and leads to a computational theory. This model is consistent with Arbib's “language prewired brain” built on top of the mirror neuron system. It models recent neuroimaging data about cognition that remain unaccounted for by other theories. A number of properties of language and cognition are explained which previously seemed mysterious, including the influence of language grammar on cultural evolution, which may explain specifics of English and Arabic cultures.
18

Passban, Peyman, Qun Liu, and Andy Way. "Providing Morphological Information for SMT Using Neural Networks". Prague Bulletin of Mathematical Linguistics 108, no. 1 (June 1, 2017): 271–82. http://dx.doi.org/10.1515/pralin-2017-0026.

Abstract:
Treating morphologically complex words (MCWs) as atomic units in translation would not yield a desirable result. Such words are complicated constituents with meaningful subunits. A complex word in a morphologically rich language (MRL) could be associated with a number of words or even a full sentence in a simpler language, which means the surface form of complex words should be accompanied with auxiliary morphological information in order to provide a precise translation and a better alignment. In this paper we follow this idea and propose two different methods to convey such information for statistical machine translation (SMT) models. In the first model we enrich factored SMT engines by introducing a new morphological factor which relies on subword-aware word embeddings. In the second model we focus on the language-modeling component. We explore a subword-level neural language model (NLM) to capture sequence-, word- and subword-level dependencies. Our NLM is able to approximate better scores for conditional word probabilities, so the decoder generates more fluent translations. We studied two languages Farsi and German in our experiments and observed significant improvements for both of them.
19

Enweiji, Musbah Zaid, Taras Lehinevych, and Andrey Glybovets. "CROSS-LANGUAGE TEXT CLASSIFICATION WITH CONVOLUTIONAL NEURAL NETWORKS FROM SCRATCH". EUREKA: Physics and Engineering 2 (March 31, 2017): 24–33. http://dx.doi.org/10.21303/2461-4262.2017.00304.

Abstract:
Cross-language classification is an important task in multilingual learning, where documents in different languages often share the same set of categories. The main goal is to reduce the labeling cost of training a classification model for each individual language. A novel approach using Convolutional Neural Networks for multilingual language classification is proposed in this article. It learns representations of knowledge gained from languages. Moreover, the method works for a new individual language that was not used in training. The results of an empirical study on a large dataset of 21 languages demonstrate the robustness and competitiveness of the presented approach.
20

Sharma, Richa, Sudha Morwal, and Basant Agarwal. "Named entity recognition using neural language model and CRF for Hindi language". Computer Speech & Language 74 (July 2022): 101356. http://dx.doi.org/10.1016/j.csl.2022.101356.
21

Lalrempuii, Candy, Badal Soni, and Partha Pakray. "An Improved English-to-Mizo Neural Machine Translation". ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 4 (May 26, 2021): 1–21. http://dx.doi.org/10.1145/3445974.

Abstract:
Machine Translation is an effort to bridge language barriers and misinterpretations, making communication more convenient through the automatic translation of languages. The quality of translations produced by corpus-based approaches predominantly depends on the availability of a large parallel corpus. Although machine translation of many Indian languages has progressively gained attention, there is very limited research on machine translation and the challenges of using various machine translation techniques for a low-resource language such as Mizo. In this article, we have implemented and compared statistical-based approaches with modern neural-based approaches for the English–Mizo language pair. We have experimented with different tokenization methods, architectures, and configurations. The performance of translations predicted by the trained models has been evaluated using automatic and human evaluation measures. Furthermore, we have analyzed the prediction errors of the models and the quality of predictions based on variations in sentence length and compared the model performance with the existing baselines.
22

Martin, Andrea E. "A Compositional Neural Architecture for Language". Journal of Cognitive Neuroscience 32, no. 8 (August 2020): 1407–27. http://dx.doi.org/10.1162/jocn_a_01552.

Abstract:
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
23

Zhu, SongGui, Hailang He, and Yuanyuan Zheng. "Locating and Tracking Model for Language Radiation Transmission Based on Neural Network and FAHP". Mathematical Problems in Engineering 2020 (October 26, 2020): 1–8. http://dx.doi.org/10.1155/2020/7625141.

Abstract:
With the development of internationalization, the distribution of languages and the office addresses of multinational companies are changing constantly. This paper makes the following research and exploration on this phenomenon: impact on the development of languages around the world. This paper studies the changes of native and second-language users and uses the historical data to predict the development trend by using the gray number series prediction model. Get the types of factors that affect the second language. Then, use fuzzy analytic hierarchy process to calculate the score of each factor. Finally, the global language trend equation is simulated: predictions for the development of language. In this paper, radiation propagation is calculated, and the method of CNN neural network is used to train big data, and the language trend positioning equation is drawn. Finally, the optimal language is obtained by using wavelet analysis and linear programming at different addresses. About model checking, according to the model’s internal prediction ability and the significance of internal parameters, it is concluded that the model has high practicability, sensitivity, and stability.
24

Zhumagambetov, Rustam, Ferdinand Molnár, Vsevolod A. Peshkov, and Siamac Fazli. "Transmol: repurposing a language model for molecular generation". RSC Advances 11, no. 42 (2021): 25921–32. http://dx.doi.org/10.1039/d1ra03086h.
25

Zhang, Yunyan, Guangluan Xu, Yang Wang, Xiao Liang, Lei Wang, and Tinglei Huang. "Empower event detection with bi-directional neural language model". Knowledge-Based Systems 167 (March 2019): 87–97. http://dx.doi.org/10.1016/j.knosys.2019.01.008.
26

Si, Yujing. "Enhanced Word Classing for Recurrent Neural Network Language Model". Journal of Information and Computational Science 10, no. 12 (August 10, 2013): 3595–604. http://dx.doi.org/10.12733/jics20102110.
27

Plebe, Alessio, Marco Mazzone, and Vivian M. De La Cruz. "A BIOLOGICALLY INSPIRED NEURAL MODEL OF VISION-LANGUAGE INTEGRATION". Neural Network World 21, no. 3 (2011): 227–50. http://dx.doi.org/10.14311/nnw.2011.21.014.
28

Koike, Shuhei, and Akinobu Lee. "Spoken keyword detection using recurrent neural network language model". Journal of the Acoustical Society of America 140, no. 4 (October 2016): 3116. http://dx.doi.org/10.1121/1.4969757.
29

Chen, Mu-Yen, Hsiu-Sen Chiang, Arun Kumar Sangaiah, and Tsung-Che Hsieh. "Recurrent neural network with attention mechanism for language model". Neural Computing and Applications 32, no. 12 (June 21, 2019): 7915–23. http://dx.doi.org/10.1007/s00521-019-04301-x.
30

Shi, Yangyang, Martha Larson, and Catholijn M. Jonker. "Recurrent neural network language model adaptation with curriculum learning". Computer Speech & Language 33, no. 1 (September 2015): 136–54. http://dx.doi.org/10.1016/j.csl.2014.11.004.
31

Gulcehre, Caglar, Orhan Firat, Kelvin Xu, Kyunghyun Cho, and Yoshua Bengio. "On integrating a language model into neural machine translation". Computer Speech & Language 45 (September 2017): 137–48. http://dx.doi.org/10.1016/j.csl.2017.01.014.
32

Jishan, Md Asifuzzaman, Khan Raqib Mahmud, and Abul Kalam Al Azad. "Natural language description of images using hybrid recurrent neural network". International Journal of Electrical and Computer Engineering (IJECE) 9, no. 4 (August 1, 2019): 2932. http://dx.doi.org/10.11591/ijece.v9i4.pp2932-2940.

Abstract:
We presented a learning model that generated natural language descriptions of images. The model utilized the connections between natural language and visual data by producing text-line-based content from a given image. Our Hybrid Recurrent Neural Network model is based on the intricacies of the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bi-directional Recurrent Neural Network (BRNN) models. We conducted experiments on three benchmark datasets, e.g., Flickr8K, Flickr30K, and MS COCO. Our hybrid model utilized the LSTM model to encode text lines or sentences independent of the object location and the BRNN for word representation; this reduced the computational complexities without compromising the accuracy of the descriptor. The model produced better accuracy in retrieving natural language based descriptions on the dataset.
33

Neuman, Yair. "Language mediated mentalization: A proposed model". Semiotica 2019, no. 227 (March 5, 2019): 261–72. http://dx.doi.org/10.1515/sem-2017-0156.

Abstract:
Mentalization describes the process through which we understand the mental states of oneself and others. In this paper, I present a computational semiotic model of mentalization and illustrate it through a worked-out example. The model draws on classical semiotic ideas, such as abductive inference and hypostatic abstraction, but pours them into new ideas and tools from natural language processing, machine learning, and neural networks, to form a novel model of language-mediated-mentalization.
34

Thomas, Merin, Dr Latha C A, and Antony Puthussery. "Identification of language in a cross linguistic environment". Indonesian Journal of Electrical Engineering and Computer Science 18, no. 1 (April 1, 2020): 544. http://dx.doi.org/10.11591/ijeecs.v18.i1.pp544-548.

Abstract:
<p class="normal">World has become very small due to software internationationalism. Applications of machine translations are increasing day by day. Using multiple languages in the social media text is an developing trend. .Availability of fonts in the native language enhanced the usage of native text in internet communications. Usage of transliterations of language has become quite common. In Indian scenario current generations are familiar to talk in native language but not to read and write in the native language, hence they started using English representation of native language in textual messages. This paper describes the identification of the transliterated text in cross lingual environment .In this paper a Neural network model identifies the prominent language in the text and hence the same can be used to identify the meaning of the text in the concerned language. The model is based upon Recurrent Neural Networks that found to be the most efficient in machine translations. Language identification can serve as a base for many applications in multi linguistic environment. Currently the South Indian Languages Malayalam, Tamil are identified from given text. An algorithmic approach of Stop words based model is depicted in this paper. Model can be also enhanced to address all the Indian Languages that are in use.</p>
35

Selot, Smita, Neeta Tripathi, and A. S. Zadgaonkar. "Neural Network Model for Semantic Analysis of Sanskrit Text". International Journal of Natural Computing Research 7, no. 1 (January 2018): 1–14. http://dx.doi.org/10.4018/ijncr.2018010101.

Abstract:
Semantic analysis is the process of extracting the meaning of a sentence in a given language. From the perspective of computer processing, the challenge lies in making the computer understand the meaning of the given sentence. Understandability depends upon the grammar, the syntactic and semantic representation of the language, and the methods employed for extracting these parameters. Semantic interpretation methods vary from natural language to natural language, as the grammatical structure and morphological representation of one language may differ from another. One ancient Indian language, Sanskrit, has its own unique way of embedding syntactic information within words of relevance in a sentence. Sanskrit grammar is defined in 4000 rules by Panini, which reveal the mechanism of adding suffixes to words according to their use in a sentence. Through this article, a method of extracting meaningful information through suffixes and classifying the word into a defined semantic category is presented. The application of NN-based classification has improved the processing of text.
36

Kuwana, Ayato, Atsushi Oba, Ranto Sawai, and Incheon Paik. "Automatic Taxonomy Classification by Pretrained Language Model". Electronics 10, no. 21 (October 29, 2021): 2656. http://dx.doi.org/10.3390/electronics10212656.

Abstract:
In recent years, automatic ontology generation has received significant attention in information science as a means of systemizing vast amounts of online data. As our initial attempt of ontology generation with a neural network, we proposed a recurrent neural network-based method. However, updating the architecture is possible because of the development in natural language processing (NLP). By contrast, the transfer learning of language models trained by a large, unlabeled corpus has yielded a breakthrough in NLP. Inspired by these achievements, we propose a novel workflow for ontology generation comprising two-stage learning. Our results showed that our best method improved accuracy by over 12.5%. As an application example, we applied our model to the Stanford Question Answering Dataset to show ontology generation in a real field. The results showed that our model can generate a good ontology, with some exceptions in the real field, indicating future research directions to improve the quality.
37

Bird, Jordan J., Anikó Ekárt, and Diego R. Faria. "British Sign Language Recognition via Late Fusion of Computer Vision and Leap Motion with Transfer Learning to American Sign Language". Sensors 20, no. 18 (September 9, 2020): 5151. http://dx.doi.org/10.3390/s20185151.

Abstract:
In this work, we show that a late fusion approach to multimodality in sign language recognition improves the overall ability of the model in comparison to the singular approaches of image classification (88.14%) and Leap Motion data classification (72.73%). With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep neural networks are benchmarked and compared to derive a best topology for each. The Vision model is implemented by a Convolutional Neural Network and optimised Artificial Neural Network, and the Leap Motion model is implemented by an evolutionary search of Artificial Neural Network topology. Next, the two best networks are fused for synchronised processing, which results in a better overall result (94.44%) as complementary features are learnt in addition to the original task. The hypothesis is further supported by application of the three models to a set of completely unseen data where a multimodality approach achieves the best results relative to the single sensor method. When transfer learning with the weights trained via British Sign Language, all three models outperform standard random weight distribution when classifying American Sign Language (ASL), and the best model overall for ASL classification was the transfer learning multimodality approach, which scored 82.55% accuracy.
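As a minimal illustration of late fusion, the sketch below combines class probabilities from two already-trained models with a fixed weight; the paper learns the fusion network instead, so the weighted average is a simplification.

```python
import numpy as np

def late_fusion(p_vision, p_leap, w=0.6):
    """Weighted average of class probabilities from two trained models."""
    fused = w * np.asarray(p_vision) + (1 - w) * np.asarray(p_leap)
    return fused / fused.sum()

p_vision = [0.7, 0.2, 0.1]   # e.g. softmax over three gestures
p_leap = [0.4, 0.5, 0.1]
print(late_fusion(p_vision, p_leap))   # -> [0.58 0.32 0.10]
```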
38

Jishan, Md Asifuzzaman, Khan Raqib Mahmud, Abul Kalam Al Azad, Mohammad Rifat Ahmmad Rashid, Bijan Paul, and Md Shahabub Alam. "Bangla language textual image description by hybrid neural network model". Indonesian Journal of Electrical Engineering and Computer Science 21, no. 2 (February 1, 2021): 757. http://dx.doi.org/10.11591/ijeecs.v21.i2.pp757-767.

Abstract:
Automatic image captioning in different languages is a challenging task which has not been well investigated yet due to the lack of datasets and effective models. It also requires good understanding of scene and contextual embedding for robust semantic interpretation of images for a natural language image descriptor. To generate image descriptions in Bangla, we created a new Bangla dataset of images paired with target language labels, named the Bangla Natural Language Image to Text (BNLIT) dataset. To deal with the image understanding, we propose a hybrid model based on the encoder-decoder architecture, and the model is evaluated on our newly created dataset. This proposed approach achieves significant performance improvement on the task of semantic retrieval of images. Our hybrid model uses the Convolutional Neural Network as an encoder whereas the Bidirectional Long Short Term Memory is used for the sentence representation, which decreases the computational complexity without trading off the exactness of the descriptor. The model yielded benchmark accuracy in recovering Bangla natural language, and we also conducted a thorough numerical analysis of the model performance on the BNLIT dataset.
39

Szwed, Marcin, Fabien Vinckier, Laurent Cohen, and Stanislas Dehaene. "Towards a universal neurobiological architecture for learning to read". Behavioral and Brain Sciences 35, no. 5 (August 29, 2012): 308–9. http://dx.doi.org/10.1017/s0140525x12000283.

Abstract:
Letter-position tolerance varies across languages. This observation suggests that the neural code for letter strings may also be subtly different. Although language-specific models remain useful, we should endeavor to develop a universal model of reading acquisition which incorporates crucial neurobiological constraints. Such a model, through a progressive internalization of phonological and lexical regularities, could perhaps converge onto the language-specific properties outlined by Frost.
40

Zhang, Shujing. "Language Processing Model Construction and Simulation Based on Hybrid CNN and LSTM". Computational Intelligence and Neuroscience 2021 (July 6, 2021): 1–11. http://dx.doi.org/10.1155/2021/2578422.

Abstract:
Deep learning is the latest trend of machine learning and artificial intelligence research. As a new field with rapid development over the past decade, it has attracted more and more researchers’ attention. Convolutional Neural Network (CNN) model is one of the most important classical structures in deep learning models, and its performance has been gradually improved in deep learning tasks in recent years. Convolutional neural networks have been widely used in image classification, target detection, semantic segmentation, and natural language processing because they can automatically learn the feature representation of sample data. Firstly, this paper analyzes the model structure of a typical convolutional neural network model to increase the network depth and width in order to improve its performance, analyzes the network structure that further improves the model performance by using the attention mechanism, and then summarizes and analyzes the current special model structure. In order to further improve the text language processing effect, a convolutional neural network model, Hybrid convolutional neural network (CNN), and Long Short-Term Memory (LSTM) based on the fusion of text features and language knowledge are proposed. The text features and language knowledge are integrated into the language processing model, and the accuracy of the text language processing model is improved by parameter optimization. Experimental results on data sets show that the accuracy of the proposed model reaches 93.0%, which is better than the reference model in the literature.
41

Mo, Hsu Myat, and Khin Mar Soe. "Myanmar named entity corpus and its use in syllable-based neural named entity recognition". International Journal of Electrical and Computer Engineering (IJECE) 10, no. 2 (April 1, 2020): 1544. http://dx.doi.org/10.11591/ijece.v10i2.pp1544-1551.

Abstract:
Myanmar language is a low-resource language and this is one of the main reasons why Myanmar Natural Language Processing lagged behind compared to other languages. Currently, there is no publicly available named entity corpus for Myanmar language. As part of this work, a very first manually annotated Named Entity tagged corpus for Myanmar language was developed and proposed to support the evaluation of named entity extraction. At present, our named entity corpus contains approximately 170,000 name entities and 60,000 sentences. This work also contributes the first evaluation of various deep neural network architectures on Myanmar Named Entity Recognition. Experimental results of the 10-fold cross validation revealed that syllable-based neural sequence models without additional feature engineering can give better results compared to baseline CRF model. This work also aims to discover the effectiveness of neural network approaches to textual processing for Myanmar language as well as to promote future research works on this understudied language.
42

Lee, Jason, Kyunghyun Cho, and Thomas Hofmann. "Fully Character-Level Neural Machine Translation without Explicit Segmentation". Transactions of the Association for Computational Linguistics 5 (December 2017): 365–78. http://dx.doi.org/10.1162/tacl_a_00067.

Abstract:
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT’15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of the BLEU score and human judgment.
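The length-reduction mechanism of the character-level encoder can be sketched in PyTorch: embed characters, convolve, then max-pool with a stride so the recurrent layers see a much shorter sequence. The real model uses multiple filter widths and highway layers; the sizes here are arbitrary.

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Character embeddings -> 1-D convolution -> strided max-pooling,
    shrinking a long character sequence before any recurrent layers."""
    def __init__(self, n_chars=128, emb=64, hidden=256, pool=5):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.conv = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=pool, stride=pool)

    def forward(self, char_ids):                   # (batch, seq_len)
        x = self.emb(char_ids).transpose(1, 2)     # (batch, emb, seq_len)
        x = torch.relu(self.conv(x))
        return self.pool(x).transpose(1, 2)        # (batch, seq_len//pool, hidden)

enc = CharEncoder()
chars = torch.randint(0, 128, (2, 100))            # two 100-character "sentences"
print(enc(chars).shape)                            # torch.Size([2, 20, 256])
```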
43

Bera, Abhijit, Mrinal Kanti Ghose, and Dibyendu Kumar Pal. "Sentiment Analysis of Multilingual Tweets Based on Natural Language Processing (NLP)". International Journal of System Dynamics Applications 10, no. 4 (October 2021): 1–12. http://dx.doi.org/10.4018/ijsda.20211001.oa16.

Abstract:
Multilingual sentiment analysis plays an important role in a country like India with many languages, as the style of expression varies in different languages. The Indian people speak a total of 22 different languages, and with the help of the Google Indic keyboard people can express their sentiments, i.e., reviews about anything in social media, in their native language from individual smartphones. It has been found that the machine learning approach has overcome the limitations of other approaches. In this paper, a detailed study has been carried out based on Natural Language Processing (NLP) using a Simple Neural Network (SNN), a Convolutional Neural Network (CNN), and a Long Short Term Memory (LSTM) neural network, followed by another amalgamated model adding a CNN layer on top of the LSTM, without worrying about the versatility of multilingualism. Around 4000 samples of reviews in English, Hindi, and Bengali are considered to generate outputs for the above models and analyzed. The experimental results on these realistic reviews are found to be effective for further research work.
44

Calvillo, Jesús, Harm Brouwer, and Matthew W. Crocker. "Semantic Systematicity in Connectionist Language Production". Information 12, no. 8 (August 16, 2021): 329. http://dx.doi.org/10.3390/info12080329.

Abstract:
Decades of studies trying to define the extent to which artificial neural networks can exhibit systematicity suggest that systematicity can be achieved by connectionist models but not by default. Here we present a novel connectionist model of sentence production that employs rich situation model representations originally proposed for modeling systematicity in comprehension. The high performance of our model demonstrates that such representations are also well suited to model language production. Furthermore, the model can produce multiple novel sentences for previously unseen situations, including in a different voice (actives vs. passive) and with words in new syntactic roles, thus demonstrating semantic and syntactic generalization and arguably systematicity. Our results provide yet further evidence that such connectionist approaches can achieve systematicity, in production as well as comprehension. We propose our positive results to be a consequence of the regularities of the microworld from which the semantic representations are derived, which provides a sufficient structure from which the neural network can interpret novel inputs.
45

Min, Wu, and Zhu Shanshan. "Language Recognition Method of Convolutional Neural Network Based on Spectrogram". Journal of Education, Teaching and Social Studies 1, no. 2 (December 30, 2019): p113. http://dx.doi.org/10.22158/jetss.v1n2p113.

Abstract:
Language recognition is an important branch of speech technology. As a front-end technology of speech information processing, higher recognition accuracy is required. It is found through research that there are obvious differences between the language maps of different languages, which can be used for language identification. This paper uses a convolutional neural network as a classification model, and compares the language recognition effects of traditional language recognition features and spectrogram features on the five language recognition tasks of Chinese, Japanese, Vietnamese, Russian, and Spanish through experiments. The best effect is the ivector feature, and the spectrogram feature has a higher F value than the low-dimensional ivector feature.
46

Tukeyev, Ualsher, Aidana Karibayeva, and Balzhan Abduali. "Neural machine translation system for the Kazakh language based on synthetic corpora". MATEC Web of Conferences 252 (2019): 03006. http://dx.doi.org/10.1051/matecconf/201925203006.

Abstract:
The lack of big parallel data is present for the Kazakh language. This problem seriously impairs the quality of machine translation from and into Kazakh. This article considers the neural machine translation of the Kazakh language on the basis of synthetic corpora. The Kazakh language belongs to the Turkic languages, which are characterised by rich morphology. Neural machine translation of natural languages requires large training data. The article will show the model for the creation of synthetic corpora, namely the generation of sentences based on complete suffixes for the Kazakh language. The novelty of this approach of the synthetic corpora generation for the Kazakh language is the generation of sentences on the basis of the complete system of suffixes of the Kazakh language. By using generated synthetic corpora we are improving the translation quality in neural machine translation of Kazakh-English and Kazakh-Russian pairs.
47

Gebre, Gizachew Belayneh, et al. "Artificial Neural Network Based Amharic Language Speaker Recognition". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 11, 2021): 5105–16. http://dx.doi.org/10.17762/turcomat.v12i3.2043.

Abstract:
In this artificial intelligence era, speaker recognition is a most useful biometric recognition technique. Security is a big issue that needs careful attention, because all activities are becoming automated and internet based. For security purposes, unique features of the authorized user are highly needed. Voice is one wonderful unique biometric feature. So, developing speaker recognition based on scientific research is a most important issue. Nowadays, criminal activities are increasing day by day in different clever ways, so every country should strengthen forensic investigation using such technologies. The study was done with the inspiration of contextualizing this concept for our country. In this study, a text-independent Amharic language speaker recognition model was developed using Mel-Frequency Cepstral Coefficients to extract features from preprocessed speech signals and an Artificial Neural Network to model the feature vectors obtained from the Mel-Frequency Cepstral Coefficients and to classify objects while testing. The researcher used 20 sampled speeches from each of 10 speakers (200 speech samples in total) for training and testing separately. By setting the number of hidden neurons to 15, 20, and 25, three different models were developed and evaluated for accuracy. The fourth-generation high-level programming language and interactive environment MATLAB was used for the overall study implementation. In the end, very promising findings were obtained. The study achieved better performance than related research that used Vector Quantization and Gaussian Mixture Model modelling techniques. An implementable result could be obtained in the future by increasing the number of speakers and speech samples and including the four Amharic accents.
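The described pipeline (MFCC features into a small feed-forward network) maps onto standard libraries. The sketch below uses librosa for MFCCs and scikit-learn's MLPClassifier with 15 hidden neurons, on random stand-in signals since the recordings are not available; a real run would load each utterance with librosa.load(path, sr=16000).

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_vector(signal, sr=16000, n_mfcc=13):
    """Collapse the MFCC frame sequence into one fixed-length vector."""
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Stand-in data: random 1-second signals for 10 "speakers", 20 clips each.
rng = np.random.default_rng(0)
X = np.stack([mfcc_vector(rng.standard_normal(16000).astype(np.float32))
              for _ in range(200)])
y = np.repeat(np.arange(10), 20)

# 15 hidden neurons, matching the smallest configuration in the paper.
clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=500).fit(X, y)
print(clf.score(X, y))   # training accuracy on the stand-in data
```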
48

Babić, Karlo, Sanda Martinčić-Ipšić, and Ana Meštrović. "Survey of Neural Text Representation Models". Information 11, no. 11 (October 30, 2020): 511. http://dx.doi.org/10.3390/info11110511.

Abstract:
In natural language processing, text needs to be transformed into a machine-readable representation before any processing. The quality of further natural language processing tasks greatly depends on the quality of those representations. In this survey, we systematize and analyze 50 neural models from the last decade. The models described are grouped by the architecture of neural networks as shallow, recurrent, recursive, convolutional, and attention models. Furthermore, we categorize these models by representation level, input level, model type, and model supervision. We focus on task-independent representation models, discuss their advantages and drawbacks, and subsequently identify the promising directions for future neural text representation models. We describe the evaluation datasets and tasks used in the papers that introduced the models and compare the models based on relevant evaluations. The quality of a representation model can be evaluated as its capability to generalize to multiple unrelated tasks. Benchmark standardization is visible amongst recent models and the number of different tasks models are evaluated on is increasing.
49

Huang, Shuang, Xuan Zhou, Ke Xue, Xiqiong Wan, Zhenyi Yang, Duo Xu, Mirjana Ivanović, and Xueer Yu. "Neural Cognition and Affective Computing on Cyber Language". Computational Intelligence and Neuroscience 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/749326.

Abstract:
Characterized by its customary symbol system and simple and vivid expression patterns, cyber language acts as not only a tool for convenient communication but also a carrier of abundant emotions and causes high attention in public opinion analysis, internet marketing, service feedback monitoring, and social emergency management. Based on our multidisciplinary research, this paper presents a classification of the emotional symbols in cyber language, analyzes the cognitive characteristics of different symbols, and puts forward a mechanism model to show the dominant neural activities in that process. Through the comparative study of Chinese, English, and Spanish, which are used by the largest population in the world, this paper discusses the expressive patterns of emotions in international cyber languages and proposes an intelligent method for affective computing on cyber language in a unified PAD (Pleasure-Arousal-Dominance) emotional space.
50

Rabovsky, Milena, and James L. McClelland. "Quasi-compositional mapping from form to meaning: a neural network-based approach to capturing neural responses during human language comprehension". Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1791 (December 16, 2019): 20190313. http://dx.doi.org/10.1098/rstb.2019.0313.

Abstract:
We argue that natural language can be usefully described as quasi-compositional and we suggest that deep learning-based neural language models bear long-term promise to capture how language conveys meaning. We also note that a successful account of human language processing should explain both the outcome of the comprehension process and the continuous internal processes underlying this performance. These points motivate our discussion of a neural network model of sentence comprehension, the Sentence Gestalt model, which we have used to account for the N400 component of the event-related brain potential (ERP), which tracks meaning processing as it happens in real time. The model, which shares features with recent deep learning-based language models, simulates N400 amplitude as the automatic update of a probabilistic representation of the situation or event described by the sentence, corresponding to a temporal difference learning signal at the level of meaning. We suggest that this process happens relatively automatically, and that sometimes a more-controlled attention-dependent process is necessary for successful comprehension, which may be reflected in the subsequent P600 ERP component. We relate this account to current deep learning models as well as classic linguistic theory, and use it to illustrate a domain general perspective on some specific linguistic operations postulated based on compositional analyses of natural language. This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.