Academic literature on the topic "N-gram language models"

Create an accurate citation in the APA, MLA, Chicago, Harvard, and other styles

Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "N-gram language models".

Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "N-gram language models"

1

Llorens, David, Juan Miguel Vilar, and Francisco Casacuberta. "Finite State Language Models Smoothed Using n-Grams". International Journal of Pattern Recognition and Artificial Intelligence 16, no. 03 (May 2002): 275–89. http://dx.doi.org/10.1142/s0218001402001666.

Abstract
We address the problem of smoothing the probability distribution defined by a finite state automaton. Our approach extends the ideas employed for smoothing n-gram models. This extension is obtained by interpreting n-gram models as finite state models. The experiments show that our smoothing improves perplexity over smoothed n-grams and Error Correcting Parsing techniques.
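To make the construction concrete, interpolated n-gram smoothing, the starting point that the paper generalizes to arbitrary finite state automata, looks as follows. This is a minimal Python sketch with an invented corpus and interpolation weight, not the authors' implementation:

    from collections import Counter

    def train_interpolated_bigram(tokens, lam=0.7):
        """Interpolated bigram: P(w2|w1) = lam * ML bigram + (1 - lam) * unigram."""
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        total = len(tokens)

        def prob(w1, w2):
            p_uni = unigrams[w2] / total                        # backoff distribution
            p_bi = bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
            return lam * p_bi + (1 - lam) * p_uni               # linear interpolation

        return prob

    # Every conditional distribution stays nonzero for seen vocabulary.
    p = train_interpolated_bigram("a b a c a b".split())
    print(p("a", "b"), p("c", "b"))

An n-gram model of this kind is itself a finite state model whose states are the (n-1)-gram histories, which is the interpretation that lets the same smoothing be carried over to general automata.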
2

Memushaj, Alket, and Tarek M. Sobh. "Using Grapheme n-Grams in Spelling Correction and Augmentative Typing Systems". New Mathematics and Natural Computation 04, no. 01 (March 2008): 87–106. http://dx.doi.org/10.1142/s1793005708000970.

Abstract
Probabilistic language models have gained popularity in Natural Language Processing due to their ability to successfully capture language structures and constraints with computational efficiency. Probabilistic language models are flexible and easily adapted to language changes over time, as well as to some new languages. Probabilistic language models can be trained, and their accuracy is strongly related to the availability of large text corpora. In this paper, we investigate the usability of grapheme probabilistic models, specifically grapheme n-gram models, in spellchecking as well as augmentative typing systems. Grapheme n-gram models require substantially smaller training corpora, and that is one of the main drivers for this thesis, in which we build grapheme n-gram language models for the Albanian language. There are presently no available Albanian language corpora to be used for probabilistic language modeling. Our technique attempts to augment spellchecking and typing systems by utilizing grapheme n-gram language models to improve suggestion accuracy in spellchecking and augmentative typing systems. Our technique can be implemented in a standalone tool or incorporated in another tool to offer additional selection/scoring criteria.
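As a hedged sketch of how such a model can rank spelling suggestions (the training words, candidate list, smoothing, and alphabet size below are invented for illustration, not the paper's implementation):

    import math
    from collections import Counter

    def char_trigram_logprob(word, counts2, counts3, alpha=1.0, vocab=30):
        """Add-alpha smoothed log P(word) under a grapheme trigram model."""
        w = f"##{word}#"                      # boundary-padded grapheme sequence
        return sum(
            math.log((counts3[w[i:i+3]] + alpha) / (counts2[w[i:i+2]] + alpha * vocab))
            for i in range(len(w) - 2)
        )

    # Train on a tiny corpus of correctly spelled words.
    counts2, counts3 = Counter(), Counter()
    for word in ["language", "modeling", "spelling", "correction"]:
        w = f"##{word}#"
        for i in range(len(w) - 2):
            counts2[w[i:i+2]] += 1
            counts3[w[i:i+3]] += 1

    # Rank candidate corrections for a misspelling by model score.
    candidates = ["speling", "spelling", "spelying"]
    print(max(candidates, key=lambda c: char_trigram_logprob(c, counts2, counts3)))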
3

Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models". International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.

4

Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models". International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.10016827.

5

Takase, Sho, Jun Suzuki, and Masaaki Nagata. "Character n-Gram Embeddings to Improve RNN Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5074–82. http://dx.doi.org/10.1609/aaai.v33i01.33015074.

Abstract
This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction (Wieting et al. 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
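A minimal sketch of the core construction, building a word vector as the sum of its character n-gram vectors (the dimensionality and hash-bucket table are illustrative assumptions; the paper additionally combines the result with an ordinary word embedding):

    import numpy as np

    DIM, BUCKETS = 8, 1000
    rng = np.random.default_rng(0)
    ngram_table = rng.normal(size=(BUCKETS, DIM))    # one vector per hashed n-gram

    def char_ngrams(word, n_min=2, n_max=3):
        w = f"<{word}>"                              # boundary markers
        return [w[i:i + n] for n in range(n_min, n_max + 1)
                for i in range(len(w) - n + 1)]

    def word_vector(word):
        """Word embedding as the sum of its character n-gram embeddings."""
        idxs = [hash(g) % BUCKETS for g in char_ngrams(word)]
        return ngram_table[idxs].sum(axis=0)

    # Morphologically related words share n-grams and hence end up close.
    v1, v2 = word_vector("model"), word_vector("models")
    print(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))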
6

Santos, André L., Gonçalo Prendi, Hugo Sousa, and Ricardo Ribeiro. "Stepwise API usage assistance using n-gram language models". Journal of Systems and Software 131 (September 2017): 461–74. http://dx.doi.org/10.1016/j.jss.2016.06.063.

7

Nederhof, Mark-Jan. "A General Technique to Train Language Models on Language Models". Computational Linguistics 31, no. 2 (June 2005): 173–85. http://dx.doi.org/10.1162/0891201054223986.

Abstract
We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton is provably minimal. This is a substantial generalization of an existing algorithm to train an n-gram model on the basis of a probabilistic context-free grammar.
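In symbols, and with notation assumed here for illustration rather than quoted from the paper, the criterion that is provably minimized is the Kullback-Leibler distance from the grammar's string distribution p_G to the automaton's distribution p_A:

    \min_{A} \mathrm{KL}(p_G \,\|\, p_A)
      = \min_{A} \sum_{w \in \Sigma^*} p_G(w) \log \frac{p_G(w)}{p_A(w)}

Training an n-gram model from a probabilistic context-free grammar corresponds to the special case in which the automaton's states are the n-gram histories.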
8

Crego, Josep M., and François Yvon. "Factored bilingual n-gram language models for statistical machine translation". Machine Translation 24, no. 2 (June 2010): 159–75. http://dx.doi.org/10.1007/s10590-010-9082-5.

9

Lin, Jimmy, and W. John Wilbur. "Modeling actions of PubMed users with n-gram language models". Information Retrieval 12, no. 4 (September 12, 2008): 487–503. http://dx.doi.org/10.1007/s10791-008-9067-7.

10

Guo, Yuqing, Haifeng Wang, and Josef van Genabith. "Dependency-based n-gram models for general purpose sentence realisation". Natural Language Engineering 17, no. 4 (November 29, 2010): 455–83. http://dx.doi.org/10.1017/s1351324910000288.

Abstract
This paper presents a general-purpose, wide-coverage, probabilistic sentence generator based on dependency n-gram models. This is particularly interesting as many semantic or abstract syntactic input specifications for sentence realisation can be represented as labelled bi-lexical dependencies or typed predicate-argument structures. Our generation method captures the mapping between semantic representations and surface forms by linearising a set of dependencies directly, rather than via the application of grammar rules as in more traditional chart-style or unification-based generators. In contrast to conventional n-gram language models over surface word forms, we exploit structural information and various linguistic features inherent in the dependency representations to constrain the generation space and improve the generation quality. A series of experiments shows that dependency-based n-gram models generalise well to different languages (English and Chinese) and representations (LFG and CoNLL). Compared with state-of-the-art generation systems, our general-purpose sentence realiser is highly competitive with the added advantages of being simple, fast, robust and accurate.
More sources

Theses on the topic "N-gram language models"

1

Kulhanek, Raymond Daniel. "A Latent Dirichlet Allocation/N-gram Composite Language Model". Wright State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=wright1379520876.

2

Zhou, Hanqing. "DBpedia Type and Entity Detection Using Word Embeddings and N-gram Models". Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37324.

Abstract
Nowadays, knowledge bases are used more and more in Semantic Web tasks, such as knowledge acquisition (Hellmann et al., 2013), disambiguation (Garcia et al., 2009) and named entity corpus construction (Hahm et al., 2014), to name a few. DBpedia is playing a central role on the linked open data cloud; therefore, the quality of this knowledge base is becoming a central point of focus. However, there are some issues with the quality of DBpedia. In particular, DBpedia suffers from three major types of problems: a) invalid types for entities, b) missing types for entities, and c) invalid entities in the resources’ description. In order to enhance the quality of DBpedia, it is important to detect these invalid types and resources, as well as complete missing types. The three main goals of this thesis are: a) invalid entity type detection in order to solve the problem of invalid DBpedia types for entities, b) automatic detection of the types of entities in order to solve the problem of missing DBpedia types for entities, and c) invalid entity detection in order to solve the problem of invalid entities in the resource description of a DBpedia entity. We compare several methods for the detection of invalid types, automatic typing of entities, and invalid entities detection in the resource descriptions. In particular, we compare different classification and clustering algorithms based on various sets of features: entity embedding features (Skip-gram and CBOW models) and traditional n-gram features. We present evaluation results for 358 DBpedia classes extracted from the DBpedia ontology. The main contribution of this work consists of the development of automatic invalid type detection, automatic entity typing, and automatic invalid entity detection methods using clustering and classification. Our results show that entity embedding models usually perform better than n-gram models, especially the Skip-gram embedding model.
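As a hedged sketch of the "traditional n-gram features" side of such a comparison (the entity strings, labels, and classifier below are invented for illustration and are not the thesis's exact setup):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Character n-gram features (n = 1..3) feeding a linear classifier
    # for entity-type prediction.
    clf = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
        LogisticRegression(max_iter=1000),
    )
    entities = ["Barack Obama", "Berlin", "Angela Merkel", "Paris"]
    types = ["Person", "City", "Person", "City"]
    clf.fit(entities, types)
    print(clf.predict(["Joe Biden", "London"]))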
3

Mehay, Dennis Nolan. "Bean Soup Translation: Flexible, Linguistically-motivated Syntax for Machine Translation". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345433807.

4

Ebadat, Ali-Reza. "Toward Robust Information Extraction Models for Multimedia Documents". PhD thesis, INSA de Rennes, 2012. http://tel.archives-ouvertes.fr/tel-00760383.

Abstract
Over the last decade, huge quantities of multimedia documents have been generated. It is therefore important to find a way to manage these data, in particular from a semantic point of view, which requires detailed knowledge of their content. There are two families of approaches for doing this: extracting the information from the document itself (e.g., audio, image), or using textual data extracted from the document or from external sources (e.g., the Web). Our work belongs to the second family of approaches; the information extracted from the texts can then be used to annotate the multimedia documents and facilitate their management. The objective of this thesis is therefore to develop such information extraction models. Since the texts extracted from multimedia documents are generally short and noisy, this work also attends to their necessary robustness. We have therefore favoured simple techniques requiring little external knowledge as a guarantee of robustness, drawing on work in information retrieval and statistical text analysis. We concentrated in particular on three tasks: supervised extraction of relations between entities, relation discovery, and discovery of entity classes. For relation extraction, we propose a supervised approach based on language models and the k-nearest-neighbours learning algorithm. The experimental results show the effectiveness and robustness of our models, which outperform state-of-the-art systems while using linguistic information that is simpler to obtain. For the second task, we move to an unsupervised model to discover relations instead of extracting predefined ones. We model this problem as a clustering task with a similarity function that is again based on language models. The performance, evaluated on a corpus of football match videos, shows the interest of our approach compared with classical models. Finally, in the last task, we focus no longer on relations but on entities, an essential source of information in documents. We propose an entity clustering technique to bring out semantic classes among them without prior assumptions, adopting a new data representation that better takes into account each occurrence of the entities. In conclusion, we have shown experimentally that simple techniques, requiring little prior knowledge and using easily accessible linguistic information, can be sufficient to extract precise information from text effectively. In our case, these good results are obtained by choosing a representation adapted to the data, based on statistical analysis or information retrieval models. There is still a long way to go before multimedia documents can be processed directly, but we hope that our proposals can serve as a springboard for future research in this field.
5

Jiang, Yuandong. "Large Scale Distributed Semantic N-gram Language Model". Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1316200173.

6

Hannemann, Mirko. "Rozpoznávácí sítě založené na konečných stavových převodnících pro dopředné a zpětné dekódování v rozpoznávání řeči" [Recognition networks based on finite state transducers for forward and backward decoding in speech recognition]. Doctoral thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-412550.

Abstract
A number of tasks, including automatic speech recognition (ASR), can be formulated in the mathematical formalism of weighted finite state transducers (WFSTs). Today's ASR systems make extensive use of composed probabilistic models called decoding graphs or recognition networks. These are constructed from individual components using WFST operations such as composition. Each component is a knowledge source that constrains the search for the best path through the composed graph in an operation called decoding. The use of a coherent theoretical framework guarantees that the resulting structure is optimal according to a defined criterion. Within a given semiring, WFSTs can be optimized by determinization and minimization. Applying these algorithms yields an optimal structure for search, and an optimal distribution of weights is then obtained by applying the weight pushing algorithm. The goal of this thesis is to improve the procedures and algorithms for constructing optimal recognition networks. We introduce an alternative weight pushing algorithm that is suitable for an important class of models, language model transducers, and in general for all cyclic WFSTs and WFSTs with back-off transitions. We also present a way to construct a recognition network suitable for decoding backwards in time that provably produces the same probabilities as the forward network. For this purpose, we developed an algorithm for the exact reversal of back-off language models and of the transducers that represent them. Using the backward recognition networks, we optimize decoding: in a static decoder we use them for two-pass decoding (forward and backward search). This approach, tracked decoding, makes it possible to incorporate the search results of the first pass into the second pass by tracking the hypotheses contained in the first-pass lattice. The result is a substantial speed-up of decoding, because this technique allows searching with a variable search beam, which is mostly much narrower than in the baseline approach. We also show that this technique can be used in a dynamic decoder by progressively refining the recognition, which moreover leads to a partial parallelization of decoding.
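For orientation, the classical weight pushing step to which the thesis proposes an alternative can be written, for an arc e with source state p(e) and destination state n(e), as

    w'(e) = V(p(e))^{-1} \otimes w(e) \otimes V(n(e))

where V(q) is the potential of state q (the shortest distance, i.e. the combined weight of all paths from q to the final states) and ⊗ is the semiring product. This is the textbook formulation, not the alternative algorithm the thesis introduces for cyclic WFSTs and back-off language model transducers.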
7

O'Boyle, Peter L. "A study of an N-gram language model for speech recognition". Thesis, Queen's University Belfast, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333827.

8

Civera, Saiz Jorge. "Novel statistical approaches to text classification, machine translation and computer-assisted translation". Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/2502.

Abstract
This thesis presents several contributions in the fields of automatic text classification, machine translation, and computer-assisted translation under the statistical framework. In automatic text classification, a new application called bilingual text classification is proposed, together with a series of models aimed at capturing such bilingual information. To this end, two approaches to this application are presented; the first is based on a naive assumption of independence between the two languages involved, while the second, more sophisticated one considers the existence of a correlation between words in different languages. The first approach gave rise to five models based on unigram models and smoothed n-gram models. These models were evaluated on three tasks of increasing complexity, the most complex of which was analysed from the point of view of a system for assisting document indexing. The second approach is characterized by translation models capable of capturing the correlation between words in different languages. In our case, the chosen translation model was the M1 model together with a unigram model. This model was evaluated on the two simplest tasks, outperforming the naive approach, which assumes independence between words in different languages in bilingual texts. In machine translation, the word-based statistical translation models M1, M2, and HMM are extended under the mixture-modelling framework with the aim of defining context-dependent translation models. An iterative dynamic-programming search algorithm, originally designed for the M2 model, is likewise extended to the case of mixtures of M2 models. This search algorithm…
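The mixture extension mentioned here can be summarized in one equation, with notation assumed for illustration: the translation probability becomes a convex combination of component word-based models,

    p(f \mid e) = \sum_{m=1}^{M} \lambda_m \, p_m(f \mid e),
    \qquad \lambda_m \ge 0, \quad \sum_{m} \lambda_m = 1,

where each p_m is an M1-, M2-, or HMM-style model and the weights λ_m are typically estimated with the EM algorithm.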
9

Randák, Richard. "N-Grams as a Measure of Naturalness and Complexity". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-90006.

Abstract
We live in a time where software is used everywhere. It is even used for creating other software, by helping developers with writing or generating new code. To do this properly, metrics that measure software quality are used to evaluate the final code. However, they are sometimes too costly to compute, or simply don't have the expected effect. Therefore, new and better ways of software evaluation are needed. In this research, we investigate the usage of statistical approaches commonly used in the natural language processing (NLP) area. In order to introduce and evaluate new metrics, a Java n-gram language model is created from a large corpus of Java code. Naturalness, a method-level metric, is introduced and calculated for chosen projects. The correlations with well-known software complexity metrics are calculated and discussed. The results, however, show that the metric, in the form that we have defined it, is not suitable for software complexity evaluation, since it is highly correlated with a well-known metric (token count) that is much easier to compute. A different definition of the metric is suggested, which could be a target of future study and research.
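A minimal sketch of the kind of score at issue: the per-token cross-entropy of a method's token stream under a smoothed code n-gram model, where lower values mean more "natural" code (the toy corpus, smoothing, and vocabulary size are illustrative assumptions, not the thesis's setup):

    import math
    from collections import Counter

    def cross_entropy(tokens, counts2, counts3, alpha=1.0, vocab=100):
        """Per-token cross-entropy (bits) under a smoothed trigram model."""
        padded = ["<s>", "<s>"] + tokens
        bits = 0.0
        for i in range(2, len(padded)):
            hist = tuple(padded[i - 2:i])
            p = (counts3[hist + (padded[i],)] + alpha) / (counts2[hist] + alpha * vocab)
            bits -= math.log2(p)
        return bits / len(tokens)

    # Train on a toy corpus of tokenized Java-like statements.
    counts2, counts3 = Counter(), Counter()
    for toks in [["int", "i", "=", "0", ";"], ["int", "j", "=", "1", ";"]]:
        padded = ["<s>", "<s>"] + toks
        for i in range(2, len(padded)):
            counts2[tuple(padded[i - 2:i])] += 1
            counts3[tuple(padded[i - 2:i + 1])] += 1

    print(cross_entropy(["int", "k", "=", "0", ";"], counts2, counts3))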
10

Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.

Abstract
The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state of the art and have been shown to consistently reduce the word error rates (WERs) of LVCSR tasks when compared to other language models. In this thesis we propose various advances to RNNLMs: improved learning procedures, enhanced context, and adaptation. We learned better parameters by a novel pre-training approach and enhanced the context using prosody and syntactic features. We present a pre-training method for RNNLMs, in which the output weights of a feed-forward neural network language model (NNLM) are shared with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. To investigate the effectiveness of the proposed pre-training method, we carried out text-based experiments on the Penn Treebank Wall Street Journal data and ASR experiments on the TED lectures data. Across the experiments, we observe small but significant improvements in perplexity (PPL) and ASR WER. Next, we present unsupervised adaptation of RNNLMs. We adapted the RNNLMs to a target domain (topic, genre, or television programme) at test time using ASR transcripts from first-pass recognition. We investigated two approaches to adapt the RNNLMs. In the first approach the forward-propagating hidden activations are scaled, a method known as learning hidden unit contributions (LHUC). In the second approach we adapt all parameters of the RNNLM. We evaluated the adapted RNNLMs by reporting the WERs on multi-genre broadcast speech data. We observe small (on average 0.1% absolute) but significant improvements in WER compared to a strong unadapted RNNLM. Finally, we present the context enhancement of RNNLMs using prosody and syntactic features. The prosody features were computed from the acoustics of the context words, and the syntactic features were derived from the surface form of the words in the context. We trained the RNNLMs with word duration, pause duration, final phone duration, syllable duration, syllable F0, part-of-speech tag, and Combinatory Categorial Grammar (CCG) supertag features. The proposed context-enhanced RNNLMs were evaluated by reporting PPL and WER on two speech recognition tasks, Switchboard and TED lectures. We observed substantial improvements in PPL (5% to 15% relative) and small but significant improvements in WER (0.1% to 0.5% absolute).
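To make the first adaptation approach concrete, here is a minimal numpy sketch of LHUC-style scaling; the array shapes are invented, and although the amplitude parameterization 2·sigmoid(r) follows the published LHUC recipe, this is an illustration rather than the thesis's code:

    import numpy as np

    def lhuc_scale(hidden, r):
        """Learning Hidden Unit Contributions: each hidden activation is
        multiplied by an adaptable amplitude a = 2*sigmoid(r) in (0, 2).
        During adaptation only r is updated; all other weights stay frozen."""
        return (2.0 / (1.0 + np.exp(-r))) * hidden

    hidden = np.array([0.5, -0.2, 0.8])   # activations from a frozen RNNLM layer
    r = np.zeros_like(hidden)             # r = 0 gives amplitude 1: unadapted model
    print(lhuc_scale(hidden, r))          # [ 0.5 -0.2  0.8]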
More sources

Books on the topic "N-gram language models"

1

Voutilainen, Atro. Part-of-Speech Tagging. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0011.

Abstract
This article outlines recently used methods for designing part-of-speech taggers: computer programs that assign contextually appropriate grammatical descriptors to words in texts. It begins with a description of the general architecture and task setting, gives an overview of the history of tagging, and describes the central approaches to tagging. These approaches are: taggers based on handwritten local rules, taggers based on n-grams automatically derived from text corpora, taggers based on hidden Markov models, taggers using automatically generated symbolic language models derived with methods from machine learning, taggers based on handwritten global rules, and hybrid taggers, which combine the advantages of handwritten and automatically generated taggers. This article focuses on handwritten tagging rules. Well-tagged training corpora are a valuable resource for testing and improving language models, and the text corpus reminds the grammarian of any oversights while designing rules.
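For the n-gram/HMM family of taggers mentioned in this abstract, a compact Viterbi sketch shows the core computation; all probabilities below are invented toy values, whereas real taggers estimate them from tagged corpora:

    def viterbi(words, tags, p_start, p_trans, p_emit):
        """Most probable tag sequence under a bigram HMM tagger."""
        V = [{t: p_start[t] * p_emit[t].get(words[0], 1e-6) for t in tags}]
        back = []
        for w in words[1:]:
            col, ptr = {}, {}
            for t in tags:
                prev = max(tags, key=lambda s: V[-1][s] * p_trans[s][t])
                col[t] = V[-1][prev] * p_trans[prev][t] * p_emit[t].get(w, 1e-6)
                ptr[t] = prev
            V.append(col)
            back.append(ptr)
        path = [max(tags, key=lambda t: V[-1][t])]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return path[::-1]

    tags = ["DET", "NOUN", "VERB"]
    p_start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
    p_trans = {"DET": {"DET": 0.05, "NOUN": 0.9, "VERB": 0.05},
               "NOUN": {"DET": 0.1, "NOUN": 0.3, "VERB": 0.6},
               "VERB": {"DET": 0.5, "NOUN": 0.4, "VERB": 0.1}}
    p_emit = {"DET": {"the": 0.9},
              "NOUN": {"dog": 0.5, "barks": 0.1},
              "VERB": {"barks": 0.7}}
    print(viterbi(["the", "dog", "barks"], tags, p_start, p_trans, p_emit))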

Book chapters on the topic "N-gram language models"

1

Hecht, Robert, Jürgen Riedler, and Gerhard Backfried. "Fitting German into N-Gram Language Models". In Text, Speech and Dialogue, 341–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46154-x_49.

2

Popel, Martin, and David Mareček. "Perplexity of n-Gram and Dependency Language Models". In Text, Speech and Dialogue, 173–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15760-8_23.

3

Abdallah, Tarek Amr, and Beatriz de La Iglesia. "URL-Based Web Page Classification: With n-Gram Language Models". In Communications in Computer and Information Science, 19–33. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25840-9_2.

4

Peng, Fuchun, and Dale Schuurmans. "Combining Naive Bayes and n-Gram Language Models for Text Classification". In Lecture Notes in Computer Science, 335–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36618-0_24.

5

Varjokallio, Matti, Mikko Kurimo, and Sami Virpioja. "Class n-Gram Models for Very Large Vocabulary Speech Recognition of Finnish and Estonian". In Statistical Language and Speech Processing, 133–44. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45925-7_11.

6

Huang, Xiangji, Fuchun Peng, Aijun An, Dale Schuurmans, and Nick Cercone. "Session Boundary Detection for Association Rule Learning Using n-Gram Language Models". In Advances in Artificial Intelligence, 237–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44886-1_19.

7

Chang, Harry. "Enriching Domain-Specific Language Models Using Domain Independent WWW N-Gram Corpus". In Artificial Intelligence and Soft Computing, 38–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29350-4_5.

8

Aceves-Pérez, Rita M., Luis Villaseñor-Pineda, and Manuel Montes-y-Gómez. "Using N-Gram Models to Combine Query Translations in Cross-Language Question Answering". In Computational Linguistics and Intelligent Text Processing, 453–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11671299_47.

9

Pakoci, Edvin, and Branislav Popović. "Methods for Using Class Based N-gram Language Models in the Kaldi Toolkit". In Speech and Computer, 492–503. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87802-3_45.

10

Hamed, Injy, Mohamed Elmahdy, and Slim Abdennadher. "Expanding N-grams for Code-Switch Language Models". In Advances in Intelligent Systems and Computing, 221–29. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99010-1_20.


Conference papers on the topic "N-gram language models"

1

Hirsimaki, Teemu. "On Compressing N-Gram Language Models". In 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.367228.

2

Sak, Hasim, Cyril Allauzen, Kaisuke Nakajima, and Francoise Beaufays. "Mixture of mixture n-gram language models". In 2013 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2013. http://dx.doi.org/10.1109/asru.2013.6707701.

3

Chen, Mingqing, Ananda Theertha Suresh, Rajiv Mathews, Adeline Wong, Cyril Allauzen, Françoise Beaufays, and Michael Riley. "Federated Learning of N-Gram Language Models". In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/k19-1012.

4

Bickel, Steffen, Peter Haider, and Tobias Scheffer. "Predicting sentences using N-gram language models". In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP). Morristown, NJ, USA: Association for Computational Linguistics, 2005. http://dx.doi.org/10.3115/1220575.1220600.

5

Rastrow, Ariya, Abhinav Sethy, and Bhuvana Ramabhadran. "Constrained discriminative training of N-gram language models". In 2009 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2009. http://dx.doi.org/10.1109/asru.2009.5373338.

6

Huang, Ruizhe, Ke Li, Ashish Arora, Daniel Povey, and Sanjeev Khudanpur. "Efficient MDI Adaptation for n-Gram Language Models". In Interspeech 2020. ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-2909.

7

Kuznetsov, Vitaly, Hank Liao, Mehryar Mohri, Michael Riley, and Brian Roark. "Learning N-Gram Language Models from Uncertain Data". In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-1093.

8

Huang, Songfang, and Steve Renals. "Power law discounting for n-gram language models". In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5495007.

9

Bogoychev, Nikolay, and Adam Lopez. "N-gram language models for massively parallel devices". In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/p16-1183.

10

Wang, Song, Devin Chollak, Dana Movshovitz-Attias, and Lin Tan. "Bugram: bug detection with n-gram language models". In ASE'16: ACM/IEEE International Conference on Automated Software Engineering. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2970276.2970341.
