Journal articles on the topic "N-gram language models"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "N-gram language models".
Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.
LLORENS, DAVID, JUAN MIGUEL VILAR, and FRANCISCO CASACUBERTA. "FINITE STATE LANGUAGE MODELS SMOOTHED USING n-GRAMS". International Journal of Pattern Recognition and Artificial Intelligence 16, no. 03 (May 2002): 275–89. http://dx.doi.org/10.1142/s0218001402001666.
MEMUSHAJ, ALKET, and TAREK M. SOBH. "USING GRAPHEME n-GRAMS IN SPELLING CORRECTION AND AUGMENTATIVE TYPING SYSTEMS". New Mathematics and Natural Computation 04, no. 01 (March 2008): 87–106. http://dx.doi.org/10.1142/s1793005708000970.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models". International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models". International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.10016827.
Takase, Sho, Jun Suzuki, and Masaaki Nagata. "Character n-Gram Embeddings to Improve RNN Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5074–82. http://dx.doi.org/10.1609/aaai.v33i01.33015074.
Santos, André L., Gonçalo Prendi, Hugo Sousa, and Ricardo Ribeiro. "Stepwise API usage assistance using n-gram language models". Journal of Systems and Software 131 (September 2017): 461–74. http://dx.doi.org/10.1016/j.jss.2016.06.063.
Nederhof, Mark-Jan. "A General Technique to Train Language Models on Language Models". Computational Linguistics 31, no. 2 (June 2005): 173–85. http://dx.doi.org/10.1162/0891201054223986.
Crego, Josep M., and François Yvon. "Factored bilingual n-gram language models for statistical machine translation". Machine Translation 24, no. 2 (June 2010): 159–75. http://dx.doi.org/10.1007/s10590-010-9082-5.
Lin, Jimmy, and W. John Wilbur. "Modeling actions of PubMed users with n-gram language models". Information Retrieval 12, no. 4 (September 12, 2008): 487–503. http://dx.doi.org/10.1007/s10791-008-9067-7.
GUO, YUQING, HAIFENG WANG, and JOSEF VAN GENABITH. "Dependency-based n-gram models for general purpose sentence realisation". Natural Language Engineering 17, no. 4 (November 29, 2010): 455–83. http://dx.doi.org/10.1017/s1351324910000288.
Sennrich, Rico. "Modelling and Optimizing on Syntactic N-Grams for Statistical Machine Translation". Transactions of the Association for Computational Linguistics 3 (December 2015): 169–82. http://dx.doi.org/10.1162/tacl_a_00131.
Doval, Yerai, and Carlos Gómez-Rodríguez. "Comparing neural- and N-gram-based language models for word segmentation". Journal of the Association for Information Science and Technology 70, no. 2 (December 2, 2018): 187–97. http://dx.doi.org/10.1002/asi.24082.
Taranukha, V. "Ways to Improve N-Gram Language Models for OCR and Speech Recognition of Slavic Languages". Advanced Science Journal 2014, no. 4 (March 31, 2014): 65–69. http://dx.doi.org/10.15550/asj.2014.04.065.
Long, Qiang, Wei Wang, Jinfu Deng, Song Liu, Wenhao Huang, Fangying Chen, and Sifan Liu. "A distributed system for large-scale n-gram language models at Tencent". Proceedings of the VLDB Endowment 12, no. 12 (August 2019): 2206–17. http://dx.doi.org/10.14778/3352063.3352136.
Wang, Rui, Masao Utiyama, Isao Goto, Eiichiro Sumita, Hai Zhao, and Bao-Liang Lu. "Converting Continuous-Space Language Models into N-gram Language Models with Efficient Bilingual Pruning for Statistical Machine Translation". ACM Transactions on Asian and Low-Resource Language Information Processing 15, no. 3 (March 8, 2016): 1–26. http://dx.doi.org/10.1145/2843942.
Huang, Fei, Arun Ahuja, Doug Downey, Yi Yang, Yuhong Guo, and Alexander Yates. "Learning Representations for Weakly Supervised Natural Language Processing Tasks". Computational Linguistics 40, no. 1 (March 2014): 85–120. http://dx.doi.org/10.1162/coli_a_00167.
XIONG, DEYI, and MIN ZHANG. "Backward and trigger-based language models for statistical machine translation". Natural Language Engineering 21, no. 2 (July 24, 2013): 201–26. http://dx.doi.org/10.1017/s1351324913000168.
Schütze, Hinrich, and Michael Walsh. "Half-Context Language Models". Computational Linguistics 37, no. 4 (December 2011): 843–65. http://dx.doi.org/10.1162/coli_a_00078.
Rahman, M. D. Riazur, M. D. Tarek Habib, M. D. Sadekur Rahman, Gazi Zahirul Islam, and M. D. Abbas Ali Khan. "An exploratory research on grammar checking of Bangla sentences using statistical language models". International Journal of Electrical and Computer Engineering (IJECE) 10, no. 3 (June 1, 2020): 3244. http://dx.doi.org/10.11591/ijece.v10i3.pp3244-3252.
Nowakowski, Karol, Michal Ptaszynski, and Fumito Masui. "MiNgMatch—A Fast N-gram Model for Word Segmentation of the Ainu Language". Information 10, no. 10 (October 16, 2019): 317. http://dx.doi.org/10.3390/info10100317.
BERTOLAMI, ROMAN, and HORST BUNKE. "INTEGRATION OF n-GRAM LANGUAGE MODELS IN MULTIPLE CLASSIFIER SYSTEMS FOR OFFLINE HANDWRITTEN TEXT LINE RECOGNITION". International Journal of Pattern Recognition and Artificial Intelligence 22, no. 07 (November 2008): 1301–21. http://dx.doi.org/10.1142/s0218001408006855.
MASUMURA, Ryo, Taichi ASAMI, Takanobu OBA, Hirokazu MASATAKI, Sumitaka SAKAUCHI, and Satoshi TAKAHASHI. "N-gram Approximation of Latent Words Language Models for Domain Robust Automatic Speech Recognition". IEICE Transactions on Information and Systems E99.D, no. 10 (2016): 2462–70. http://dx.doi.org/10.1587/transinf.2016slp0014.
Shahrivari, Saeed, Saeed Rahmani, and Hooman Keshavarz. "AUTOMATIC TAGGING OF PERSIAN WEB PAGES BASED ON N-GRAM LANGUAGE MODELS USING MAPREDUCE". ICTACT Journal on Soft Computing 05, no. 04 (July 1, 2015): 1003–8. http://dx.doi.org/10.21917/ijsc.2015.0140.
Dorado, Rubén. "Statistical models for languaje representation". Revista Ontare 1, no. 1 (September 16, 2015): 29. http://dx.doi.org/10.21158/23823399.v1.n1.2013.1208.
Paul, Baltescu, Blunsom Phil, and Hoang Hieu. "OxLM: A Neural Language Modelling Framework for Machine Translation". Prague Bulletin of Mathematical Linguistics 102, no. 1 (September 11, 2014): 81–92. http://dx.doi.org/10.2478/pralin-2014-0016.
Pelemans, Joris, Noam Shazeer, and Ciprian Chelba. "Sparse Non-negative Matrix Language Modeling". Transactions of the Association for Computational Linguistics 4 (December 2016): 329–42. http://dx.doi.org/10.1162/tacl_a_00102.
Zitouni, Imed. "Backoff hierarchical class n-gram language models: effectiveness to model unseen events in speech recognition". Computer Speech & Language 21, no. 1 (January 2007): 88–104. http://dx.doi.org/10.1016/j.csl.2006.01.001.
Bessou, Sadik, and Racha Sari. "Efficient Discrimination between Arabic Dialects". Recent Advances in Computer Science and Communications 13, no. 4 (October 19, 2020): 725–30. http://dx.doi.org/10.2174/2213275912666190716115604.
Takahashi, Shuntaro, and Kumiko Tanaka-Ishii. "Evaluating Computational Language Models with Scaling Properties of Natural Language". Computational Linguistics 45, no. 3 (September 2019): 481–513. http://dx.doi.org/10.1162/coli_a_00355.
Arthur O. Santos, Flávio, Thiago Dias Bispo, Hendrik Teixeira Macedo, and Cleber Zanchettin. "Morphological Skip-Gram: Replacing FastText characters n-gram with morphological knowledge". Inteligencia Artificial 24, no. 67 (February 20, 2021): 1–17. http://dx.doi.org/10.4114/intartif.vol24iss67pp1-17.
WANG, XIAOLONG, DANIEL S. YEUNG, JAMES N. K. LIU, ROBERT LUK, and XUAN WANG. "A HYBRID LANGUAGE MODEL BASED ON STATISTICS AND LINGUISTIC RULES". International Journal of Pattern Recognition and Artificial Intelligence 19, no. 01 (February 2005): 109–28. http://dx.doi.org/10.1142/s0218001405003934.
GuoDong, Z., and L. KimTeng. "Interpolation of n-gram and mutual-information based trigger pair language models for Mandarin speech recognition". Computer Speech & Language 13, no. 2 (April 1999): 125–41. http://dx.doi.org/10.1006/csla.1998.0118.
Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomas Mikolov. "Enriching Word Vectors with Subword Information". Transactions of the Association for Computational Linguistics 5 (December 2017): 135–46. http://dx.doi.org/10.1162/tacl_a_00051.
TACHBELIE, MARTHA YIFIRU, SOLOMON TEFERRA ABATE, and WOLFGANG MENZEL. "Using morphemes in language modeling and automatic speech recognition of Amharic". Natural Language Engineering 20, no. 2 (December 12, 2012): 235–59. http://dx.doi.org/10.1017/s1351324912000356.
FLOR, MICHAEL. "A fast and flexible architecture for very large word n-gram datasets". Natural Language Engineering 19, no. 1 (January 10, 2012): 61–93. http://dx.doi.org/10.1017/s1351324911000349.
Dorado, Ruben. "Smoothing methods for the treatment of digital texts". Revista Ontare 2, no. 1 (September 17, 2015): 42. http://dx.doi.org/10.21158/23823399.v2.n1.2014.1234.
Chang, Harry M. "Constructing n-gram rules for natural language models through exploring the limitation of the Zipf–Mandelbrot law". Computing 91, no. 3 (October 2, 2010): 241–64. http://dx.doi.org/10.1007/s00607-010-0116-x.
Singh, Umrinderpal. "A Comparison of Phrase Based and Word based Language Model for Punjabi". International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 30, 2017): 444. http://dx.doi.org/10.23956/ijarcsse/v7i7/0232.
Smywinski-Pohl, Aleksander, and Bartosz Ziółko. "Application of Morphosyntactic and Class-Based Language Models in Automatic Speech Recognition of Polish". International Journal on Artificial Intelligence Tools 25, no. 02 (April 2016): 1650006. http://dx.doi.org/10.1142/s0218213016500068.
MAUČEC, MIRJAM SEPESY, TOMAŽ ROTOVNIK, ZDRAVKO KAČIČ, and JANEZ BREST. "USING DATA-DRIVEN SUBWORD UNITS IN LANGUAGE MODEL OF HIGHLY INFLECTIVE SLOVENIAN LANGUAGE". International Journal of Pattern Recognition and Artificial Intelligence 23, no. 02 (March 2009): 287–312. http://dx.doi.org/10.1142/s0218001409007119.
Tremblay, Antoine, Elissa Asp, Anne Johnson, Malgorzata Zarzycka Migdal, Tim Bardouille, and Aaron J. Newman. "What the Networks Tell us about Serial and Parallel Processing". Mental Lexicon 11, no. 1 (June 7, 2016): 115–60. http://dx.doi.org/10.1075/ml.11.1.06tre.
Castro, Dayvid W., Ellen Souza, Douglas Vitório, Diego Santos, and Adriano L. I. Oliveira. "Smoothed n-gram based models for tweet language identification: A case study of the Brazilian and European Portuguese national varieties". Applied Soft Computing 61 (December 2017): 1160–72. http://dx.doi.org/10.1016/j.asoc.2017.05.065.
Xia, Yu Guo, and Ming Liang Gu. "Ensemble Learning Approach with Application to Chinese Dialect Identification". Applied Mechanics and Materials 333-335 (July 2013): 769–74. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.769.
Eyamin, Md Iftakher Alam, Md Tarek Habib, Muhammad Ifte Khairul Islam, Md Sadekur Rahman, and Md Abbas Ali Khan. "An investigative design of optimum stochastic language model for bangla autocomplete". Indonesian Journal of Electrical Engineering and Computer Science 13, no. 2 (February 1, 2019): 671. http://dx.doi.org/10.11591/ijeecs.v13.i2.pp671-676.
Zhang, Lipeng, Peng Zhang, Xindian Ma, Shuqin Gu, Zhan Su, and Dawei Song. "A Generalized Language Model in Tensor Space". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7450–58. http://dx.doi.org/10.1609/aaai.v33i01.33017450.
Futrell, Richard, Adam Albright, Peter Graff, and Timothy J. O'Donnell. "A Generative Model of Phonotactics". Transactions of the Association for Computational Linguistics 5 (December 2017): 73–86. http://dx.doi.org/10.1162/tacl_a_00047.
Pakoci, Edvin, Branislav Popović, and Darko Pekar. "Using Morphological Data in Language Modeling for Serbian Large Vocabulary Speech Recognition". Computational Intelligence and Neuroscience 2019 (March 3, 2019): 1–8. http://dx.doi.org/10.1155/2019/5072918.
Pino, Juan, Aurelien Waite, and William Byrne. "Simple and Efficient Model Filtering in Statistical Machine Translation". Prague Bulletin of Mathematical Linguistics 98, no. 1 (October 1, 2012): 5–24. http://dx.doi.org/10.2478/v10108-012-0005-x.
Stolcke, Andreas, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. "Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech". Computational Linguistics 26, no. 3 (September 2000): 339–73. http://dx.doi.org/10.1162/089120100561737.
Boudia, Mohamed Amine, Reda Mohamed Hamou, and Abdelmalek Amine. "A New Meta-Heuristic based on Human Renal Function for Detection and Filtering of SPAM". International Journal of Information Security and Privacy 9, no. 4 (October 2015): 26–58. http://dx.doi.org/10.4018/ijisp.2015100102.