Journal articles on the topic 'N-gram language models'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'N-gram language models.'
Llorens, David, Juan Miguel Vilar, and Francisco Casacuberta. "Finite State Language Models Smoothed Using n-Grams." International Journal of Pattern Recognition and Artificial Intelligence 16, no. 03 (May 2002): 275–89. http://dx.doi.org/10.1142/s0218001402001666.
Memushaj, Alket, and Tarek M. Sobh. "Using Grapheme n-Grams in Spelling Correction and Augmentative Typing Systems." New Mathematics and Natural Computation 04, no. 01 (March 2008): 87–106. http://dx.doi.org/10.1142/s1793005708000970.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.
Takase, Sho, Jun Suzuki, and Masaaki Nagata. "Character n-Gram Embeddings to Improve RNN Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5074–82. http://dx.doi.org/10.1609/aaai.v33i01.33015074.
Santos, André L., Gonçalo Prendi, Hugo Sousa, and Ricardo Ribeiro. "Stepwise API usage assistance using n-gram language models." Journal of Systems and Software 131 (September 2017): 461–74. http://dx.doi.org/10.1016/j.jss.2016.06.063.
Nederhof, Mark-Jan. "A General Technique to Train Language Models on Language Models." Computational Linguistics 31, no. 2 (June 2005): 173–85. http://dx.doi.org/10.1162/0891201054223986.
Crego, Josep M., and François Yvon. "Factored bilingual n-gram language models for statistical machine translation." Machine Translation 24, no. 2 (June 2010): 159–75. http://dx.doi.org/10.1007/s10590-010-9082-5.
Lin, Jimmy, and W. John Wilbur. "Modeling actions of PubMed users with n-gram language models." Information Retrieval 12, no. 4 (September 12, 2008): 487–503. http://dx.doi.org/10.1007/s10791-008-9067-7.
Guo, Yuqing, Haifeng Wang, and Josef van Genabith. "Dependency-based n-gram models for general purpose sentence realisation." Natural Language Engineering 17, no. 4 (November 29, 2010): 455–83. http://dx.doi.org/10.1017/s1351324910000288.
Sennrich, Rico. "Modelling and Optimizing on Syntactic N-Grams for Statistical Machine Translation." Transactions of the Association for Computational Linguistics 3 (December 2015): 169–82. http://dx.doi.org/10.1162/tacl_a_00131.
Doval, Yerai, and Carlos Gómez-Rodríguez. "Comparing neural- and N-gram-based language models for word segmentation." Journal of the Association for Information Science and Technology 70, no. 2 (December 2, 2018): 187–97. http://dx.doi.org/10.1002/asi.24082.
Taranukha, V. "Ways to Improve N-Gram Language Models for OCR and Speech Recognition of Slavic Languages." Advanced Science Journal 2014, no. 4 (March 31, 2014): 65–69. http://dx.doi.org/10.15550/asj.2014.04.065.
Long, Qiang, Wei Wang, Jinfu Deng, Song Liu, Wenhao Huang, Fangying Chen, and Sifan Liu. "A distributed system for large-scale n-gram language models at Tencent." Proceedings of the VLDB Endowment 12, no. 12 (August 2019): 2206–17. http://dx.doi.org/10.14778/3352063.3352136.
Wang, Rui, Masao Utiyama, Isao Goto, Eiichiro Sumita, Hai Zhao, and Bao-Liang Lu. "Converting Continuous-Space Language Models into N-gram Language Models with Efficient Bilingual Pruning for Statistical Machine Translation." ACM Transactions on Asian and Low-Resource Language Information Processing 15, no. 3 (March 8, 2016): 1–26. http://dx.doi.org/10.1145/2843942.
Huang, Fei, Arun Ahuja, Doug Downey, Yi Yang, Yuhong Guo, and Alexander Yates. "Learning Representations for Weakly Supervised Natural Language Processing Tasks." Computational Linguistics 40, no. 1 (March 2014): 85–120. http://dx.doi.org/10.1162/coli_a_00167.
Xiong, Deyi, and Min Zhang. "Backward and trigger-based language models for statistical machine translation." Natural Language Engineering 21, no. 2 (July 24, 2013): 201–26. http://dx.doi.org/10.1017/s1351324913000168.
Schütze, Hinrich, and Michael Walsh. "Half-Context Language Models." Computational Linguistics 37, no. 4 (December 2011): 843–65. http://dx.doi.org/10.1162/coli_a_00078.
Rahman, Md Riazur, Md Tarek Habib, Md Sadekur Rahman, Gazi Zahirul Islam, and Md Abbas Ali Khan. "An exploratory research on grammar checking of Bangla sentences using statistical language models." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 3 (June 1, 2020): 3244. http://dx.doi.org/10.11591/ijece.v10i3.pp3244-3252.
Nowakowski, Karol, Michal Ptaszynski, and Fumito Masui. "MiNgMatch—A Fast N-gram Model for Word Segmentation of the Ainu Language." Information 10, no. 10 (October 16, 2019): 317. http://dx.doi.org/10.3390/info10100317.
Bertolami, Roman, and Horst Bunke. "Integration of n-Gram Language Models in Multiple Classifier Systems for Offline Handwritten Text Line Recognition." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 07 (November 2008): 1301–21. http://dx.doi.org/10.1142/s0218001408006855.
Masumura, Ryo, Taichi Asami, Takanobu Oba, Hirokazu Masataki, Sumitaka Sakauchi, and Satoshi Takahashi. "N-gram Approximation of Latent Words Language Models for Domain Robust Automatic Speech Recognition." IEICE Transactions on Information and Systems E99.D, no. 10 (2016): 2462–70. http://dx.doi.org/10.1587/transinf.2016slp0014.
Shahrivari, Saeed, Saeed Rahmani, and Hooman Keshavarz. "Automatic Tagging of Persian Web Pages Based on N-Gram Language Models Using MapReduce." ICTACT Journal on Soft Computing 05, no. 04 (July 1, 2015): 1003–8. http://dx.doi.org/10.21917/ijsc.2015.0140.
Dorado, Rubén. "Statistical models for languaje representation." Revista Ontare 1, no. 1 (September 16, 2015): 29. http://dx.doi.org/10.21158/23823399.v1.n1.2013.1208.
Baltescu, Paul, Phil Blunsom, and Hieu Hoang. "OxLM: A Neural Language Modelling Framework for Machine Translation." Prague Bulletin of Mathematical Linguistics 102, no. 1 (September 11, 2014): 81–92. http://dx.doi.org/10.2478/pralin-2014-0016.
Pelemans, Joris, Noam Shazeer, and Ciprian Chelba. "Sparse Non-negative Matrix Language Modeling." Transactions of the Association for Computational Linguistics 4 (December 2016): 329–42. http://dx.doi.org/10.1162/tacl_a_00102.
Zitouni, Imed. "Backoff hierarchical class n-gram language models: effectiveness to model unseen events in speech recognition." Computer Speech & Language 21, no. 1 (January 2007): 88–104. http://dx.doi.org/10.1016/j.csl.2006.01.001.
Bessou, Sadik, and Racha Sari. "Efficient Discrimination between Arabic Dialects." Recent Advances in Computer Science and Communications 13, no. 4 (October 19, 2020): 725–30. http://dx.doi.org/10.2174/2213275912666190716115604.
Takahashi, Shuntaro, and Kumiko Tanaka-Ishii. "Evaluating Computational Language Models with Scaling Properties of Natural Language." Computational Linguistics 45, no. 3 (September 2019): 481–513. http://dx.doi.org/10.1162/coli_a_00355.
Arthur O. Santos, Flávio, Thiago Dias Bispo, Hendrik Teixeira Macedo, and Cleber Zanchettin. "Morphological Skip-Gram: Replacing FastText characters n-gram with morphological knowledge." Inteligencia Artificial 24, no. 67 (February 20, 2021): 1–17. http://dx.doi.org/10.4114/intartif.vol24iss67pp1-17.
Wang, Xiaolong, Daniel S. Yeung, James N. K. Liu, Robert Luk, and Xuan Wang. "A Hybrid Language Model Based on Statistics and Linguistic Rules." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 01 (February 2005): 109–28. http://dx.doi.org/10.1142/s0218001405003934.
GuoDong, Z., and L. KimTeng. "Interpolation of n-gram and mutual-information based trigger pair language models for Mandarin speech recognition." Computer Speech & Language 13, no. 2 (April 1999): 125–41. http://dx.doi.org/10.1006/csla.1998.0118.
Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomas Mikolov. "Enriching Word Vectors with Subword Information." Transactions of the Association for Computational Linguistics 5 (December 2017): 135–46. http://dx.doi.org/10.1162/tacl_a_00051.
Tachbelie, Martha Yifiru, Solomon Teferra Abate, and Wolfgang Menzel. "Using morphemes in language modeling and automatic speech recognition of Amharic." Natural Language Engineering 20, no. 2 (December 12, 2012): 235–59. http://dx.doi.org/10.1017/s1351324912000356.
Flor, Michael. "A fast and flexible architecture for very large word n-gram datasets." Natural Language Engineering 19, no. 1 (January 10, 2012): 61–93. http://dx.doi.org/10.1017/s1351324911000349.
Dorado, Ruben. "Smoothing methods for the treatment of digital texts." Revista Ontare 2, no. 1 (September 17, 2015): 42. http://dx.doi.org/10.21158/23823399.v2.n1.2014.1234.
Chang, Harry M. "Constructing n-gram rules for natural language models through exploring the limitation of the Zipf–Mandelbrot law." Computing 91, no. 3 (October 2, 2010): 241–64. http://dx.doi.org/10.1007/s00607-010-0116-x.
Singh, Umrinderpal. "A Comparison of Phrase Based and Word based Language Model for Punjabi." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 30, 2017): 444. http://dx.doi.org/10.23956/ijarcsse/v7i7/0232.
Smywinski-Pohl, Aleksander, and Bartosz Ziółko. "Application of Morphosyntactic and Class-Based Language Models in Automatic Speech Recognition of Polish." International Journal on Artificial Intelligence Tools 25, no. 02 (April 2016): 1650006. http://dx.doi.org/10.1142/s0218213016500068.
Maučec, Mirjam Sepesy, Tomaž Rotovnik, Zdravko Kačič, and Janez Brest. "Using Data-Driven Subword Units in Language Model of Highly Inflective Slovenian Language." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 02 (March 2009): 287–312. http://dx.doi.org/10.1142/s0218001409007119.
Tremblay, Antoine, Elissa Asp, Anne Johnson, Malgorzata Zarzycka Migdal, Tim Bardouille, and Aaron J. Newman. "What the Networks Tell us about Serial and Parallel Processing." Mental Lexicon 11, no. 1 (June 7, 2016): 115–60. http://dx.doi.org/10.1075/ml.11.1.06tre.
Castro, Dayvid W., Ellen Souza, Douglas Vitório, Diego Santos, and Adriano L. I. Oliveira. "Smoothed n-gram based models for tweet language identification: A case study of the Brazilian and European Portuguese national varieties." Applied Soft Computing 61 (December 2017): 1160–72. http://dx.doi.org/10.1016/j.asoc.2017.05.065.
Xia, Yu Guo, and Ming Liang Gu. "Ensemble Learning Approach with Application to Chinese Dialect Identification." Applied Mechanics and Materials 333-335 (July 2013): 769–74. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.769.
Eyamin, Md Iftakher Alam, Md Tarek Habib, Muhammad Ifte Khairul Islam, Md Sadekur Rahman, and Md Abbas Ali Khan. "An investigative design of optimum stochastic language model for bangla autocomplete." Indonesian Journal of Electrical Engineering and Computer Science 13, no. 2 (February 1, 2019): 671. http://dx.doi.org/10.11591/ijeecs.v13.i2.pp671-676.
Zhang, Lipeng, Peng Zhang, Xindian Ma, Shuqin Gu, Zhan Su, and Dawei Song. "A Generalized Language Model in Tensor Space." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7450–58. http://dx.doi.org/10.1609/aaai.v33i01.33017450.
Futrell, Richard, Adam Albright, Peter Graff, and Timothy J. O’Donnell. "A Generative Model of Phonotactics." Transactions of the Association for Computational Linguistics 5 (December 2017): 73–86. http://dx.doi.org/10.1162/tacl_a_00047.
Pakoci, Edvin, Branislav Popović, and Darko Pekar. "Using Morphological Data in Language Modeling for Serbian Large Vocabulary Speech Recognition." Computational Intelligence and Neuroscience 2019 (March 3, 2019): 1–8. http://dx.doi.org/10.1155/2019/5072918.
Pino, Juan, Aurelien Waite, and William Byrne. "Simple and Efficient Model Filtering in Statistical Machine Translation." Prague Bulletin of Mathematical Linguistics 98, no. 1 (October 1, 2012): 5–24. http://dx.doi.org/10.2478/v10108-012-0005-x.
Stolcke, Andreas, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. "Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech." Computational Linguistics 26, no. 3 (September 2000): 339–73. http://dx.doi.org/10.1162/089120100561737.
Boudia, Mohamed Amine, Reda Mohamed Hamou, and Abdelmalek Amine. "A New Meta-Heuristic based on Human Renal Function for Detection and Filtering of SPAM." International Journal of Information Security and Privacy 9, no. 4 (October 2015): 26–58. http://dx.doi.org/10.4018/ijisp.2015100102.