A selection of scholarly literature on the topic "Allocation de Dirichlet"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Allocation de Dirichlet".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Allocation de Dirichlet"

1. Du, Lan, Wray Buntine, Huidong Jin, and Changyou Chen. "Sequential latent Dirichlet allocation." Knowledge and Information Systems 31, no. 3 (2011): 475–503. http://dx.doi.org/10.1007/s10115-011-0425-1.

2. Schwarz, Carlo. "Ldagibbs: A Command for Topic Modeling in Stata Using Latent Dirichlet Allocation." Stata Journal: Promoting communications on statistics and Stata 18, no. 1 (2018): 101–17. http://dx.doi.org/10.1177/1536867x1801800107.
Abstract: In this article, I introduce the ldagibbs command, which implements latent Dirichlet allocation in Stata. Latent Dirichlet allocation is the most popular machine-learning topic model. Topic models automatically cluster text documents into a user-chosen number of topics. Latent Dirichlet allocation represents each document as a probability distribution over topics and represents each topic as a probability distribution over words. Therefore, latent Dirichlet allocation provides a way to analyze the content of large unclassified text data and an alternative to predefined document classifications …

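The abstract above describes LDA's core representation: each document is a distribution over topics, and each topic is a distribution over words. As a minimal illustrative sketch of this generative process (not the ldagibbs implementation; the corpus sizes and hyperparameters below are made-up assumptions), in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

n_topics, vocab_size, n_docs, doc_len = 3, 50, 5, 20
alpha = np.full(n_topics, 0.1)    # document-topic Dirichlet prior
beta = np.full(vocab_size, 0.01)  # topic-word Dirichlet prior

phi = rng.dirichlet(beta, size=n_topics)   # one word distribution per topic
theta = rng.dirichlet(alpha, size=n_docs)  # one topic distribution per document

docs = []
for d in range(n_docs):
    z = rng.choice(n_topics, size=doc_len, p=theta[d])  # topic for each word slot
    w = [rng.choice(vocab_size, p=phi[k]) for k in z]   # word drawn from that topic
    docs.append(w)

print(len(docs), len(docs[0]))  # 5 20
```

Inference in LDA runs this process in reverse: given only the words `docs`, it recovers estimates of `theta` and `phi` (e.g. by collapsed Gibbs sampling, as in the ldagibbs command).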
3. Yoshida, Takahiro, Ryohei Hisano, and Takaaki Ohnishi. "Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back." PLOS ONE 18, no. 7 (2023): e0288274. http://dx.doi.org/10.1371/journal.pone.0288274.
Abstract: Topic models are widely used to discover the latent representation of a set of documents. The two canonical models are latent Dirichlet allocation and Gaussian latent Dirichlet allocation: the former uses multinomial distributions over words, while the latter uses multivariate Gaussian distributions over pre-trained word embedding vectors as the latent topic representations. Compared with latent Dirichlet allocation, Gaussian latent Dirichlet allocation is limited in the sense that it does not capture the polysemy of a word such as "bank." In this paper, we show that Gaussian …

4. Archambeau, Cedric, Balaji Lakshminarayanan, and Guillaume Bouchard. "Latent IBP Compound Dirichlet Allocation." IEEE Transactions on Pattern Analysis and Machine Intelligence 37, no. 2 (2015): 321–33. http://dx.doi.org/10.1109/tpami.2014.2313122.

5. Pion-Tonachini, Luca, Scott Makeig, and Ken Kreutz-Delgado. "Crowd labeling latent Dirichlet allocation." Knowledge and Information Systems 53, no. 3 (2017): 749–65. http://dx.doi.org/10.1007/s10115-017-1053-1.

6. S.S., Ramyadharshni, and Pabitha Dr.P. "Topic Categorization on Social Network Using Latent Dirichlet Allocation." Bonfring International Journal of Software Engineering and Soft Computing 8, no. 2 (2018): 16–20. http://dx.doi.org/10.9756/bijsesc.8390.

7. Li, Gen, and Hazri Jamil. "Teacher professional learning community and interdisciplinary collaborative teaching path under the informationization basic education model." Yugoslav Journal of Operations Research, no. 00 (2024): 29. http://dx.doi.org/10.2298/yjor2403029l.
Abstract: The construction of a learning community cannot be separated from the participation of information technology. The current teacher learning community has problems of low interaction efficiency and insufficient enthusiasm for group cooperative teaching. This study adopts the latent Dirichlet allocation method to process text data generated by teacher interaction, drawing on the evolution of knowledge topics in the learning community network space. At the same time, the interaction data of the network community learning space is used to extract the interaction characteristics between teachers, and a co…

8. Syed, Shaheen, and Marco Spruit. "Exploring Symmetrical and Asymmetrical Dirichlet Priors for Latent Dirichlet Allocation." International Journal of Semantic Computing 12, no. 03 (2018): 399–423. http://dx.doi.org/10.1142/s1793351x18400184.
Abstract: Latent Dirichlet allocation (LDA) has gained much attention from researchers and is increasingly being applied to uncover underlying semantic structures from a variety of corpora. However, nearly all researchers use symmetrical Dirichlet priors, often unaware of the practical implications that they bear. This research is the first to explore symmetrical and asymmetrical Dirichlet priors on topic coherence and human topic ranking when uncovering latent semantic structures from scientific research articles. More specifically, we examine the practical effects of several classes of Dirichlet …

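The symmetric-versus-asymmetric contrast studied in the paper above can be illustrated with a small NumPy sketch (the number of topics and the concentration values are illustrative assumptions): a symmetric prior gives every topic the same expected share of a document, while an asymmetric prior skews the expected topic proportions a priori.

```python
import numpy as np

rng = np.random.default_rng(42)
n_topics, n_draws = 5, 10_000

sym = np.full(n_topics, 0.5)                  # same concentration for every topic
asym = np.array([2.0, 1.0, 0.5, 0.25, 0.1])   # a-priori skew toward the first topics

# Monte Carlo estimate of the expected topic proportions under each prior;
# analytically these are alpha_k / sum(alpha).
sym_mean = rng.dirichlet(sym, size=n_draws).mean(axis=0)
asym_mean = rng.dirichlet(asym, size=n_draws).mean(axis=0)

print(sym_mean.round(2))   # roughly uniform: each topic near 0.2
print(asym_mean.round(2))  # decreasing shares, dominated by the first topic
```

This is why the choice of prior is not neutral: an asymmetric document-topic prior lets a few topics absorb most probability mass, which is the practical effect the paper examines.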
9. Garg, Mohit, and Priya Rangra. "Bibliometric Analysis of Latent Dirichlet Allocation." DESIDOC Journal of Library & Information Technology 42, no. 2 (2022): 105–13. http://dx.doi.org/10.14429/djlit.42.2.17307.
Abstract: Latent Dirichlet allocation (LDA) has emerged as an important algorithm in big data analysis that finds groups of topics in text data. It posits that each text document consists of a group of topics, and each topic is a mixture of words related to it. With the emergence of a plethora of text data, LDA has become a popular algorithm for topic modeling among researchers from different domains. Therefore, it is essential to understand the trends of LDA research. Bibliometric techniques are established methods to study the research progress of a topic. In this study, bibliographic data …

10. Chauhan, Uttam, and Apurva Shah. "Topic Modeling Using Latent Dirichlet allocation." ACM Computing Surveys 54, no. 7 (2022): 1–35. http://dx.doi.org/10.1145/3462478.
Abstract: We are not able to deal with a mammoth text corpus without summarizing it into a relatively small subset. A computational tool is sorely needed to understand such a gigantic pool of text. Probabilistic topic modeling discovers and explains an enormous collection of documents by reducing it to a topical subspace. In this work, we study the background and advancement of topic modeling techniques. We first introduce the preliminaries of topic modeling techniques and review their extensions and variations, such as topic modeling over various domains, hierarchical topic modeling, word embedding …


Dissertations on the topic "Allocation de Dirichlet"

1. Ponweiser, Martin. "Latent Dirichlet Allocation in R." WU Vienna University of Economics and Business, 2012. http://epub.wu.ac.at/3558/1/main.pdf.
Abstract: Topic models are a new research field within the computer-science areas of information retrieval and text mining. They are generative probabilistic models of text corpora inferred by machine learning, and they can be used for retrieval and text mining tasks. The most prominent topic model is latent Dirichlet allocation (LDA), which was introduced in 2003 by Blei et al. and has since sparked the development of other topic models for domain-specific purposes. This thesis focuses on LDA's practical application. Its main goal is the replication of the data analyses from the 2004 LDA paper "Finding …

2. Arnekvist, Isac, and Ludvig Ericson. "Finding competitors using Latent Dirichlet Allocation." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186386.
Abstract: Identifying business competitors is of interest to many, but is becoming increasingly hard in an expanding global market. The aim of this report is to investigate whether latent Dirichlet allocation (LDA) can be used to identify and rank competitors based on distances between LDA representations of company descriptions. The performance of the LDA model was compared to that of bag-of-words and random ordering by evaluating and then comparing them on a handful of common information retrieval metrics. Several different distance metrics were evaluated to determine which metric had the best correspondence …

3. Choubey, Rahul. "Tag recommendation using Latent Dirichlet Allocation." Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/9785.
Abstract: Master of Science, Department of Computing and Information Sciences, Doina Caragea. The vast amount of data present on the internet calls for ways to label and organize this data according to specific categories, in order to facilitate search and browsing activities. This can be easily accomplished by making use of folksonomies and user-provided tags. However, it can be difficult for users to provide meaningful tags. Tag recommendation systems can guide users towards informative tags for online resources such as websites, pictures, etc. The aim of this thesis is to build a system …

4. Risch, Johan. "Detecting Twitter topics using Latent Dirichlet Allocation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-277260.
Abstract: Latent Dirichlet allocation is evaluated for its suitability for detecting topics in a stream of short messages limited to 140 characters. This is done by assessing its ability to model the incoming messages and its ability to classify previously unseen messages with known topics. The evaluation shows that the model can be suitable for certain applications in topic detection when the stream size is small enough. Furthermore, suggestions on how to handle larger streams are outlined.

5. Liu, Zelong. "High performance latent dirichlet allocation for text mining." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/7726.
Abstract: Latent Dirichlet allocation (LDA), a total-probability generative model, is a three-tier Bayesian model. LDA computes the latent topic structure of the data and obtains the significant information of documents. However, traditional LDA has several limitations in practical applications. LDA cannot be directly used in classification because it is an unsupervised learning model; it needs to be embedded into appropriate classification algorithms. As a generative model, LDA normally generates latent topics in categories to which the target documents do not belong, producing the deviation …

6. Kulhanek, Raymond Daniel. "A Latent Dirichlet Allocation/N-gram Composite Language Model." Wright State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=wright1379520876.

7. Anaya, Leticia H. "Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc103284/.
Abstract: In the Information Age, a proliferation of unstructured electronic text documents exists. Processing these documents by humans is a daunting task, as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed. Latent semantic analysis (LSA) and latent Dirichlet allocation (LDA) are two text data computer algorithms that have received much attention individually in the text data literature for topic extraction studies, but not for document classification nor for …

8. Jaradat, Shatha. "OLLDA: Dynamic and Scalable Topic Modelling for Twitter: An Online Supervised Latent Dirichlet Allocation Algorithm." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177535.
Abstract: Providing high-quality topic inference in today's large and dynamic corpora, such as Twitter, is a challenging task. This is especially challenging taking into account that the content in this environment contains short texts and many abbreviations. This project proposes an improvement of a popular online topic modelling algorithm for latent Dirichlet allocation (LDA), incorporating supervision to make it suitable for the Twitter context. This improvement is motivated by the need for a single algorithm that achieves both objectives: analyzing huge amounts of documents, including new documents …

9. Yalamanchili, Hima Bindu. "A Novel Approach For Cancer Characterization Using Latent Dirichlet Allocation and Disease-Specific Genomic Analysis." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1527600876174758.

10. Sheikha, Hassan. "Text mining Twitter social media for Covid-19: Comparing latent semantic analysis and latent Dirichlet allocation." Thesis, Högskolan i Gävle, Avdelningen för datavetenskap och samhällsbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-32567.
Abstract: In this thesis, the Twitter social media platform is data-mined for information about the Covid-19 outbreak during the month of March, from the 3rd to the 31st. 100,000 tweets were collected from Harvard's open-source data and recreated using Hydrate. The data is analyzed further with different natural language processing (NLP) methodologies, such as term frequency-inverse document frequency (TF-IDF), lemmatizing, tokenizing, latent semantic analysis (LSA) and latent Dirichlet allocation (LDA). Furthermore, the results of the LSA and LDA algorithms are reduced-dimensional data that …


Books on the topic "Allocation de Dirichlet"

1. Shi, Feng. Learn About Latent Dirichlet Allocation in R With Data From the News Articles Dataset (2016). SAGE Publications, Ltd., 2019. http://dx.doi.org/10.4135/9781526495693.

2. Shi, Feng. Learn About Latent Dirichlet Allocation in Python With Data From the News Articles Dataset (2016). SAGE Publications, Ltd., 2019. http://dx.doi.org/10.4135/9781526497727.

3. Augmenting Latent Dirichlet Allocation and Rank Threshold Detection with Ontologies. CreateSpace Independent Publishing Platform, 2014.

4. Jockers, Matthew L. Theme. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252037528.003.0008.
Abstract: This chapter demonstrates how big data and computation can be used to identify and track recurrent themes as the products of external influence. It first considers the limitations of the Google Ngram Viewer as a tool for tracing thematic trends over time before turning to Douglas Biber's Corpus Linguistics: Investigating Language Structure and Use, a primer on various factors complicating word-focused text analysis and the subsequent conclusions one might draw regarding word meanings. It then discusses the results of the author's application of latent Dirichlet allocation (LDA) to a corpus of …


Book chapters on the topic "Allocation de Dirichlet"

1. Li, Hang. "Latent Dirichlet Allocation." In Machine Learning Methods. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3917-6_20.

2. Tang, Yi-Kun, Xian-Ling Mao, and Heyan Huang. "Labeled Phrase Latent Dirichlet Allocation." In Web Information Systems Engineering – WISE 2016. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48740-3_39.

3. Moon, Gordon E., Israt Nisa, Aravind Sukumaran-Rajam, Bortik Bandyopadhyay, Srinivasan Parthasarathy, and P. Sadayappan. "Parallel Latent Dirichlet Allocation on GPUs." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93701-4_20.

4. Calvo, Hiram, Ángel Hernández-Castañeda, and Jorge García-Flores. "Author Identification Using Latent Dirichlet Allocation." In Computational Linguistics and Intelligent Text Processing. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77116-8_22.

5. Wheeler, Jordan M., Shiyu Wang, and Allan S. Cohen. "Latent Dirichlet Allocation of Constructed Responses." In The Routledge International Handbook of Automated Essay Evaluation. Routledge, 2024. http://dx.doi.org/10.4324/9781003397618-31.

6. Maanicshah, Kamal, Manar Amayri, and Nizar Bouguila. "Interactive Generalized Dirichlet Mixture Allocation Model." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-23028-8_4.

7. Hao, Jing, and Hongxi Wei. "Latent Dirichlet Allocation Based Image Retrieval." In Lecture Notes in Computer Science. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68699-8_17.

8. Rus, Vasile, Nobal Niraula, and Rajendra Banjade. "Similarity Measures Based on Latent Dirichlet Allocation." In Computational Linguistics and Intelligent Text Processing. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37247-6_37.

9. Bíró, István, and Jácint Szabó. "Latent Dirichlet Allocation for Automatic Document Categorization." In Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04174-7_28.

10. Lovato, Pietro, Manuele Bicego, Vittorio Murino, and Alessandro Perina. "Robust Initialization for Learning Latent Dirichlet Allocation." In Similarity-Based Pattern Recognition. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24261-3_10.


Conference papers on the topic "Allocation de Dirichlet"

1. Tahsin, Faiza, Hafsa Ennajari, and Nizar Bouguila. "Author Dirichlet Multinomial Allocation Model with Generalized Distribution (ADMAGD)." In 2024 International Symposium on Networks, Computers and Communications (ISNCC). IEEE, 2024. http://dx.doi.org/10.1109/isncc62547.2024.10758998.

2. Koltcov, Sergei, Olessia Koltsova, and Sergey Nikolenko. "Latent dirichlet allocation." In the 2014 ACM conference. ACM Press, 2014. http://dx.doi.org/10.1145/2615569.2615680.

3. Chien, Jen-Tzung, Chao-Hsi Lee, and Zheng-Hua Tan. "Dirichlet mixture allocation." In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2016. http://dx.doi.org/10.1109/mlsp.2016.7738866.

4. Shen, Zhi-Yong, Jun Sun, and Yi-Dong Shen. "Collective Latent Dirichlet Allocation." In 2008 Eighth IEEE International Conference on Data Mining (ICDM). IEEE, 2008. http://dx.doi.org/10.1109/icdm.2008.75.

5. Li, Shuangyin, Guan Huang, Ruiyang Tan, and Rong Pan. "Tag-Weighted Dirichlet Allocation." In 2013 IEEE International Conference on Data Mining (ICDM). IEEE, 2013. http://dx.doi.org/10.1109/icdm.2013.11.

6. Hsin, Wei-Cheng, and Jen-Wei Huang. "Multi-dependent Latent Dirichlet Allocation." In 2017 Conference on Technologies and Applications of Artificial Intelligence (TAAI). IEEE, 2017. http://dx.doi.org/10.1109/taai.2017.51.

7. Krestel, Ralf, Peter Fankhauser, and Wolfgang Nejdl. "Latent dirichlet allocation for tag recommendation." In the third ACM conference. ACM Press, 2009. http://dx.doi.org/10.1145/1639714.1639726.

8. Tan, Yimin, and Zhijian Ou. "Topic-weak-correlated Latent Dirichlet allocation." In 2010 7th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2010. http://dx.doi.org/10.1109/iscslp.2010.5684906.

9. Xiang, Yingzhuo, Dongmei Yang, and Jikun Yan. "The Auto Annotation Latent Dirichlet Allocation." In First International Conference on Information Sciences, Machinery, Materials and Energy. Atlantis Press, 2015. http://dx.doi.org/10.2991/icismme-15.2015.387.

10. Bhutada, Sunil, V. V. S. S. S. Balaram, and Vishnu Vardhan Bulusu. "Latent Dirichlet Allocation based multilevel classification." In 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT). IEEE, 2014. http://dx.doi.org/10.1109/iccicct.2014.6993109.


Institutional reports on the topic "Allocation de Dirichlet"

1. Teh, Yee W., David Newman, and Max Welling. A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation. Defense Technical Information Center, 2007. http://dx.doi.org/10.21236/ada629956.

2. Antón Sarabia, Arturo, Santiago Bazdresch, and Alejandra Lelo-de-Larrea. The Influence of Central Bank's Projections and Economic Narrative on Professional Forecasters' Expectations: Evidence from Mexico. Banco de México, 2023. http://dx.doi.org/10.36095/banxico/di.2023.21.
Abstract: This paper evaluates the influence of the central bank's projections and the narrative signals provided in the summaries of its Inflation Report on the expectations of professional forecasters for inflation and GDP growth in the case of Mexico. We use the latent Dirichlet allocation model, a text-mining technique, to identify narrative signals. We show that both quantitative and qualitative information have an influence on inflation and GDP growth expectations. We also find that narrative signals related to monetary policy, observed inflation, aggregate demand, and inflation and employment projections …

3. Moreno Pérez, Carlos, and Marco Minozzo. "Making Text Talk": The Minutes of the Central Bank of Brazil and the Real Economy. Banco de España, 2022. http://dx.doi.org/10.53479/23646.
Abstract: This paper investigates the relationship between the views expressed in the minutes of the meetings of the Central Bank of Brazil's Monetary Policy Committee (COPOM) and the real economy. It applies various computational-linguistic machine learning algorithms to construct measures of the minutes of the COPOM. First, we create measures of the content of the paragraphs of the minutes using latent Dirichlet allocation (LDA). Second, we build an uncertainty index for the minutes using word embedding and k-means. Then, we combine these indices to create two topic-uncertainty indices. The first one …

4. Alonso-Robisco, Andrés, and José Manuel Carbó. Machine Learning methods in climate finance: a systematic review. Banco de España, 2023. http://dx.doi.org/10.53479/29594.
Abstract: Preventing the materialization of climate change is one of the main challenges of our time. The involvement of the financial sector is a fundamental pillar in this task, which has led to the emergence of a new field in the literature, climate finance. In turn, the use of machine learning (ML) as a tool to analyze climate finance is on the rise, due to the need to use big data to collect new climate-related information and to model complex non-linear relationships. Considering the proliferation of articles in this field, and the potential for the use of ML, we propose a review of the academic literature …
