Academic literature on the topic "Neural language models"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Browse the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Neural language models".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Neural language models"

1

Buckman, Jacob, and Graham Neubig. "Neural Lattice Language Models." Transactions of the Association for Computational Linguistics 6 (December 2018): 529–41. http://dx.doi.org/10.1162/tacl_a_00036.

Abstract: In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models. These models construct a lattice of possible paths through a sentence and marginalize across this lattice to calculate sequence probabilities or optimize parameters. This approach allows us to seamlessly incorporate linguistic intuitions — including polysemy and the existence of multiword lexical items — into our language model. Experiments on multiple language modeling tasks show that English neu…
2

Bengio, Yoshua. "Neural net language models." Scholarpedia 3, no. 1 (2008): 3881. http://dx.doi.org/10.4249/scholarpedia.3881.

3

Dong, Li. "Learning natural language interfaces with neural models." AI Matters 7, no. 2 (2021): 14–17. http://dx.doi.org/10.1145/3478369.3478375.

Abstract: Language is the primary and most natural means of communication for humans. The learning curve of interacting with various services (e.g., digital assistants, and smart appliances) would be greatly reduced if we could talk to machines using human language. However, in most cases computers can only interpret and execute formal languages.
4

De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (2022): 220. http://dx.doi.org/10.3390/info13050220.

Abstract: We consider neural sign language translation: machine translation from signed to written languages using encoder–decoder neural networks. Translating sign language videos to written language text is especially complex because of the difference in modality between source and target language and, consequently, the required video processing. At the same time, sign languages are low-resource languages, their datasets dwarfed by those available for written languages. Recent advances in written language processing and success stories of transfer learning raise the question of how pretrained written…
5

Lau, Mandy. "Artificial intelligence language models and the false fantasy of participatory language policies." Working Papers in Applied Linguistics and Linguistics at York 1 (September 13, 2021): 4–15. http://dx.doi.org/10.25071/2564-2855.5.

Abstract: Artificial intelligence neural language models learn from a corpus of online language data, often drawn directly from user-generated content through crowdsourcing or the gift economy, bypassing traditional keepers of language policy and planning (such as governments and institutions). Here lies the dream that the languages of the digital world can bend towards individual needs and wants, and not the traditional way around. Through the participatory language work of users, linguistic diversity, accessibility, personalization, and inclusion can be increased. However, the promise of a more partic…
6

Chang, Tyler A., and Benjamin K. Bergen. "Word Acquisition in Neural Language Models." Transactions of the Association for Computational Linguistics 10 (2022): 1–16. http://dx.doi.org/10.1162/tacl_a_00444.

Abstract: We investigate how neural language models acquire individual words during training, extracting learning curves and ages of acquisition for over 600 words on the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2007). Drawing on studies of word acquisition in children, we evaluate multiple predictors for words’ ages of acquisition in LSTMs, BERT, and GPT-2. We find that the effects of concreteness, word length, and lexical class are pointedly different in children and language models, reinforcing the importance of interaction and sensorimotor experience in child lang…
7

Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.

8

Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.10016827.

9

Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.

Abstract: Cross-lingual natural language inference is a fundamental task in cross-lingual natural language understanding, widely addressed by neural models recently. Existing neural model based methods either align sentence embeddings between source and target languages, heavily relying on annotated parallel corpora, or exploit pre-trained cross-lingual language models that are fine-tuned on a single language and hard to transfer knowledge to another language. To resolve these limitations in existing methods, this paper proposes an adversarial training framework to enhance both pre-trained models and cl…
10

Angius, Nicola, Pietro Perconti, Alessio Plebe, and Alessandro Acciai. "The Simulative Role of Neural Language Models in Brain Language Processing." Philosophies 9, no. 5 (2024): 137. http://dx.doi.org/10.3390/philosophies9050137.

Abstract: This paper provides an epistemological and methodological analysis of the recent practice of using neural language models to simulate brain language processing. It is argued that, on the one hand, this practice can be understood as an instance of the traditional simulative method in artificial intelligence, following a mechanistic understanding of the mind; on the other hand, that it modifies the simulative method significantly. Firstly, neural language models are introduced; a study case showing how neural language models are being applied in cognitive neuroscience for simulative purposes is…

Theses on the topic "Neural language models"

1

Lei, Tao. "Interpretable neural models for natural language processing." PhD thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108990.

Abstract: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 109-119). The success of neural network models often comes at a cost of interpretability. This thesis addresses the problem by providing justifications behind the model's structure and predictions. In the first part of this thesis, we present a class of sequence operations for text processing. The proposed component generalizes from convolution operations and gated aggregations. As justi…
2

Kunz, Jenny. "Neural Language Models with Explicit Coreference Decision." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371827.

Abstract: Coreference is an important and frequent concept in any form of discourse, and Coreference Resolution (CR) a widely used task in Natural Language Understanding (NLU). In this thesis, we implement and explore two recent models that include the concept of coreference in Recurrent Neural Network (RNN)-based Language Models (LM). Entity and reference decisions are modeled explicitly in these models using attention mechanisms. Both models learn to save the previously observed entities in a set and to decide if the next token created by the LM is a mention of one of the entities in the set, an entit…
3

Labeau, Matthieu. "Neural language models: Dealing with large vocabularies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.

Abstract: The work presented in this thesis explores practical methods for easing the training and improving the performance of language models with very large vocabularies. The main limitation of neural language models is their computational cost, which depends on the size of the vocabulary and grows linearly with it. The simplest way to reduce the computation time of these models remains to limit the vocabulary size, which is far from satisfactory for many tasks. Most of the existing methods for training…
4

Bayer, Ali Orkan. "Semantic Language Models with Deep Neural Networks." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367784.

Abstract: Spoken language systems (SLS) communicate with users in natural language through speech. There are two main problems related to processing the spoken input in SLS. The first one is automatic speech recognition (ASR) which recognizes what the user says. The second one is spoken language understanding (SLU) which understands what the user means. We focus on the language model (LM) component of SLS. LMs constrain the search space that is used in the search for the best hypothesis. Therefore, they play a crucial role in the performance of SLS. It has long been discussed that an improvement in the…
5

Bayer, Ali Orkan. "Semantic Language Models with Deep Neural Networks." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1578/1/bayer_thesis.pdf.

Abstract: Spoken language systems (SLS) communicate with users in natural language through speech. There are two main problems related to processing the spoken input in SLS. The first one is automatic speech recognition (ASR) which recognizes what the user says. The second one is spoken language understanding (SLU) which understands what the user means. We focus on the language model (LM) component of SLS. LMs constrain the search space that is used in the search for the best hypothesis. Therefore, they play a crucial role in the performance of SLS. It has long been discussed that an improvement in the…
6

Li, Zhongliang. "Slim Embedding Layers for Recurrent Neural Language Models." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1531950458646138.

7

Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.

Abstract: The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state-of-the-art and shown to consistently reduce the word error rates (WERs) of LVCSR tasks when compared to other language models. In this thesis we propose various advances to RNNLMs. The advances are: improved learning procedures for RNNLMs, enhancing the context, and adaptation of RNNLMs. We learned better parameters by a novel pre-training approach and enhanced the context using prosody and syntactic features.
8

Scarcella, Alessandro. "Recurrent neural network language models in the context of under-resourced South African languages." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29431.

Abstract: Over the past five years neural network models have been successful across a range of computational linguistic tasks. However, these triumphs have been concentrated in languages with significant resources such as large datasets. Thus, many languages, which are commonly referred to as under-resourced languages, have received little attention and have yet to benefit from recent advances. This investigation aims to evaluate the implications of recent advances in neural network language modelling techniques for under-resourced South African languages. Rudimentary, single layered recurrent neural n…
9

Le, Hai Son. "Continuous space models with neural networks in natural language processing." PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00776704.

Abstract: The purpose of language models is in general to capture and to model regularities of language, thereby capturing morphological, syntactical and distributional properties of word sequences in a given language. They play an important role in many successful applications of Natural Language Processing, such as Automatic Speech Recognition, Machine Translation and Information Extraction. The most successful approaches to date are based on n-gram assumption and the adjustment of statistics from the training data by applying smoothing and back-off techniques, notably Kneser-Ney technique, introduced…
10

Miao, Yishu. "Deep generative models for natural language processing." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e4e1f1f9-e507-4754-a0ab-0246f1e1e258.

Abstract: Deep generative models are essential to Natural Language Processing (NLP) due to their outstanding ability to use unlabelled data, to incorporate abundant linguistic features, and to learn interpretable dependencies among data. As the structure becomes deeper and more complex, having an effective and efficient inference method becomes increasingly important. In this thesis, neural variational inference is applied to carry out inference for deep generative models. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here…

Books on the topic "Neural language models"

1

Houghton, George, ed. Connectionist Models in Cognitive Psychology. Psychology Press, 2004.

2

Miikkulainen, Risto. Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. MIT Press, 1993.

3

Bavaeva, Ol'ga. Metaphorical parallels of the neutral nomination "man" in modern English. INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1858259.

Abstract: The monograph is devoted to a multidimensional analysis of metaphor in modern English as a parallel nomination that exists along with a neutral equivalent denoting a person. The problem of determining the essence of metaphorical names and their role in the language has attracted the attention of many foreign and domestic linguists on the material of various languages, but until now the fact of the parallel existence of metaphors and neutral nominations has not been emphasized. The research is in line with modern problems of linguistics related to the relationship of language, thinking an…
4

Arbib, Michael. Neural Models of Language Processes. Elsevier Science & Technology Books, 2012.

5

Cairns, Paul, Joseph P. Levy, Dimitrios Bairaktaris, and John A. Bullinaria. Connectionist Models of Memory and Language. Taylor & Francis Group, 2015.

6

Dimitoglou, George, and Ahmad Tafti. Artificial Intelligence: Machine Learning, Convolutional Neural Networks and Large Language Models. de Gruyter GmbH, Walter, 2024.

7

Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2014.

8

Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.


Book chapters on the topic "Neural language models"

1

Skansi, Sandro. "Neural Language Models." In Undergraduate Topics in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_9.

2

Delasalles, Edouard, Sylvain Lamprier, and Ludovic Denoyer. "Dynamic Neural Language Models." In Neural Information Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36718-3_24.

3

Kucharavy, Andrei. "From Deep Neural Language Models to LLMs." In Large Language Models in Cybersecurity. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_1.

Abstract: Large Language Models (LLMs) are scaled-up instances of Deep Neural Language Models—a type of Natural Language Processing (NLP) tools trained with Machine Learning (ML). To best understand how LLMs work, we must dive into what technologies they build on top of and what makes them different. To achieve this, an overview of the history of LLMs development, starting from the 1990s, is provided before covering the counterintuitive purely probabilistic nature of the Deep Neural Language Models, continuous token embedding spaces, recurrent neural networks-based models, what self-attention br…
4

Hampton, Peter John, Hui Wang, and Zhiwei Lin. "Knowledge Transfer in Neural Language Models." In Artificial Intelligence XXXIV. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71078-5_12.

5

O’Neill, James, and Danushka Bollegala. "Learning to Evaluate Neural Language Models." In Communications in Computer and Information Science. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6168-9_11.

6

Flor, Michael. "Neural AQG, Part 1: Early Models." In Synthesis Lectures on Human Language Technologies. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-92072-1_6.

7

Goldrick, Matthew. "Neural Network Models of Speech Production." In The Handbook of the Neuropsychology of Language. Wiley-Blackwell, 2012. http://dx.doi.org/10.1002/9781118432501.ch7.

8

G, Santhosh Kumar. "Neural Language Models for (Fake?) News Generation." In Data Science for Fake News. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62696-9_6.

9

Huang, Yue, and Xiaodong Gu. "Temporal Modeling Approach for Video Action Recognition Based on Vision-language Models." In Neural Information Processing. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8067-3_38.

10

Shen, Tongtong, Longbiao Wang, Xie Chen, Kuntharrgyal Khysru, and Jianwu Dang. "Exploiting the Tibetan Radicals in Recurrent Neural Network for Low-Resource Language Models." In Neural Information Processing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_28.


Conference proceedings on the topic "Neural language models"

1

Butoi, Alexandra, Ryan Cotterell, and Anej Svete. "Computational Expressivity of Neural Language Models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-tutorials.3.

2

Cao, Yang, Yangsong Lan, Feiyan Zhai, and Piji Li. "5W1H Extraction With Large Language Models." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651056.

3

Huang, Shubin, Qiong Wu, and Yiyi Zhou. "Adapting Pre-trained Language Models to Vision-Language Tasks via Dynamic Visual Prompting." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651317.

4

Hollinsworth, Oskar John, Curt Tigges, Atticus Geiger, and Neel Nanda. "Language Models Linearly Represent Sentiment." In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.blackboxnlp-1.5.

5

Yang, Jiahong, Wenting Li, Haibo Cheng, and Ping Wang. "Targeted Password Guessing Using Neural Language Models." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10888919.

6

Peng, Yicui, Hao Chen, Ching-Sheng Lin, et al. "Uncertainty-Aware Explainable Recommendation with Large Language Models." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651104.

7

Liu, Huijun, Bin Ji, Jie Yu, et al. "Offline Textual Adversarial Attacks against Large Language Models." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650921.

8

Zlobin, O. N., V. L. Litvinov, and F. V. Filippov. "Domain-Specific Language Models for Continuous Learning." In 2025 VI International Conference on Neural Networks and Neurotechnologies (NeuroNT). IEEE, 2025. https://doi.org/10.1109/neuront66873.2025.11049984.

9

Park, Hyunwook, Yifan Ding, Ling Zhang, et al. "High-speed Channel Simulator using Neural Language Models." In 2024 IEEE International Symposium on Electromagnetic Compatibility, Signal & Power Integrity (EMC+SIPI). IEEE, 2024. http://dx.doi.org/10.1109/emcsipi49824.2024.10705639.

10

KP, Gouri, Visesh AV, Anoop V, Ashlin Parakkal, and Joman P. Joji. "Advancing Sign Language Prediction with Neural Network Models." In 2024 International Conference on Advancement in Renewable Energy and Intelligent Systems (AREIS). IEEE, 2024. https://doi.org/10.1109/areis62559.2024.10893631.


Reports on the topic "Neural language models"

1

Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [n.p.], 2018. http://dx.doi.org/10.31812/123456789/2648.

Abstract: The article substantiates the necessity to develop training methods of computer simulation of neural networks in the spreadsheet environment. The systematic review of their application to simulating artificial neural networks is performed. The authors distinguish basic approaches to solving the problem of network computer simulation training in the spreadsheet environment, joint application of spreadsheets and tools of neural network simulation, application of third-party add-ins to spreadsheets, development of macros using the embedded languages of spreadsheets; use of standard spreadsheet ad…
2

Pasupuleti, Murali Krishna. Decentralized Creativity: AI-Infused Blockchain for Secure and Transparent Digital Innovation. National Education Services, 2025. https://doi.org/10.62311/nesx/rrvi125.

Abstract: The convergence of artificial intelligence (AI) and blockchain technology is transforming the creative economy by enabling secure, transparent, and decentralized innovation in digital content creation, intellectual property management, and monetization. Traditional creative industries are often constrained by centralized platforms, opaque copyright enforcement, and unfair revenue distribution, which limit the autonomy and financial benefits of creators. By leveraging blockchain’s immutable ledger, smart contracts, and non-fungible tokens (NFTs), digital assets can be authenticated, to…
3

Lee, Benjamin Yen Kit. Automated neuron explanation for code-trained language models. Iowa State University, 2024. http://dx.doi.org/10.31274/cc-20240624-267.

4

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 10. Neutral Data Manipulation Language (NDML) Precompiler Control Module Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250451.

5

Althoff, J. L., M. L. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 5. Neutral Data Definition Language (NDDL) Development Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada252450.

6

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 13. Neutral Data Manipulation Language (NDML) Precompiler Parse NDML Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250453.

7

Althoff, J., and M. Apicella. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 9. Neutral Data Manipulation Language (NDML) Precompiler Development Specification. Section 2. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada252526.

8

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 12. Neutral Data Manipulation Language (NDML) Precompiler Parse Procedure Division Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250452.

9

Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 23. Neutral Data Manipulation Language (NDML) Precompiler Build Source Code Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250460.

10

Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 24. Neutral Data Manipulation Language (NDML) Precompiler Generator Support Routines Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250461.
