Academic literature on the topic 'Neural language models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural language models.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Neural language models"

1

Buckman, Jacob, and Graham Neubig. "Neural Lattice Language Models." Transactions of the Association for Computational Linguistics 6 (December 2018): 529–41. http://dx.doi.org/10.1162/tacl_a_00036.

Abstract:
In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models. These models construct a lattice of possible paths through a sentence and marginalize across this lattice to calculate sequence probabilities or optimize parameters. This approach allows us to seamlessly incorporate linguistic intuitions — including polysemy and the existence of multiword lexical items — into our language model. Experiments on multiple language modeling tasks show that English neu
2

Bengio, Yoshua. "Neural net language models." Scholarpedia 3, no. 1 (2008): 3881. http://dx.doi.org/10.4249/scholarpedia.3881.

3

Dong, Li. "Learning natural language interfaces with neural models." AI Matters 7, no. 2 (2021): 14–17. http://dx.doi.org/10.1145/3478369.3478375.

Abstract:
Language is the primary and most natural means of communication for humans. The learning curve of interacting with various services (e.g., digital assistants, and smart appliances) would be greatly reduced if we could talk to machines using human language. However, in most cases computers can only interpret and execute formal languages.
4

De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (2022): 220. http://dx.doi.org/10.3390/info13050220.

Abstract:
We consider neural sign language translation: machine translation from signed to written languages using encoder–decoder neural networks. Translating sign language videos to written language text is especially complex because of the difference in modality between source and target language and, consequently, the required video processing. At the same time, sign languages are low-resource languages, their datasets dwarfed by those available for written languages. Recent advances in written language processing and success stories of transfer learning raise the question of how pretrained written
5

Lau, Mandy. "Artificial intelligence language models and the false fantasy of participatory language policies." Working Papers in Applied Linguistics and Linguistics at York 1 (September 13, 2021): 4–15. http://dx.doi.org/10.25071/2564-2855.5.

Abstract:
Artificial intelligence neural language models learn from a corpus of online language data, often drawn directly from user-generated content through crowdsourcing or the gift economy, bypassing traditional keepers of language policy and planning (such as governments and institutions). Here lies the dream that the languages of the digital world can bend towards individual needs and wants, and not the traditional way around. Through the participatory language work of users, linguistic diversity, accessibility, personalization, and inclusion can be increased. However, the promise of a more partic
6

Chang, Tyler A., and Benjamin K. Bergen. "Word Acquisition in Neural Language Models." Transactions of the Association for Computational Linguistics 10 (2022): 1–16. http://dx.doi.org/10.1162/tacl_a_00444.

Abstract:
We investigate how neural language models acquire individual words during training, extracting learning curves and ages of acquisition for over 600 words on the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2007). Drawing on studies of word acquisition in children, we evaluate multiple predictors for words’ ages of acquisition in LSTMs, BERT, and GPT-2. We find that the effects of concreteness, word length, and lexical class are pointedly different in children and language models, reinforcing the importance of interaction and sensorimotor experience in child lang
7

Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.

8

Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.10016827.

9

Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.

Abstract:
Cross-lingual natural language inference is a fundamental task in cross-lingual natural language understanding, widely addressed by neural models recently. Existing neural model based methods either align sentence embeddings between source and target languages, heavily relying on annotated parallel corpora, or exploit pre-trained cross-lingual language models that are fine-tuned on a single language and hard to transfer knowledge to another language. To resolve these limitations in existing methods, this paper proposes an adversarial training framework to enhance both pre-trained models and cl
10

Angius, Nicola, Pietro Perconti, Alessio Plebe, and Alessandro Acciai. "The Simulative Role of Neural Language Models in Brain Language Processing." Philosophies 9, no. 5 (2024): 137. http://dx.doi.org/10.3390/philosophies9050137.

Abstract:
This paper provides an epistemological and methodological analysis of the recent practice of using neural language models to simulate brain language processing. It is argued that, on the one hand, this practice can be understood as an instance of the traditional simulative method in artificial intelligence, following a mechanistic understanding of the mind; on the other hand, that it modifies the simulative method significantly. Firstly, neural language models are introduced; a study case showing how neural language models are being applied in cognitive neuroscience for simulative purposes is

Dissertations / Theses on the topic "Neural language models"

1

Lei, Tao Ph D. Massachusetts Institute of Technology. "Interpretable neural models for natural language processing." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108990.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 109-119). The success of neural network models often comes at a cost of interpretability. This thesis addresses the problem by providing justifications behind the model's structure and predictions. In the first part of this thesis, we present a class of sequence operations for text processing. The proposed component generalizes from convolution operations and gated aggregations. As justi
2

Kunz, Jenny. "Neural Language Models with Explicit Coreference Decision." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371827.

Abstract:
Coreference is an important and frequent concept in any form of discourse, and Coreference Resolution (CR) a widely used task in Natural Language Understanding (NLU). In this thesis, we implement and explore two recent models that include the concept of coreference in Recurrent Neural Network (RNN)-based Language Models (LM). Entity and reference decisions are modeled explicitly in these models using attention mechanisms. Both models learn to save the previously observed entities in a set and to decide if the next token created by the LM is a mention of one of the entities in the set, an entit
3

Labeau, Matthieu. "Neural language models : Dealing with large vocabularies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.

Abstract:
The work presented in this thesis explores practical methods for easing the training and improving the performance of language models with very large vocabularies. The main limitation of neural language models is their computational cost, which grows linearly with the size of the vocabulary. The easiest way to reduce the computation time of these models is to limit the vocabulary size, which is far from satisfactory for many tasks. Most of the existing methods for training
4

Bayer, Ali Orkan. "Semantic Language models with deep neural Networks." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367784.

Abstract:
Spoken language systems (SLS) communicate with users in natural language through speech. There are two main problems related to processing the spoken input in SLS. The first one is automatic speech recognition (ASR) which recognizes what the user says. The second one is spoken language understanding (SLU) which understands what the user means. We focus on the language model (LM) component of SLS. LMs constrain the search space that is used in the search for the best hypothesis. Therefore, they play a crucial role in the performance of SLS. It has long been discussed that an improvement in the
5

Bayer, Ali Orkan. "Semantic Language models with deep neural Networks." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1578/1/bayer_thesis.pdf.

Abstract:
Spoken language systems (SLS) communicate with users in natural language through speech. There are two main problems related to processing the spoken input in SLS. The first one is automatic speech recognition (ASR) which recognizes what the user says. The second one is spoken language understanding (SLU) which understands what the user means. We focus on the language model (LM) component of SLS. LMs constrain the search space that is used in the search for the best hypothesis. Therefore, they play a crucial role in the performance of SLS. It has long been discussed that an improvement in the
6

Li, Zhongliang. "Slim Embedding Layers for Recurrent Neural Language Models." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1531950458646138.

7

Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.

Abstract:
The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state-of-the-art and shown to consistently reduce the word error rates (WERs) of LVCSR tasks when compared to other language models. In this thesis we propose various advances to RNNLMs. The advances are: improved learning procedures for RNNLMs, enhancing the context, and adaptation of RNNLMs. We learned better parameters by a novel pre-training approach and enhanced the context using prosody and syntactic features.
8

Scarcella, Alessandro. "Recurrent neural network language models in the context of under-resourced South African languages." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29431.

Abstract:
Over the past five years neural network models have been successful across a range of computational linguistic tasks. However, these triumphs have been concentrated in languages with significant resources such as large datasets. Thus, many languages, which are commonly referred to as under-resourced languages, have received little attention and have yet to benefit from recent advances. This investigation aims to evaluate the implications of recent advances in neural network language modelling techniques for under-resourced South African languages. Rudimentary, single layered recurrent neural n
9

Le, Hai Son. "Continuous space models with neural networks in natural language processing." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00776704.

Abstract:
The purpose of language models is in general to capture and to model regularities of language, thereby capturing morphological, syntactical and distributional properties of word sequences in a given language. They play an important role in many successful applications of Natural Language Processing, such as Automatic Speech Recognition, Machine Translation and Information Extraction. The most successful approaches to date are based on n-gram assumption and the adjustment of statistics from the training data by applying smoothing and back-off techniques, notably Kneser-Ney technique, introduced
10

Miao, Yishu. "Deep generative models for natural language processing." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e4e1f1f9-e507-4754-a0ab-0246f1e1e258.

Abstract:
Deep generative models are essential to Natural Language Processing (NLP) due to their outstanding ability to use unlabelled data, to incorporate abundant linguistic features, and to learn interpretable dependencies among data. As the structure becomes deeper and more complex, having an effective and efficient inference method becomes increasingly important. In this thesis, neural variational inference is applied to carry out inference for deep generative models. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here

Books on the topic "Neural language models"

1

Houghton, George, ed. Connectionist Models in Cognitive Psychology. Psychology Press, 2004.

2

Miikkulainen, Risto. Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. MIT Press, 1993.

3

Bavaeva, Ol'ga. Metaphorical parallels of the neutral nomination "man" in modern English. INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1858259.

Abstract:
The monograph is devoted to a multidimensional analysis of metaphor in modern English as a parallel nomination that exists along with a neutral equivalent denoting a person. The problem of determining the essence of metaphorical names and their role in the language has attracted the attention of many foreign and domestic linguists on the material of various languages, but until now the fact of the parallel existence of metaphors and neutral nominations has not been emphasized. The research is in line with modern problems of linguistics related to the relationship of language, thinking an
4

Arbib, Michael. Neural Models of Language Processes. Elsevier Science & Technology Books, 2012.

5

Cairns, Paul, Joseph P. Levy, Dimitrios Bairaktaris, and John A. Bullinaria. Connectionist Models of Memory and Language. Taylor & Francis Group, 2015.

6

Dimitoglou, George, and Ahmad Tafti. Artificial Intelligence: Machine Learning, Convolutional Neural Networks and Large Language Models. de Gruyter GmbH, Walter, 2024.

7

Dimitoglou, George, and Ahmad Tafti. Artificial Intelligence: Machine Learning, Convolutional Neural Networks and Large Language Models. de Gruyter GmbH, Walter, 2024.

8

Dimitoglou, George, and Ahmad Tafti. Artificial Intelligence: Machine Learning, Convolutional Neural Networks and Large Language Models. de Gruyter GmbH, Walter, 2024.

9

Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2014.

10

Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.


Book chapters on the topic "Neural language models"

1

Skansi, Sandro. "Neural Language Models." In Undergraduate Topics in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_9.

2

Delasalles, Edouard, Sylvain Lamprier, and Ludovic Denoyer. "Dynamic Neural Language Models." In Neural Information Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36718-3_24.

3

Kucharavy, Andrei. "From Deep Neural Language Models to LLMs." In Large Language Models in Cybersecurity. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_1.

Abstract:
Large Language Models (LLMs) are scaled-up instances of Deep Neural Language Models—a type of Natural Language Processing (NLP) tools trained with Machine Learning (ML). To best understand how LLMs work, we must dive into what technologies they build on top of and what makes them different. To achieve this, an overview of the history of LLMs development, starting from the 1990s, is provided before covering the counterintuitive purely probabilistic nature of the Deep Neural Language Models, continuous token embedding spaces, recurrent neural networks-based models, what self-attention br
4

Hampton, Peter John, Hui Wang, and Zhiwei Lin. "Knowledge Transfer in Neural Language Models." In Artificial Intelligence XXXIV. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71078-5_12.

5

O’Neill, James, and Danushka Bollegala. "Learning to Evaluate Neural Language Models." In Communications in Computer and Information Science. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6168-9_11.

6

Flor, Michael. "Neural AQG, Part 1: Early Models." In Synthesis Lectures on Human Language Technologies. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-92072-1_6.

7

Goldrick, Matthew. "Neural Network Models of Speech Production." In The Handbook of the Neuropsychology of Language. Wiley-Blackwell, 2012. http://dx.doi.org/10.1002/9781118432501.ch7.

8

G, Santhosh Kumar. "Neural Language Models for (Fake?) News Generation." In Data Science for Fake News. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62696-9_6.

9

Huang, Yue, and Xiaodong Gu. "Temporal Modeling Approach for Video Action Recognition Based on Vision-language Models." In Neural Information Processing. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8067-3_38.

10

Shen, Tongtong, Longbiao Wang, Xie Chen, Kuntharrgyal Khysru, and Jianwu Dang. "Exploiting the Tibetan Radicals in Recurrent Neural Network for Low-Resource Language Models." In Neural Information Processing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_28.


Conference papers on the topic "Neural language models"

1

Butoi, Alexandra, Ryan Cotterell, and Anej Svete. "Computational Expressivity of Neural Language Models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-tutorials.3.

2

Cao, Yang, Yangsong Lan, Feiyan Zhai, and Piji Li. "5W1H Extraction With Large Language Models." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651056.

3

Huang, Shubin, Qiong Wu, and Yiyi Zhou. "Adapting Pre-trained Language Models to Vision-Language Tasks via Dynamic Visual Prompting." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651317.

4

Hollinsworth, Oskar John, Curt Tigges, Atticus Geiger, and Neel Nanda. "Language Models Linearly Represent Sentiment." In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.blackboxnlp-1.5.

5

Yang, Jiahong, Wenting Li, Haibo Cheng, and Ping Wang. "Targeted Password Guessing Using Neural Language Models." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10888919.

6

Peng, Yicui, Hao Chen, Ching-Sheng Lin, et al. "Uncertainty-Aware Explainable Recommendation with Large Language Models." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651104.

7

Liu, Huijun, Bin Ji, Jie Yu, et al. "Offline Textual Adversarial Attacks against Large Language Models." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650921.

8

Zlobin, O. N., V. L. Litvinov, and F. V. Filippov. "Domain-Specific Language Models for Continuous Learning." In 2025 VI International Conference on Neural Networks and Neurotechnologies (NeuroNT). IEEE, 2025. https://doi.org/10.1109/neuront66873.2025.11049984.

9

Park, Hyunwook, Yifan Ding, Ling Zhang, et al. "High-speed Channel Simulator using Neural Language Models." In 2024 IEEE International Symposium on Electromagnetic Compatibility, Signal & Power Integrity (EMC+SIPI). IEEE, 2024. http://dx.doi.org/10.1109/emcsipi49824.2024.10705639.

10

KP, Gouri, Visesh AV, Anoop V, Ashlin Parakkal, and Joman P. Joji. "Advancing Sign Language Prediction with Neural Network Models." In 2024 International Conference on Advancement in Renewable Energy and Intelligent Systems (AREIS). IEEE, 2024. https://doi.org/10.1109/areis62559.2024.10893631.


Reports on the topic "Neural language models"

1

Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [n.p.], 2018. http://dx.doi.org/10.31812/123456789/2648.

Abstract:
The article substantiates the necessity to develop training methods of computer simulation of neural networks in the spreadsheet environment. The systematic review of their application to simulating artificial neural networks is performed. The authors distinguish basic approaches to solving the problem of network computer simulation training in the spreadsheet environment, joint application of spreadsheets and tools of neural network simulation, application of third-party add-ins to spreadsheets, development of macros using the embedded languages of spreadsheets; use of standard spreadsheet ad
2

Pasupuleti, Murali Krishna. Decentralized Creativity: AI-Infused Blockchain for Secure and Transparent Digital Innovation. National Education Services, 2025. https://doi.org/10.62311/nesx/rrvi125.

Abstract:
The convergence of artificial intelligence (AI) and blockchain technology is transforming the creative economy by enabling secure, transparent, and decentralized innovation in digital content creation, intellectual property management, and monetization. Traditional creative industries are often constrained by centralized platforms, opaque copyright enforcement, and unfair revenue distribution, which limit the autonomy and financial benefits of creators. By leveraging blockchain’s immutable ledger, smart contracts, and non-fungible tokens (NFTs), digital assets can be authenticated, to
3

Lee, Benjamin Yen Kit. Automated neuron explanation for code-trained language models. Iowa State University, 2024. http://dx.doi.org/10.31274/cc-20240624-267.

4

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 10. Neutral Data Manipulation Language (NDML) Precompiler Control Module Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250451.

5

Althoff, J. L., M. L. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 5. Neutral Data Definition Language (NDDL) Development Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada252450.

6

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 13. Neutral Data Manipulation Language (NDML) Precompiler Parse NDML Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250453.

7

Althoff, J., and M. Apicella. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 9. Neutral Data Manipulation Language (NDML) Precompiler Development Specification. Section 2. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada252526.

8

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 12. Neutral Data Manipulation Language (NDML) Precompiler Parse Procedure Division Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250452.

9

Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 23. Neutral Data Manipulation Language (NDML) Precompiler Build Source Code Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250460.

10

Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 24. Neutral Data Manipulation Language (NDML) Precompiler Generator Support Routines Product Specification. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada250461.
