Academic literature on the topic 'Daga language'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Daga language.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Daga language"

1

Gascoigne, David. "Boomboom and Hullabaloo: Rhythm in the Zurich Dada Revolution." Paragraph 33, no. 2 (July 2010): 197–214. http://dx.doi.org/10.3366/para.2010.0004.

Abstract:
The drumbeats which punctuated Zurich Dada performances signal and enact the dismantling of the complexities of a culture the participants deemed wholly discredited. While the Futurists looked to technology for rhythmic renewal, Dadaists sought a deeper, more indefinable rhythm to nourish a far-reaching renaissance of human values. Study of ‘nonsensical’ texts by Huelsenbeck, Ball and Tzara reveals some traditional metrical elements. However, in Dadaist performance pieces in an imaginary hybrid language or in a ‘simultaneous poem’ in three languages at once, such elements are freed from traditional associations and become multiple, complex and ambiguous in performance and reception.
2

Robertson, Eric. "Writing in Tongues: Multilingual Poetry and Self-Translation in France from Dada to the Present." Nottingham French Studies 56, no. 2 (July 2017): 119–38. http://dx.doi.org/10.3366/nfs.2017.0175.

Abstract:
This essay considers the case of some modern and contemporary bilingual and multilingual poets who have used translation creatively in the context of French literature. Far from attempting to erase the traces of the source language to make it more acceptable to a readership in the target language, these poets – from Hugo Ball, Jean Arp and Henri Michaux to Ryoko Sekiguchi, Caroline Bergvall and Anne Tardos – accept and even welcome the ‘radical artifice’ of their poetry and embrace the inherent foreignness of the word, even in the mother tongue. In myriad ways, their work explores language as a place of difference rather than equivalence, and as a site of slippage in which words are forever susceptible to bordering on other words and other languages, real or invented.
3

Arikh Guliyev, Teyyub, and Jala Elman Ganiyeva. "Sources of somatic expressions in modern Azerbaijani language." SCIENTIFIC WORK 62, no. 01 (February 8, 2021): 43–45. http://dx.doi.org/10.36719/2663-4619/62/43-45.

Abstract:
Somatic expressions, like phraseological compounds generally, are richly represented in the lexical structure of the Azerbaijani language, and their history is ancient. This is due both to the ethnic outlook of the people and to the language's ability to fully incorporate creative thinking. Much research has been done on the formation of phraseological compounds in Azerbaijani linguistics, and some notes have been made about their sources. During our research, we came across the opinions of linguists based on their observations of monuments such as the Orkhon-Yenisey inscriptions, Kitabi-Dada Gorgud and Divani-lughat-it-turk. According to their research, the number of phraseological combinations in ancient times was smaller than in the modern language. Some phraseological combinations, especially those formed on the basis of key words belonging to the Turkic languages (i.e., the noun in the formation of the phraseological unit), have a more ancient history. Based on written monuments, we can say that the vast majority of the historical phraseological fund consists of phraseological combinations and somatic expressions formed from the names of body parts. The language of folklore materials, like monuments such as Orkhon-Yenisey, Kitabi-Dada Gorgud and Divani-lughat-it-turk, is a very rich source for the study of phraseological combinations, including somatic expressions. This includes phraseological material in the language of bayats, riddles, proverbs and parables, and tales and epics that preserve ancient traces.
4

White, J. J., and Richard Sheppard. "Modernism: Dada: Postmodernism." Modern Language Review 97, no. 4 (October 2002): 1028. http://dx.doi.org/10.2307/3738713.

5

Fisher, Dominique, and Rudolph E. Kuenzli. "Dada and Surrealist Film." MLN 103, no. 4 (September 1988): 943. http://dx.doi.org/10.2307/2905034.

6

Xu, Duoduo. "Noun-epithets of Dongba and Daba oral traditions." Linguistics of the Tibeto-Burman Area 44, no. 1 (May 11, 2021): 133–39. http://dx.doi.org/10.1075/ltba.20011.xu.

Abstract:
Dongba and Daba chants represent two of the few oral traditions still surviving in the world. In both traditions, the main category of formulaic expressions consists of traditional noun-epithets describing spirits. Dongba and Daba spirits can be classified into five categories, of which the noun-epithets used to describe them share similar features. Another significant percentage of noun-epithets portray figures of animals. Dongba and Daba chants are both chanted in odd-numbered metric patterns in which noun-epithets are adapted to the metric context. Besides the core expression (often a tetra-syllabic compound), several monosyllabic words not affecting the core meaning may be inserted as optional morphemes to modify the number of syllables in the noun-epithet. This study provides a systematic philological analysis of the vast repertoire of Daba and Dongba noun-epithets. Comparative mythology and comparative linguistics combine to present a comprehensive description of the stylistic features of Daba and Dongba noun-epithets.
7

van den Berg, Hubert. "DADA-Zürich, Anarchismus und Boheme." Neophilologus 71, no. 4 (October 1987): 575–85. http://dx.doi.org/10.1007/bf00636811.

8

Defrancq, Bart. "Establishing cross-linguistic semantic relatedness through monolingual corpora." International Journal of Corpus Linguistics 13, no. 4 (December 8, 2008): 465–90. http://dx.doi.org/10.1075/ijcl.13.4.04def.

Abstract:
Each instance of language comparison requires observations on semantic equivalence. Meaning is by far the most popular tertium comparationis in contrastive and typological research. However, the question of how semantic equivalence is to be determined remains extremely difficult to solve. This paper presents an approach to detect semantic relatedness between a limited range of lexical items from different languages on the basis of monolingual data. Applying distributional similarity (Dagan et al. 1999) cross-linguistically, it identifies semantically related verbs governing embedded interrogatives by looking at the frequency of the question words (i.e. wh-items) that are used in the embedded interrogatives in monolingual corpora. Convincing results are obtained for six different language pairs: English-French, English-Dutch, English-Spanish, French-Dutch, French-Spanish and Dutch-Spanish.
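The cross-lingual application of distributional similarity described in this abstract can be sketched in a few lines: each verb is represented by the frequency distribution of the question words it embeds, and verbs from two monolingual corpora are matched by a similarity measure over those distributions. A minimal sketch in Python, where the counts, verbs and cross-language alignment of wh-items are invented for illustration (the paper itself uses corpus-derived frequencies and the similarity measures of Dagan et al. 1999):

```python
from math import sqrt

# Toy frequency counts of question words in embedded interrogatives,
# per verb, from two monolingual corpora (invented numbers).
# The wh-items are aligned across languages by position:
# what/que, who/qui, how/comment, why/pourquoi.
english = {"wonder": [40, 10, 30, 20],
           "know":   [55, 25, 15, 5]}
french  = {"se demander": [38, 12, 28, 22],
           "savoir":      [50, 30, 12, 8]}

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def best_match(verb, source, target):
    """Target-language verb whose wh-word distribution is closest."""
    return max(target, key=lambda t: cosine(source[verb], target[t]))

print(best_match("wonder", english, french))  # -> se demander
```

With these toy counts, 'wonder' pairs with 'se demander' and 'know' with 'savoir', illustrating how semantic relatedness can emerge from monolingual distributions alone.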
9

Abella, Rubén. "“Language is in its January”: Dada and William Carlos Williams’s Early Prose." William Carlos Williams Review 34, no. 2 (2017): 110. http://dx.doi.org/10.5325/willcarlwillrevi.34.2.0110.

10

Abella, Rubén. ""Language is in its January": Dada and William Carlos Williams's Early Prose." William Carlos Williams Review 34, no. 2 (2017): 110–28. http://dx.doi.org/10.1353/wcw.2017.0008.


Dissertations / Theses on the topic "Daga language"

1

Newton, Alan R. "A formal data fusion language." Thesis, Cranfield University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.481233.

2

Jarman, Jay. "Combining Natural Language Processing and Statistical Text Mining: A Study of Specialized Versus Common Languages." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3166.

Abstract:
This dissertation focuses on developing and evaluating hybrid approaches for analyzing free-form text in the medical domain. This research draws on natural language processing (NLP) techniques that are used to parse and extract concepts based on a controlled vocabulary. Once important concepts are extracted, additional machine learning algorithms, such as association rule mining and decision tree induction, are used to discover classification rules for specific targets. This multi-stage pipeline approach is contrasted with traditional statistical text mining (STM) methods based on term counts and term-by-document frequencies. The aim is to create effective text analytic processes by adapting and combining individual methods. The methods are evaluated on an extensive set of real clinical notes annotated by experts to provide benchmark results. There are two main research questions in this dissertation. First, can information (specialized language) be extracted from clinical progress notes that will represent the notes without loss of predictive information? Second, can classifiers be built for clinical progress notes that are represented by specialized language? Three experiments were conducted to answer these questions by investigating specific challenges in extracting information from unstructured clinical notes and classifying the documents that are so important in the medical domain. The first experiment addresses the first research question by focusing on whether relevant patterns within clinical notes reside more in the highly technical medically-relevant terminology or in the passages expressed by common language. The results from this experiment informed the subsequent experiments. It also shows that predictive patterns are preserved by preprocessing text documents with a grammatical NLP system that separates specialized language from common language, and that this is an acceptable method of data reduction for the purposes of STM.
Experiments two and three address the second research question. Experiment two focuses on applying rule-mining techniques to the output of the information extraction effort from experiment one, with the ultimate goal of creating rule-based classifiers. This experiment makes several contributions. First, it uses a novel approach to create classification rules from specialized language and to build a classifier: the data are split by class and rules are then generated. Second, several toolkits were assembled to create the automated process by which the rules were created. Third, this automated process created interpretable rules, and finally, the resulting model provided good accuracy. The resulting performance was slightly lower than that of the classifier from experiment one, but with the benefit of interpretable rules. Experiment three focuses on using decision tree induction (DTI) for a rule-discovery approach to classification, which also addresses the second research question. DTI is another rule-centric method for creating a classifier. The contributions of this experiment are that DTI can be used to create an accurate and interpretable classifier using specialized language; the resulting rule sets are simple and easily interpretable, and they are created by a highly automated process.
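The rule-discovery step this abstract describes can be illustrated with a deliberately small sketch: each note is reduced to the specialized terms extracted from it, and a one-term classification rule is selected by training-set accuracy. The terms and labels below are invented; the dissertation itself uses full association-rule mining and decision-tree induction rather than this single-rule toy:

```python
# Toy corpus: each clinical note reduced to its extracted specialized
# terms, with a binary outcome label (all data invented).
notes = [
    ({"dyspnea", "edema"},        "positive"),
    ({"dyspnea", "hypertension"}, "positive"),
    ({"rhinitis"},                "negative"),
    ({"hypertension"},            "negative"),
]

def rule_accuracy(term, data):
    """Accuracy of the rule: term present -> 'positive', else 'negative'."""
    correct = sum(
        ("positive" if term in terms else "negative") == label
        for terms, label in data
    )
    return correct / len(data)

# Pick the single term whose presence best predicts the outcome.
vocabulary = set().union(*(terms for terms, _ in notes))
best_term = max(sorted(vocabulary), key=lambda t: rule_accuracy(t, notes))
print(best_term, rule_accuracy(best_term, notes))
```

Here the rule "dyspnea present -> positive" classifies all four toy notes correctly, which is the kind of simple, interpretable rule the abstract argues for.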
3

Huang, Lizhong. "Express query language, and Templates and rules: two languages for advanced software system integrations." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1181162850.

4

Hellmann, Sebastian. "Integrating Natural Language Processing (NLP) and Language Resources Using Linked Data." Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-157932.

Abstract:
This thesis is a compendium of scientific works and engineering specifications that have been contributed to a large community of stakeholders to be copied, adapted, mixed, built upon and exploited in any way possible to achieve a common goal: integrating Natural Language Processing (NLP) and language resources using Linked Data. The explosion of information technology in the last two decades has led to a substantial growth in the quantity, diversity and complexity of web-accessible linguistic data. These resources become even more useful when linked with each other, and the last few years have seen the emergence of numerous approaches in various disciplines concerned with linguistic resources and NLP tools. It is the challenge of our time to store, interlink and exploit this wealth of data, accumulated in more than half a century of computational linguistics, of empirical, corpus-based study of language, and of computational lexicography in all its heterogeneity. The vision of the Giant Global Graph (GGG) was conceived by Tim Berners-Lee, aiming to connect all data on the Web and allow the discovery of new relations between these openly accessible data. This vision has been pursued by the Linked Open Data (LOD) community, whose cloud of published datasets comprises 295 data repositories and more than 30 billion RDF triples (as of September 2011). RDF is based on globally unique and accessible URIs, and it was specifically designed to establish links between such URIs (or resources). This is captured in the Linked Data paradigm, which postulates four rules: (1) referred entities should be designated by URIs, (2) these URIs should be resolvable over HTTP, (3) data should be represented by means of standards such as RDF, and (4) a resource should include links to other resources.
Although it is difficult to precisely identify the reasons for the success of the LOD effort, advocates generally argue that open licenses as well as open access are key enablers for the growth of such a network, as they provide a strong incentive for collaboration and contribution by third parties. In his keynote at BNCOD 2011, Chris Bizer argued that with RDF the overall data integration effort can be “split between data publishers, third parties, and the data consumer”, a claim that can be substantiated by observing the evolution of many large data sets constituting the LOD cloud. As written in the acknowledgement section, parts of this thesis have received extensive feedback from other scientists, practitioners and industry in many different ways. The main contributions of this thesis are summarized here: Part I – Introduction and Background. During his keynote at the Language Resource and Evaluation Conference in 2012, Sören Auer stressed the decentralized, collaborative, interlinked and interoperable nature of the Web of Data. The keynote provides strong evidence that Semantic Web technologies such as Linked Data are on their way to becoming mainstream for the representation of language resources. The jointly written companion publication for the keynote was later extended as a book chapter in The People’s Web Meets NLP and serves as the basis for “Introduction” and “Background”, outlining some stages of the Linked Data publication and refinement chain. Both chapters stress the importance of open licenses and open access as enablers of collaboration and the ability to interlink data on the Web as a key feature of RDF, and provide a discussion of scalability issues and decentralization. Furthermore, we elaborate on how conceptual interoperability can be achieved by (1) re-using vocabularies, (2) agile ontology development, (3) meetings to refine and adapt ontologies and (4) tool support to enrich ontologies and match schemata.
Part II - Language Resources as Linked Data. “Linked Data in Linguistics” and “NLP & DBpedia, an Upward Knowledge Acquisition Spiral” summarize the results of the Linked Data in Linguistics (LDL) Workshop in 2012 and the NLP & DBpedia Workshop in 2013 and give a preview of the MLOD special issue. In total, five proceedings – three published at CEUR (OKCon 2011, WoLE 2012, NLP & DBpedia 2013), one Springer book (Linked Data in Linguistics, LDL 2012) and one journal special issue (Multilingual Linked Open Data, MLOD to appear) – have been (co-)edited to create incentives for scientists to convert and publish Linked Data and thus to contribute open and/or linguistic data to the LOD cloud. Based on the disseminated call for papers, 152 authors contributed one or more accepted submissions to our venues and 120 reviewers were involved in peer-reviewing. “DBpedia as a Multilingual Language Resource” and “Leveraging the Crowdsourcing of Lexical Resources for Bootstrapping a Linguistic Linked Data Cloud” contain this thesis’ contribution to the DBpedia Project in order to further increase the size and inter-linkage of the LOD Cloud with lexical-semantic resources. Our contribution comprises extracted data from Wiktionary (an online, collaborative dictionary similar to Wikipedia) in more than four languages (now six) as well as language-specific versions of DBpedia, including a quality assessment of inter-language links between Wikipedia editions and internationalized content negotiation rules for Linked Data. In particular, this work created the foundation for a DBpedia Internationalisation Committee with members from over 15 different languages with the common goal of pushing DBpedia as a free and open multilingual language resource. Part III - The NLP Interchange Format (NIF). “NIF 2.0 Core Specification”, “NIF 2.0 Resources and Architecture” and “Evaluation and Related Work” constitute one of the main contributions of this thesis.
The NLP Interchange Format (NIF) is an RDF/OWL-based format that aims to achieve interoperability between Natural Language Processing (NLP) tools, language resources and annotations. The core specification describes which URI schemes and RDF vocabularies must be used for (parts of) natural language texts and annotations in order to create an RDF/OWL-based interoperability layer, with NIF built upon Unicode code points in Normal Form C. Classes and properties of the NIF Core Ontology are described to formally define the relations between text, substrings and their URI schemes, and the evaluation of NIF follows. In a questionnaire, we asked questions of 13 developers using NIF. UIMA, GATE and Stanbol are extensible NLP frameworks, and NIF was not yet able to provide off-the-shelf NLP domain ontologies for all possible domains, but only for the plugins used in this study. After inspecting the software, the developers agreed, however, that NIF is adequate to provide a generic RDF output based on NIF using literal objects for annotations. All developers were able to map the internal data structure to NIF URIs to serialize RDF output (adequacy). The development effort in hours (ranging between 3 and 40) as well as the number of code lines (ranging between 110 and 445) suggest that the implementation of NIF wrappers is easy and fast for an average developer. Furthermore, the evaluation contains a comparison to other formats and an evaluation of the available URI schemes for web annotation. In order to collect input from the wide group of stakeholders, a total of 16 presentations were given with extensive discussions and feedback, which led to a constant improvement of NIF from 2010 until 2013.
After the release of NIF (version 1.0) in November 2011, a total of 32 vocabulary employments and implementations for different NLP tools and converters were reported (8 by the (co-)authors, including the Wiki-link corpus, 13 by people participating in our survey and 11 more of which we have heard). Several roll-out meetings and tutorials were held (e.g. in Leipzig and Prague in 2013) and are planned (e.g. at LREC 2014). Part IV - The NLP Interchange Format in Use. “Use Cases and Applications for NIF” and “Publication of Corpora using NIF” describe 8 concrete instances where NIF has been successfully used. One major contribution is the usage of NIF as the recommended RDF mapping in the Internationalization Tag Set (ITS) 2.0 W3C standard and the conversion algorithms from ITS to NIF and back. One outcome of the discussions in the standardization meetings and telephone conferences for ITS 2.0 was the conclusion that there was no alternative RDF format or vocabulary other than NIF with the required features to fulfill the working group charter. Five further uses of NIF are described for the Ontology of Linguistic Annotations (OLiA), the RDFaCE tool, the Tiger Corpus Navigator, the OntosFeeder and visualisations of NIF using the RelFinder tool. These 8 instances provide an implemented proof of concept of the features of NIF. “Publication of Corpora using NIF” starts by describing the conversion and hosting of the huge Google Wikilinks corpus, with 40 million annotations for 3 million web sites; the resulting RDF dump contains 477 million triples in a 5.6 GB compressed dump file in Turtle syntax. It also describes how NIF can be used to publish extracted facts from news feeds in the RDFLiveNews tool as Linked Data. Part V - Conclusions provides lessons learned for NIF, conclusions and an outlook on future work. Most of the contributions are already summarized above.
One particular aspect worth mentioning is the increasing number of NIF-formatted corpora for Named Entity Recognition (NER) that have come into existence after the publication of the main NIF paper, Integrating NLP using Linked Data, at ISWC 2013. These include the corpora converted by Steinmetz, Knuth and Sack for the NLP & DBpedia workshop and an OpenNLP-based CoNLL converter by Brümmer. Furthermore, we are aware of three LREC 2014 submissions that leverage NIF: NIF4OGGD - NLP Interchange Format for Open German Governmental Data, N^3 – A Collection of Datasets for Named Entity Recognition and Disambiguation in the NLP Interchange Format, and Global Intelligent Content: Active Curation of Language Resources using Linked Data, as well as an early implementation of a GATE-based NER/NEL evaluation framework by Dojchinovski and Kliegr. Further funding for the maintenance, interlinking and publication of Linguistic Linked Data, as well as support and improvements of NIF, is available via the expiring LOD2 EU project and the CSA EU project LIDER, which started in November 2013. Based on the evidence of successful adoption presented in this thesis, we can expect a decent to high chance of Linked Data technology, as well as the NIF standard, reaching critical mass in the field of Natural Language Processing and Language Resources.
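The offset-based URI scheme at the heart of NIF can be illustrated with a small sketch that mints `#char=begin,end` URIs for substrings and prints a few RDF triples in N-Triples form. The document URI and the example text are hypothetical, and the namespace and property names (`isString`, `beginIndex`, `endIndex`, `anchorOf`) are given as I recall them from the NIF Core vocabulary, so treat them as assumptions rather than a normative rendering:

```python
# Assumed NIF Core namespace and a hypothetical document URI.
NIF = "http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#"
DOC = "http://example.org/doc1"

text = "Dada emerged in Zurich."

def char_uri(begin, end):
    """Offset-based NIF-style URI for the substring text[begin:end]."""
    return f"{DOC}#char={begin},{end}"

def triples_for(begin, end):
    """Triples anchoring one annotated substring to its offsets."""
    uri = char_uri(begin, end)
    return [
        (uri, NIF + "beginIndex", str(begin)),
        (uri, NIF + "endIndex", str(end)),
        (uri, NIF + "anchorOf", text[begin:end]),
    ]

# The whole text is the context; "Zurich" is an annotated substring.
context = [(char_uri(0, len(text)), NIF + "isString", text)]
zurich = triples_for(16, 22)
for s, p, o in context + zurich:
    print(f'<{s}> <{p}> "{o}" .')
```

The point of the scheme is that any tool can reconstruct exactly which span of the original text an annotation refers to from the URI alone, which is what makes NIF output from different NLP tools interoperable.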
5

Touma, Rizkallah. "Computer-language based data prefetching techniques." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/665207.

Abstract:
Data prefetching has long been used as a technique to improve access times to persistent data. It is based on retrieving data records from persistent storage to main memory before the records are needed. Data prefetching has been applied to a wide variety of persistent storage systems, from file systems to Relational Database Management Systems and NoSQL databases, with the aim of reducing access times to the data maintained by the system and thus improving the execution times of the applications using this data. However, most existing solutions to data prefetching have been based on information that can be retrieved from the storage system itself, whether in the form of heuristics based on the data schema or data access patterns detected by monitoring access to the system. These approaches have multiple disadvantages in terms of the rigidity of the heuristics they use, the accuracy of the predictions they make and/or the time they need to make these predictions, a process often performed while the applications are accessing the data, which causes considerable overhead. In light of the above, this thesis proposes two novel approaches to data prefetching based on predictions made by analyzing the instructions and statements of the computer languages used to access persistent data. The proposed approaches take into consideration how the data is accessed by the higher-level applications, make accurate predictions and are performed without causing any additional overhead. The first of the proposed approaches analyzes instructions of applications written in object-oriented languages in order to prefetch data from Persistent Object Stores. The approach is based on static code analysis that is done prior to the application execution and hence does not add any overhead. It also includes various strategies to deal with cases that require runtime information unavailable prior to the execution of the application.
We integrate this analysis approach into an existing Persistent Object Store and run a series of extensive experiments to measure the improvement obtained by prefetching the objects predicted by the approach. The second approach analyzes statements and historic logs of the declarative query language SPARQL in order to prefetch data from RDF triplestores. The approach measures two types of similarity between SPARQL queries in order to detect recurring query patterns in the historic logs. Afterwards, it uses the detected patterns to predict subsequent queries and launches them before they are requested, prefetching the data they need. Our evaluation shows that the proposed approach makes high-accuracy predictions and can achieve a high cache hit rate when caching the results of the predicted queries.
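The query-similarity idea behind the second approach can be sketched with a toy log: queries are tokenized, compared by Jaccard similarity, and the historical successor of the closest past query is issued speculatively. The queries and the similarity threshold below are invented, and the thesis itself uses richer, SPARQL-aware similarity measures rather than plain token overlap:

```python
def tokens(query):
    """Crude tokenization of a SPARQL query string."""
    return set(query.replace("{", " ").replace("}", " ").split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b)

# Toy historic log of SPARQL queries (invented), in execution order.
log = [
    "SELECT ?p WHERE { ?s a :Person . ?s :name ?p }",
    "SELECT ?c WHERE { ?s a :Person . ?s :city ?c }",
]

def predict_next(current, history, threshold=0.5):
    """Prefetch candidate: the successor of the most similar past query."""
    best_i, best_sim = None, threshold
    for i, past in enumerate(history[:-1]):
        sim = jaccard(tokens(current), tokens(past))
        if sim > best_sim:
            best_i, best_sim = i, sim
    return history[best_i + 1] if best_i is not None else None

incoming = "SELECT ?p WHERE { ?s a :Person . ?s :name ?p }"
print(predict_next(incoming, log))
```

When the incoming query matches a logged one, the query that historically followed it is returned as the prefetch candidate; when nothing in the log is similar enough, no speculative query is issued, which is how such a scheme avoids wasted work.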
6

Jeelani, Ashfaq Ahmed. "A data layout descriptor language (LADEL)." [Johnson City, Tenn. : East Tennessee State University], 2001. http://etd-submit.etsu.edu/etd/theses/available/etd-0301101-022022/unrestricted/Thesis.pdf.

7

Hsu, Bo-June (Bo-June Paul). "Language Modeling for limited-data domains." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52796.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 99-109).
With the increasing focus of speech recognition and natural language processing applications on domains with limited amount of in-domain training data, enhanced system performance often relies on approaches involving model adaptation and combination. In such domains, language models are often constructed by interpolating component models trained from partially matched corpora. Instead of simple linear interpolation, we introduce a generalized linear interpolation technique that computes context-dependent mixture weights from features that correlate with the component confidence and relevance for each n-gram context. Since the n-grams from partially matched corpora may not be of equal relevance to the target domain, we propose an n-gram weighting scheme to adjust the component n-gram probabilities based on features derived from readily available corpus segmentation and metadata to de-emphasize out-of-domain n-grams. In scenarios without any matched data for a development set, we examine unsupervised and active learning techniques for tuning the interpolation and weighting parameters. Results on a lecture transcription task using the proposed generalized linear interpolation and n-gram weighting techniques yield up to a 1.4% absolute word error rate reduction over a linearly interpolated baseline language model. As more sophisticated models are only as useful as they are practical, we developed the MIT Language Modeling (MITLM) toolkit, designed for efficient iterative parameter optimization, and released it to the research community. With a compact vector-based n-gram data structure and optimized algorithm implementations, the toolkit not only improves the running time of common tasks by up to 40x, but also enables efficient parameter tuning for language modeling techniques that were previously deemed impractical.
by Bo-June (Paul) Hsu.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
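The context-dependent interpolation idea from the abstract above can be sketched briefly. This is an illustrative toy, not the thesis's actual estimator: here the per-context mixture weight is derived from a single hand-picked confidence feature (how often each component model has seen the context), whereas the real technique computes weights from several learned features.

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Bigram model with add-one smoothing over the training vocabulary."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(set(tokens))
    def prob(word, ctx):
        return (bigrams[(ctx, word)] + 1) / (unigrams[ctx] + vocab_size)
    return prob, unigrams

def interpolated_prob(word, ctx, comp_a, comp_b):
    """Context-dependent mixture: weight each component by how much
    evidence it has for this context (a stand-in confidence feature)."""
    (pa, counts_a), (pb, counts_b) = comp_a, comp_b
    fa = math.log1p(counts_a[ctx])
    fb = math.log1p(counts_b[ctx])
    lam = fa / (fa + fb) if fa + fb > 0 else 0.5  # context-dependent weight
    return lam * pa(word, ctx) + (1 - lam) * pb(word, ctx)
```

With a plain linear interpolation, `lam` would be a single tuned constant; making it a function of per-context features is what the generalized scheme adds.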
8

Kim, Edward Soo. "Data-mining natural language materials syntheses." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122075.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references.
Discovering, designing, and developing a novel material is an arduous task, involving countless hours of human effort and ingenuity. While some aspects of this process have been vastly accelerated by the advent of first-principles-based computational techniques and high throughput experimental methods, a vast ocean of untapped historical knowledge lies dormant in the scientific literature. Namely, the precise methods by which many inorganic compounds are synthesized are recorded only as text within journal articles. This thesis aims to realize the potential of this data for informing the syntheses of inorganic materials through the use of data-mining algorithms. Critically, the methods used and produced in this thesis are fully automated, thus maximizing the impact for accelerated synthesis planning by human researchers.
There are three primary objectives of this thesis: 1) aggregate and codify synthesis knowledge contained within scientific literature, 2) identify synthesis "driving factors" for different synthesis outcomes (e.g., phase selection) and 3) autonomously learn synthesis hypotheses from the literature and extend these hypotheses to predicted syntheses for novel materials. Towards the first goal of this thesis, a pipeline of algorithms is developed in order to extract and codify materials synthesis information from journal articles into a structured, machine readable format, analogous to existing databases for materials structures and properties. To efficiently guide the extraction of materials data, this pipeline leverages domain knowledge regarding the allowable relations between different types of information (e.g., concentrations often correspond to solutions).
Both unsupervised and supervised machine learning algorithms are also used to rapidly extract synthesis information from the literature. To examine the autonomous learning of driving factors for morphology selection during hydrothermal syntheses, TiO₂ nanotube formation is found to be correlated with NaOH concentrations and reaction temperatures, using models that are given no internal chemistry knowledge. Additionally, the capacity for transfer learning is shown by predicting phase symmetry in materials systems unseen by models during training, outperforming heuristic physically-motivated baseline strategies, and again with chemistry-agnostic models. These results suggest that synthesis parameters possess some intrinsic capability for predicting synthesis outcomes. The nature of this linkage between synthesis parameters and synthesis outcomes is then further explored by performing virtual synthesis parameter screening using generative models.
Deep neural networks (variational autoencoders) are trained to learn low-dimensional representations of synthesis routes on augmented datasets, created by aggregated synthesis information across materials with high structural similarity. This technique is validated by predicting ion-mediated polymorph selection effects in MnO₂, using only data from the literature (i.e., without knowledge of competing free energies). This method of synthesis parameter screening is then applied to suggest a new hypothesis for solvent-driven formation of the rare TiO₂ phase, brookite. To extend the capability of synthesis planning with literature-based generative models, a sequence-based conditional variational autoencoder (CVAE) neural network is developed. The CVAE allows a materials scientist to query the model for synthesis suggestions of arbitrary materials, including those that the model has not observed before.
In a demonstrative experiment, the CVAE suggests the correct precursors for literature-reported syntheses of two perovskite materials using training data published more than a decade prior to the target syntheses. Thus, the CVAE is used as an additional materials synthesis screening utility that is complementary to techniques driven by density functional theory calculations. Finally, this thesis provides a broad commentary on the status quo for the reporting of written materials synthesis methods, and suggests a new format which improves both human and machine readability. The thesis concludes with comments on promising future directions which may build upon the work described in this document.
by Edward Soo Kim.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Materials Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
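The extraction step described in the thesis abstract above (codifying synthesis text into machine-readable records) can be illustrated with a deliberately minimal sketch. The regexes and field names here are hypothetical stand-ins; the actual pipeline uses trained sequence-labeling models, not pattern matching alone.

```python
import re

# Toy patterns for two fields a synthesis paragraph often reports:
# reaction temperatures ("150 °C") and molar concentrations ("10 M").
TEMP_RE = re.compile(r"(\d+(?:\.\d+)?)\s*°\s*C")
CONC_RE = re.compile(r"(\d+(?:\.\d+)?)\s*M\b")

def extract_conditions(sentence):
    """Return a structured record of conditions found in free text."""
    return {
        "temperatures_C": [float(m) for m in TEMP_RE.findall(sentence)],
        "concentrations_M": [float(m) for m in CONC_RE.findall(sentence)],
    }
```

Even this crude version shows the target representation: free-running prose in, a structured record out, analogous to rows in a materials-property database.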
9

Gutti, Praveen. "Semistructured probabilistic object query language: a query language for semistructured probabilistic data." Lexington, Ky.: [University of Kentucky Libraries], 2007. http://hdl.handle.net/10225/701.

Full text
Abstract:
Thesis (M.S.)--University of Kentucky, 2007.
Title from document title page (viewed on April 2, 2008). Document formatted into pages; contains: vii, 42 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 39-40).
APA, Harvard, Vancouver, ISO, and other styles
10

Swain, Bradley Andrew. "Path understanding using geospatial natural language." [Pensacola, Fla.] : University of West Florida, 2009. http://purl.fcla.edu/fcla/etd/WFE0000182.

Full text
Abstract:
Thesis (M.S.)--University of West Florida, 2009.
Submitted to the Dept. of Computer Science. Title from title page of source document. Document formatted into pages; contains 45 pages. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Daga language"

1

Jesudason, Daniel. Daga ok 2 =: Daga reading book 2. Ukarumpa, E.H.P., Papua New Guinea: Summer Institute of Linguistics, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jesudason, Daniel. Daga ok 1 =: Daga reading book 1. Ukarumpa, E.H.P., Papua New Guinea: Summer Institute of Linguistics, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sani, Umar Mohammed. Tsabta da kare kai daga cuta. Zaria: Huda Huda Pub. Co., 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jesudason, Daniel. Agi anupen buka megawa =: Daga numeracy book. Ukarumpa via Lae, EHP, Papua New Guinea: Summer Institute of Linguistics, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Aesop. The mouse and the lion =: Ang daga at ang leon. Manila, Philippines: Lampara Publishing House, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ward, J. R. Amante eterno: La Hermandad de la Daga Negra. Madrid: Punto De Lectura, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chang, Monica. The mouse bride: A Chinese folktale = Ang nobya ng daga : Isang kuwentong-bayan mula sa tsina. Taipei, Taiwan: Yuan-Liou Pub. Co., 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nyström, Staffan. Ord för höjder och sluttningar i Daga härad: En studie över betydelsen hos två grupper terrängbetecknande appellativ och ortnamnselement. Uppsala: Ortnamnsarkivet i Uppsala, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

International Conference on Hausa Language (4th, 1987, Bayero University). Takardu a kan harshe da Adabi da Alʼadu Na Hausa: Wasu daga cikin takardun da aka kaddamar a Taron Kara wa Juna Ilimi na Hudu kan harshe da Al'adu na Hausa, wanda aka yi 20-24 ga Satumba, 1987. Kano: Cibiyar Nazarin Harsunan Nijeriya, Jami'ar Bayero, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pálsson, Hermann. Hrímfaxi: Hestanöfn frá fyrri tíð til vorra daga og litir íslenska hestsins = Islandpferdenamen und -farben, von der Mythologie zur Gegenwart = the names of Icelandic horses and their colours, from ancient time to the present day. [Vatnsdal]: Bókaútgáfan á Hofi, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Daga language"

1

Biswas, Mainak, Saif Rahaman, Satwik Kundu, Pawan Kumar Singh, and Ram Sarkar. "Spoken Language Identification of Indian Languages Using MFCC Features." In Studies in Big Data, 249–72. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9492-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Almgren, Margareta, Leire Beloki, Itziar Idiazabal, and Ibon Manterola. "Acquisition of Basque in successive bilingualism: Data from oral storytelling." In Language Contact and Contact Languages, 239–59. Amsterdam: John Benjamins Publishing Company, 2008. http://dx.doi.org/10.1075/hsm.7.14alm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Martin, Marcienne. "Internet: Language." In Encyclopedia of Big Data, 1–5. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-32001-4_120-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mubarak, Hamdy. "Crowdsourcing Speech and Language Data for Resource-Poor Languages." In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2017, 440–47. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64861-3_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kunii, Hideko S. "Data Definition Language." In Graph Data Model, 21–28. Tokyo: Springer Japan, 1990. http://dx.doi.org/10.1007/978-4-431-68114-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kunii, Hideko S. "Data Manipulation Language." In Graph Data Model, 29–39. Tokyo: Springer Japan, 1990. http://dx.doi.org/10.1007/978-4-431-68114-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Estrada, Raul, and Isaac Ruiz. "The Language: Scala." In Big Data SMACK, 19–40. Berkeley, CA: Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-2175-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Weik, Martin H. "data manipulation language." In Computer Science and Communications Dictionary, 352. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_4320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Weik, Martin H. "data definition language." In Computer Science and Communications Dictionary, 345. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_4246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Weik, Martin H. "data description language." In Computer Science and Communications Dictionary, 346. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_4254.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Daga language"

1

Ding, Bosheng, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. "DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Zhenpeng, Sheng Shen, Ziniu Hu, Xuan Lu, Qiaozhu Mei, and Xuanzhe Liu. "Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification (Extended Abstract)." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/649.

Full text
Abstract:
Sentiment classification typically relies on a large amount of labeled data. In practice, the availability of labels is highly imbalanced among different languages. To tackle this problem, cross-lingual sentiment classification approaches aim to transfer knowledge learned from one language that has abundant labeled examples (i.e., the source language, usually English) to another language with fewer labels (i.e., the target language). The source and the target languages are usually bridged through off-the-shelf machine translation tools. Through such a channel, cross-language sentiment patterns can be successfully learned from English and transferred into the target languages. This approach, however, often fails to capture sentiment knowledge specific to the target language. In this paper, we employ emojis, which are widely available in many languages, as a new channel to learn both the cross-language and the language-specific sentiment patterns. We propose a novel representation learning method that uses emoji prediction as an instrument to learn respective sentiment-aware representations for each language. The learned representations are then integrated to facilitate cross-lingual sentiment classification.
APA, Harvard, Vancouver, ISO, and other styles
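The emoji-as-instrument idea above rests on a simple self-supervised data construction: strip emojis out of raw posts and use them as prediction targets. A minimal sketch, with a toy three-emoji inventory standing in for the much larger set the paper would use:

```python
EMOJIS = {"😊", "😢", "😡"}  # toy inventory; real work uses many more

def make_emoji_examples(texts):
    """Turn raw posts into (text-without-emoji, emoji-label) pairs,
    the pretext-task signal for sentiment-aware representation learning."""
    pairs = []
    for t in texts:
        found = [ch for ch in t if ch in EMOJIS]
        if found:
            stripped = "".join(ch for ch in t if ch not in EMOJIS).strip()
            pairs.append((stripped, found[0]))  # first emoji as the label
    return pairs
```

Because the same emojis appear across languages, a model trained on this task in each language learns sentiment-bearing representations without any parallel text.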
3

Lian, Xin, Kshitij Jain, Jakub Truszkowski, Pascal Poupart, and Yaoliang Yu. "Unsupervised Multilingual Alignment using Wasserstein Barycenter." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/512.

Full text
Abstract:
We study unsupervised multilingual alignment, the problem of finding word-to-word translations between multiple languages without using any parallel data. One popular strategy is to reduce multilingual alignment to the much simplified bilingual setting, by picking one of the input languages as the pivot language that we transit through. However, it is well-known that transiting through a poorly chosen pivot language (such as English) may severely degrade the translation quality, since the assumed transitive relations among all pairs of languages may not be enforced in the training process. Instead of going through a rather arbitrarily chosen pivot language, we propose to use the Wasserstein barycenter as a more informative "mean" language: it encapsulates information from all languages and minimizes all pairwise transportation costs. We evaluate our method on standard benchmarks and demonstrate state-of-the-art performances.
APA, Harvard, Vancouver, ISO, and other styles
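The barycenter construction above can be made concrete with an entropy-regularized sketch on a shared support (the paper's setting, transporting between embedding spaces of different vocabularies, is richer). This follows the standard iterative Bregman projection scheme for entropic barycenters; it is a simplified illustration, not the authors' implementation:

```python
import numpy as np

def sinkhorn_barycenter(dists, cost, reg=0.1, weights=None, n_iter=300):
    """Entropy-regularized Wasserstein barycenter of distributions on a
    shared support, via Sinkhorn-style scaling iterations (a sketch)."""
    K = np.exp(-cost / reg)                  # Gibbs kernel from the cost matrix
    k, n = dists.shape
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights)
    u = np.ones((k, n))
    for _ in range(n_iter):
        v = dists / (u @ K)                  # scaling for each input measure
        Kv = v @ K.T
        b = np.exp(w @ np.log(Kv + 1e-300))  # weighted geometric mean
        u = b / Kv
    return b / b.sum()
```

Intuition check: the barycenter of a point mass at position 0 and a point mass at position 2, under squared-distance cost on the line, concentrates at the midpoint 1, exactly the "mean language" role the pivot-free method exploits.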
4

Thomas, Anitta, Aurona J. Gerber, and Alta van der Merwe. "A Conceptual Framework of Research on Visual Language Specification Languages." In 2019 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD). IEEE, 2019. http://dx.doi.org/10.1109/icabcd.2019.8851003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ming Lim, Tong, and Lee Sai Peck. "Extended Object Languages for The Extolware Persistent Framework." In InSITE 2004: Informing Science + IT Education Conference. Informing Science Institute, 2004. http://dx.doi.org/10.28945/2832.

Full text
Abstract:
Users interact with a database system through a set of database languages, and this makes designing database languages a very challenging task for a computer software engineer. A set of well-defined database languages must be easy to learn, easy to understand and powerful enough to capture the semantics of a problem domain. This paper discusses design issues of a proposed database language, namely Extended Object Language or EOL for short, for an Extolware Persistent Object framework (Lim & Lee, 1997, 1998, 1999, 2001, 2002a, 2002b, 2002c) that provides wrapping services for relational database systems and multidimensional database systems (DataPro, 1996; IBM Corp., 2001; Informix Software Inc., 2001a, 2001b). This research examines SQL3 (Fortier, 1999) and ODL/OQL (Cattell & Barry, 1999) with an overview of their language constructs and operators that support object-oriented requirements as stated in the Object Data Management Group (ODMG) object model. Next, the Extended Object Language (EOL) and its language constructs are examined. This is followed by a close examination of new database operators and constructs introduced into EOL. A design overview and evaluation of these database languages are presented. A summary of these languages is given at the end of the paper, with conclusions and further research plans.
APA, Harvard, Vancouver, ISO, and other styles
6

Brychcin, Tomas, and Miloslav Konopik. "Morphological based language models for inflectional languages." In 2011 IEEE 6th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). IEEE, 2011. http://dx.doi.org/10.1109/idaacs.2011.6072829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wu, Qianhui, Zijia Lin, Börje F. Karlsson, Biqing Huang, and Jian-Guang Lou. "UniTrans : Unifying Model Transfer and Data Transfer for Cross-Lingual Named Entity Recognition with Unlabeled Data." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/543.

Full text
Abstract:
Prior work in cross-lingual named entity recognition (NER) with no/little labeled data falls into two primary categories: model transfer- and data transfer-based methods. In this paper, we find that both method types can complement each other, in the sense that, the former can exploit context information via language-independent features but sees no task-specific information in the target language; while the latter generally generates pseudo target-language training data via translation but its exploitation of context information is weakened by inaccurate translations. Moreover, prior work rarely leverages unlabeled data in the target language, which can be effortlessly collected and potentially contains valuable information for improved results. To handle both problems, we propose a novel approach termed UniTrans to Unify both model and data Transfer for cross-lingual NER, and furthermore, leverage the available information from unlabeled target-language data via enhanced knowledge distillation. We evaluate our proposed UniTrans over 4 target languages on benchmark datasets. Our experimental results show that it substantially outperforms the existing state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
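The knowledge-distillation step UniTrans leverages on unlabeled target-language data boils down to a soft cross-entropy between teacher and student predictions. A minimal numeric sketch of that objective (an illustration of the generic distillation loss, not the paper's full training recipe):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Soft cross-entropy: the student matches the teacher's softened
    label distribution instead of hard gold labels."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()
```

The loss is minimized when the student reproduces the teacher's distribution, which is how pseudo-supervision from translated data and model transfer gets fused on unlabeled target-language text.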
8

Mabokela, Ronny. "Phone Clustering Methods for Multilingual Language Identification." In 9th International Conference on Natural Language Processing (NLP 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101421.

Full text
Abstract:
This paper proposes phoneme clustering methods for multilingual language identification (LID) on a mixed-language corpus. A one-pass multilingual automated speech recognition (ASR) system converts spoken utterances into occurrences of phone sequences. Hidden Markov models were employed to train multilingual acoustic models that handle multiple languages within an utterance. Two phoneme clustering methods were explored to derive the most appropriate phoneme similarities between the target languages. Ultimately a supervised machine learning technique was employed to learn the language transition of the phonotactic information and engage the support vector machine (SVM) models to classify phoneme occurrences. The system performance was evaluated on a mixed-language speech corpus for two South African languages (Sepedi and English) using the phone error rate (PER) and LID classification accuracy separately. We show that the multilingual ASR which feeds directly into the LID system has a direct impact on LID accuracy. Our proposed system has achieved acceptable phone recognition and classification accuracy in mixed-language speech and monolingual speech (i.e. either Sepedi or English). Data-driven and knowledge-driven phoneme clustering methods improve ASR and LID for code-switched speech. The data-driven method obtained a PER of 5.1% and LID classification accuracy of 94.5% when the acoustic models are trained with 64 Gaussian mixtures per state.
APA, Harvard, Vancouver, ISO, and other styles
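The phonotactic LID stage above (classify an utterance's language from its decoded phone sequence) can be sketched compactly. To stay dependency-free, this toy uses a nearest-centroid classifier over phone-bigram counts as a stand-in for the paper's SVM stage, and the phone sequences are invented for illustration:

```python
from collections import Counter

def phone_bigrams(phones):
    """Bag of adjacent phone pairs: the phonotactic feature vector."""
    return Counter(zip(phones, phones[1:]))

def cosine(c1, c2):
    shared = set(c1) & set(c2)
    num = sum(c1[k] * c2[k] for k in shared)
    den = (sum(v * v for v in c1.values()) ** 0.5 *
           sum(v * v for v in c2.values()) ** 0.5)
    return num / den if den else 0.0

class CentroidLID:
    """One bigram profile per language; utterances are assigned to the
    most similar profile (nearest-centroid stand-in for the SVM)."""
    def fit(self, labeled):  # labeled: {language: [phone sequence, ...]}
        self.profiles = {}
        for lang, seqs in labeled.items():
            profile = Counter()
            for seq in seqs:
                profile += phone_bigrams(seq)
            self.profiles[lang] = profile
        return self

    def predict(self, phones):
        bg = phone_bigrams(phones)
        return max(self.profiles, key=lambda l: cosine(bg, self.profiles[l]))
```

This captures the dependency the abstract reports: LID accuracy is bounded by the quality of the phone sequences the multilingual ASR front-end emits.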
9

Remnev, N. V. "NATIVE LANGUAGE IDENTIFICATION FOR RUSSIAN USING ERRORS TYPES." In International Conference on Computational Linguistics and Intellectual Technologies "Dialogue". Russian State University for the Humanities, 2020. http://dx.doi.org/10.28995/2075-7182-2020-19-1123-1133.

Full text
Abstract:
Native Language Identification (NLI) is the task of automatically recognizing an author's native language (L1) from texts written in a language that is non-native to the author. The NLI task has been studied in detail for English, and two shared tasks were conducted in 2013 and 2017, using TOEFL English essays and essay samples as data. A small number of works have also addressed the NLI problem for other languages; for Russian it was investigated by Ladygina (2017) and Remnev (2019). This paper discusses the use of approaches well established in the NLI Shared Task 2013 and 2017 competitions to recognize the author's native language, as well as to recognize the type of speaker: learners of Russian or Heritage Russian speakers. The native language identification task is also solved based on the types of errors specific to different languages. This study is data-driven and is made possible by the Russian Learner Corpus, developed by the Higher School of Economics (HSE) Learner Russian Research Group, on the basis of which the experiments are conducted.
APA, Harvard, Vancouver, ISO, and other styles
10

Ghosh, Aditi. "Representations of the Self and the Others in a Multilingual City: Hindi Speakers in Kolkata." In GLOCAL Conference on Asian Linguistic Anthropology 2019. The GLOCAL Unit, SOAS University of London, 2019. http://dx.doi.org/10.47298/cala2019.3-4.

Full text
Abstract:
This study examines the attitudes and representations of a select group of Hindi mother tongue speakers residing in Kolkata. Hindi is one of the two official languages of India, and Hindi mother tongue speakers are the numerically dominant language community in India, as per the census. Further, due to historical, political and socio-cultural reasons, enormous importance is attached to the language, to the extent that there is a widespread misrepresentation of the language as the national language of India. In this way, speakers of Hindi by no means form a minority in Indian contexts. However, as India is an extremely multilingual and diverse country, in many areas of the country other language speakers outnumber Hindi speakers, and in different states other languages have prestige, greater functional value and locally official status as well. Kolkata is one such place: the capital of West Bengal, a state where Bengali is the official language and the most widely spoken mother tongue. Hindi mother tongue speakers, therefore, are not the dominant majority here; however, their language still carries the symbolic load of a representative language of India. In this context, this study examines the opinions and attitudes of a section of long-term residents of Kolkata whose mother tongue is Hindi. The data used in this paper is derived from a large-scale survey conducted in Kolkata which included 153 Hindi speakers. The objective of the study is to elicit, through a structured interview, their attitudes towards their own language and community, and towards the other languages and communities in Kolkata, and to examine how they represent and construct the various communities in their responses. The study adopts qualitative methods of analysis.
The analysis shows that though there is largely an overt representation of harmony, there are indications of how the socio-cultural symbolic values attached to different languages are also extended to its speakers creating subtle social distances among language communities.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Daga language"

1

Schwartz, R., L. Nguyen, F. Kubala, G. Chou, G. Zavaliagkos, and J. Makhoul. On Using Written Language Training Data for Spoken Language Modeling. Fort Belvoir, VA: Defense Technical Information Center, January 1994. http://dx.doi.org/10.21236/ada460657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bjorklund, M., ed. The YANG 1.1 Data Modeling Language. RFC Editor, August 2016. http://dx.doi.org/10.17487/rfc7950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Duran, Randall E. Reengineering Using a Data Abstraction Based Specification Language. Fort Belvoir, VA: Defense Technical Information Center, September 1991. http://dx.doi.org/10.21236/ada254726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Blelloch, Guy E., Siddhartha Chatterjee, Jonathan C. Hardwick, Jay Sipelstein, and Marco Zagha. Implementation of a Portable Nested Data-Parallel Language. Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada270524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Charest, Marc Robert Joseph. Contra: A New Language for Task- and Data-Parallelism. Office of Scientific and Technical Information (OSTI), July 2020. http://dx.doi.org/10.2172/1641548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bradley, Gordon H. Network and Graph Markup Language (NaGML) Data File Formats. Fort Belvoir, VA: Defense Technical Information Center, July 2004. http://dx.doi.org/10.21236/ada425213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bylsma, Wesley. Creation of Virtual Reality Modeling Language (VRML) Geometry Data From Movie.BYU Data. Fort Belvoir, VA: Defense Technical Information Center, November 2004. http://dx.doi.org/10.21236/ada431165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bylsma, Wesley. Creation of Virtual Reality Modeling Language (VRML) Displacement Data from Par Data. Fort Belvoir, VA: Defense Technical Information Center, November 2004. http://dx.doi.org/10.21236/ada431395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bylsma, Wesley. Creation of Virtual Reality Modeling Language (VRML) Appearance Data From Geoclr Data. Fort Belvoir, VA: Defense Technical Information Center, November 2004. http://dx.doi.org/10.21236/ada432364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tuck, Russ. An Optimally Portable SIMD (Single-Instruction Multiple-Data) Programming Language. Fort Belvoir, VA: Defense Technical Information Center, October 1988. http://dx.doi.org/10.21236/ada201089.

Full text
APA, Harvard, Vancouver, ISO, and other styles