Academic literature on the topic 'Automated language translation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automated language translation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Dissertations / Theses on the topic "Automated language translation"

1

Marshall, Susan LaVonne. "Concept of Operations (CONOPS) for foreign language and speech translation technologies in a coalition military environment." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FMarshall.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pracner, Doni. "Translation and Transformation of Low Level Programs." PhD thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110184&source=NDLTD&language=en.

Full text
Abstract:
This thesis presents an approach for working with low-level source code that enables automatic restructuring and raising of the abstraction level of the programs. This makes it easier to understand the logic of the program, which in turn reduces development time. The process in this thesis was designed to be flexible and consists of several independent tools. This makes the process easy to adapt as needed, while at the same time the developed tools can be used in other processes. There are usually two basic steps. The first is translation to the WSL language, which has a great number of semantics-preserving program transformations. The second is the transformation of the translated WSL. Two tools were developed for translation: one that works with a subset of x86 assembly, and another that works with MicroJava bytecode. The result of the translation is a low-level program in WSL. The primary goal of this thesis was to fully automate the selection of the transformations. This enables users with no domain knowledge to use this process efficiently as needed. At the same time, the flexibility of the process enables experienced users to adapt it as needed or integrate it into other processes. The automation was achieved with a hill-climbing algorithm. Experiments run on several types of input programs showed that the results can be excellent. The fitness function used was a built-in metric that gives the "weight" of structures in a program. On input samples that had original high-level source code, the end-result metrics of the translated and transformed programs were comparable; on some samples the result was even better than the original, while on others it was somewhat more complex. When compared with the original low-level source code, the end results were always significantly improved.
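The transformation-selection loop described in this abstract can be illustrated with a short, generic sketch. This is not the author's WSL tooling: the toy "programs" here are plain strings, the fitness metric is simply string length, and the two peephole rewrites stand in for WSL's semantics-preserving transformations.

```python
def hill_climb(program, transformations, metric, max_iter=100):
    """Greedy hill climbing: at each step apply the transformation that most
    reduces the metric; stop at a local optimum or after max_iter steps."""
    current, best = program, metric(program)
    for _ in range(max_iter):
        candidates = [(metric(t(current)), t(current)) for t in transformations]
        score, candidate = min(candidates)
        if score >= best:
            break  # no transformation improves the metric: local optimum
        best, current = score, candidate
    return current, best

# Toy semantics-preserving rewrites: simplify 'NOT NOT' and '... AND TRUE'.
def drop_double_not(p):
    return p.replace("NOT NOT ", "", 1)

def drop_and_true(p):
    return p.replace(" AND TRUE", "", 1)

result, score = hill_climb("NOT NOT X AND TRUE",
                           [drop_double_not, drop_and_true], len)
# result == "X", score == 1
```

As in the thesis, the search is fully automatic: the user supplies only the program, the transformation set, and the fitness metric.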
3

Zogheib, Ali. "Automatic language translation /." Göteborg : IT-universitetet, Chalmers tekniska högskola och Göteborgs universitet, 2007. http://www.ituniv.se/w/index.php?option=com_itu_thesis&Itemid=319.

Full text
4

Davis, Paul C. "Stone Soup Translation: The Linked Automata Model." Connect to this title online, 2002. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1023806593.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2002. Title from first page of PDF file. Document formatted into pages; contains xvi, 306 p.; includes graphics. Includes abstract and vita. Advisor: Chris Brew, Dept. of Linguistics. Includes indexes. Includes bibliographical references (p. 284-293).
5

Clark, D. P. "Automatic translation of scene description languages." Thesis, Swansea University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636259.

Full text
Abstract:
This work presents a novel approach to translation, targeted in particular towards the translation of graphical scene description languages. A new approach to automatic language translation has been proposed. It is based on the concept of using an Independent Stylesheet for the specification of each language concerned in terms of lexis, syntax and semantics, and using a Generic Translator to perform translation between two languages according mainly to the corresponding stylesheets. This new approach is called Independent Stylesheet Language Translation (ISLT). The ISLT approach focuses on a class of translation problems where an accurate mapping between two languages cannot be fully accomplished; such a scenario is common among graphical scene description languages. Therefore, the aim of translation is to achieve a close semantic approximation of the source program in a target language, one that is syntactically correct with respect to a declared stylesheet of the target language. A generic software architecture for ISLT has been proposed, which consists of three main phases, namely Extraction, Transformation and Reconstruction. The Extraction phase involves the automatic generation of a parser based on the stylesheet of a source language; the parser is then used to decompose a program in the source language into an abstract program in the form of a Program Component List. The Transformation phase involves a series of iterative mapping processes, supported by a Generic Mapping Thesaurus, for transforming an abstract program related to the source language into one related to the target language. The Reconstruction phase utilises XSLT for the construction of a program in the target language based on an abstract program. A domain-specific implementation of ISLT, called Graphical Anamorphic Language Environment (GALE), has been developed for the translation of graphical scene description languages.
Three example languages have been considered, and the results have demonstrated the technical feasibility and scalability of the proposed approach. The ISLT approach does not suffer from the huge cost of direct-translation-based approaches or the restrictions on functionality and program content imposed by the use of an intermediary language. Furthermore, the semantic approximation in translation helps retain programmatic intent. It is believed that, in the long term, the ISLT approach is more cost-effective than the traditional approaches of direct translation and intermediate translation.
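The three ISLT phases read naturally as a small pipeline. The sketch below is a loose illustration under invented names (`extract`, `transform`, `reconstruct`, and the toy thesaurus), not the GALE implementation: it decomposes a line-oriented "scene language" into a component list, maps components through a thesaurus, drops what cannot be mapped (the semantic approximation), and serialises the result in a target syntax.

```python
def extract(source_text):
    """Extraction: decompose the source program into a Program Component List,
    here trivially one (keyword, argument) pair per line."""
    components = []
    for line in source_text.strip().splitlines():
        keyword, _, argument = line.strip().partition(" ")
        components.append((keyword, argument))
    return components

def transform(components, thesaurus):
    """Transformation: map source components to target components via a
    mapping thesaurus; unmappable components are dropped, so the result is
    a close approximation rather than an exact translation."""
    return [(thesaurus[k], a) for k, a in components if k in thesaurus]

def reconstruct(components):
    """Reconstruction: serialise the abstract program in the target syntax
    (the thesis uses XSLT for this step; a format string suffices here)."""
    return "\n".join("{0}({1});".format(k, a) for k, a in components)

thesaurus = {"sphere": "drawSphere", "cube": "drawCube"}
source = "sphere 3\ncube 2\nlight 1"          # 'light' has no mapping
target = reconstruct(transform(extract(source), thesaurus))
# target == "drawSphere(3);\ndrawCube(2);"
```

Note how the unmappable `light` component is silently approximated away while the output stays syntactically valid in the target language, mirroring the design goal stated in the abstract.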
6

Dürlich, Luise. "Automatic Recognition and Classification of Translation Errors in Human Translation." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420289.

Full text
Abstract:
Grading assignments is a time-consuming part of teaching translation. Automatic tools that facilitate this task would allow teachers of professional translation to focus more on other aspects of their job. Within Natural Language Processing, error recognition has not been studied for human translation in particular. This thesis is a first attempt at both error recognition and classification with both mono- and bilingual models. BERT, a pre-trained monolingual language model, and NuQE, a model adapted from the field of Quality Estimation for Machine Translation, are trained on a relatively small hand-annotated corpus of student translations. Due to the nature of the task, errors are quite rare in relation to correctly translated tokens in the corpus. To account for this, we train the models with both under- and oversampled data. While both models detect errors with moderate success, the NuQE model adapts very poorly to the classification setting. Overall, scores are quite low, which can be attributed to class imbalance and the small amount of training data, as well as some general concerns about the corpus annotations. However, we show that powerful monolingual language models can detect formal, lexical and translational errors with some success and that, depending on the model, simple under- and oversampling approaches can already help a great deal to avoid pure majority-class prediction.
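The under- and oversampling mentioned in this abstract is a standard remedy for class imbalance. A minimal sketch of random oversampling, assuming token-level examples and labels (the function name and data are illustrative, not taken from the thesis):

```python
import random

def oversample(examples, labels, minority, seed=0):
    """Random oversampling: duplicate minority-class examples until both
    classes occur equally often."""
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_n = len(labels) - len(minority_idx)
    xs, ys = list(examples), list(labels)
    while ys.count(minority) < majority_n:
        i = rng.choice(minority_idx)   # draw a minority example with replacement
        xs.append(examples[i])
        ys.append(labels[i])
    return xs, ys

# 8 correctly translated tokens vs. 2 erroneous ones.
tokens = ["tok%d" % i for i in range(10)]
labels = ["OK"] * 8 + ["ERROR"] * 2
bal_tokens, bal_labels = oversample(tokens, labels, "ERROR")
# bal_labels now contains 8 "OK" and 8 "ERROR" labels
```

Undersampling is the mirror image: instead of duplicating minority examples, it discards majority examples until the counts match, trading data for balance.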
7

Chatterjee, Rajen. "Automatic Post-Editing for Machine Translation." Doctoral thesis, Università degli studi di Trento, 2019. http://hdl.handle.net/11572/242495.

Full text
Abstract:
Automatic Post-Editing (APE) aims to correct systematic errors in a machine-translated text. This is primarily useful when the machine translation (MT) system is not accessible for improvement, leaving APE as a viable option to improve translation quality as a downstream task - which is the focus of this thesis. This field has received less attention compared to MT due to several reasons, which include: the limited availability of data to perform sound research, contrasting views reported by different researchers about the effectiveness of APE, and limited attention from the industry to use APE in current production pipelines. In this thesis, we perform a thorough investigation of APE as a downstream task in order to: i) understand its potential to improve translation quality; ii) advance the core technology - starting from classical methods to recent deep-learning based solutions; iii) cope with limited and sparse data; iv) better leverage multiple input sources; v) mitigate the task-specific problem of over-correction; vi) enhance neural decoding to leverage external knowledge; and vii) establish an online learning framework to handle data diversity in real time. All the above contributions are discussed across several chapters, and most of them are evaluated in the APE shared task organized each year at the Conference on Machine Translation. Our efforts in improving the technology resulted in the best system at the 2017 APE shared task, and our work on online learning received a distinguished paper award at the Italian Conference on Computational Linguistics. Overall, the outcomes and findings of our work have boosted interest among researchers and attracted industry to examine this technology to solve real-world problems.
8

Huang, X. "XTRA : The design and implementation of a fully automatic machine translation system." Thesis, University of Essex, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379393.

Full text
9

Averboch, Guillermo Andres. "A system for document analysis, translation, and automatic hypertext linking." Thesis, Virginia Tech, 1995. http://hdl.handle.net/10919/43809.

Full text
Abstract:
A digital library database is a heterogeneous collection of documents. Documents may become available in different formats (e.g., ASCII, SGML, typesetter languages) and they may have to be translated to a standard document representation scheme used by the digital library. This work focuses on the design of a framework that can be used to convert text documents in any format to equivalent documents in different formats and, in particular, to SGML (Standard Generalized Markup Language). In addition, the framework must be able to extract information about the analyzed documents, store that information in a permanent database, and construct hypertext links between documents and the information contained in that database, as well as between the documents themselves. For example, information about the author of a document could be extracted and stored in the database. A link can then be established between the document and the information about its author, and from there to other documents by the same author. These tasks must be performed without any human intervention, even at the risk of making a small number of mistakes. To accomplish these goals we developed a language called DELTO (Description Language for Textual Objects) that can be used to describe a document format. Given a description for a particular format, our system is able to extract information from documents in that format, to store part of that information in a permanent database, and to use that information in constructing an abstract representation of those documents that can be used to generate equivalent documents in different formats. The system that originated from this work is used for constructing the database of Envision, a Virginia Tech digital library research project.
Master of Science
10

Saers, Markus. "Translation as Linear Transduction : Models and Algorithms for Efficient Learning in Statistical Machine Translation." Doctoral thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-135704.

Full text
Abstract:
Automatic translation has seen tremendous progress in recent years, mainly thanks to statistical methods applied to large parallel corpora. Transductions represent a principled approach to modeling translation, but existing transduction classes are either not expressive enough to capture structural regularities between natural languages or too complex to support efficient statistical induction on a large scale. A common approach is to severely prune search over a relatively unrestricted space of transduction grammars. These restrictions are often applied at different stages in a pipeline, with the obvious drawback of committing to irrevocable decisions that should not have been made. In this thesis we will instead restrict the space of transduction grammars to a space that is less expressive, but can be efficiently searched. First, the class of linear transductions is defined and characterized. They are generated by linear transduction grammars, which represent the natural bilingual case of linear grammars, as well as the natural linear case of inversion transduction grammars (and higher order syntax-directed transduction grammars). They are recognized by zipper finite-state transducers, which are equivalent to finite-state automata with four tapes. By allowing this extra dimensionality, linear transductions can represent alignments that finite-state transductions cannot, and by keeping the mechanism free of auxiliary storage, they become much more efficient than inversion transductions. Secondly, we present an algorithm for parsing with linear transduction grammars that allows pruning. The pruning scheme imposes no restrictions a priori, but guides the search to potentially interesting parts of the search space in an informed and dynamic way. Being able to parse efficiently allows learning of stochastic linear transduction grammars through expectation maximization. 
All the above work would be for naught if linear transductions were too poor a reflection of the actual transduction between natural languages. We test this empirically by building systems based on the alignments imposed by the learned grammars. The conclusion is that stochastic linear inversion transduction grammars learned from observed data stand up well to the state of the art.
More sources