
Dissertations / Theses on the topic 'Translation technology'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Translation technology.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Babych, Bogdan. "Information extraction technology in machine translation." Thesis, University of Leeds, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.416402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Asare, Edmund K. "An Ethnographic Study of the Use of Translation Tools in a Translation Agency: Implications for Translation Tool Design." Kent State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=kent1310587792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bodie, Gary John. "A new kind of Beowulf: text, translation and technology." Ph.D. thesis, University of Oregon, 2007. http://proquest.umi.com/pqdweb?did=1453174591&sid=2&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2007.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 243-254). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
4

González Rubio, Jesús. "On the effective deployment of current machine translation technology." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/37888.

Full text
Abstract:
Machine translation is a fundamental technology that is gaining more importance each day in our multilingual society. Companies and individuals are turning their attention to machine translation since it dramatically cuts down their expenses on translation and interpreting. However, the output of current machine translation systems is still far from the quality of translations generated by human experts. The overall goal of this thesis is to narrow down this quality gap by developing new methodologies and tools that allow for a broader and more efficient deployment of machine translation technology.
We start by proposing a new technique to improve the quality of the translations generated by fully-automatic machine translation systems. The key insight of our approach is that different translation systems, implementing different approaches and technologies, can exhibit different strengths and limitations. Therefore, a proper combination of the outputs of such different systems has the potential to produce translations of improved quality. We present minimum Bayes' risk system combination, an automatic approach that detects the best parts of the candidate translations and combines them to generate a consensus translation that is optimal with respect to a particular performance metric. We thoroughly describe the formalization of our approach as a weighted ensemble of probability distributions and provide efficient algorithms to obtain the optimal consensus translation according to the widespread BLEU score. Empirical results show that the proposed approach is indeed able to generate statistically better translations than the provided candidates. Compared to other state-of-the-art system combination methods, our approach achieves similar performance while requiring no additional data beyond the candidate translations.
Then, we focus our attention on how to improve the utility of automatic translations for the end-user of the system. Since automatic translations are not perfect, a desirable feature of machine translation systems is the ability to predict at run-time the quality of the generated translations. Quality estimation is usually addressed as a regression problem where a quality score is predicted from a set of features that represents the translation. However, although the concept of translation quality is intuitively clear, there is no consensus on which features actually account for it. As a consequence, quality estimation systems for machine translation have to utilize a large number of weak features to predict translation quality. This involves several learning problems related to feature collinearity and ambiguity, and to the 'curse' of dimensionality. We address these challenges by adopting a two-step training methodology. First, a dimensionality reduction method computes, from the original features, the reduced set of features that best explains translation quality. Then, a prediction model is built from this reduced set to finally predict the quality score. We study various reduction methods previously used in the literature and propose two new ones based on statistical multivariate analysis techniques. More specifically, the proposed dimensionality reduction methods are based on partial least squares regression. The results of a thorough experimentation show that the quality estimation systems trained following the proposed two-step methodology obtain better prediction accuracy than systems trained on all the original features. Moreover, one of the proposed dimensionality reduction methods obtained the best prediction accuracy with only a fraction of the original features. This feature reduction ratio is important because it implies a dramatic reduction of the operating times of the quality estimation system.
An alternative use of current machine translation systems is to embed them within an interactive editing environment where the system and a human expert collaborate to generate error-free translations. This interactive machine translation approach has been shown to reduce the supervision effort of the user in comparison to the conventional decoupled post-editing approach. However, interactive machine translation considers the translation system as a passive agent in the interaction process. In other words, the system only suggests translations to the user, who then makes the necessary supervision decisions. As a result, the user is bound to exhaustively supervise every suggested translation. This passive approach ensures error-free translations, but it also demands a large amount of supervision effort from the user. Finally, we study different techniques to improve the productivity of current interactive machine translation systems. Specifically, we focus on the development of alternative approaches where the system becomes an active agent in the interaction process. We propose two different active approaches. On the one hand, we describe an active interaction approach where the system informs the user about the reliability of the suggested translations. The hope is that this information may help the user to locate translation errors, thus improving the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels and study the influence of such information on the productivity of an interactive machine translation system. Empirical results show that the proposed active interaction protocol is able to achieve a large reduction in supervision effort while still generating translations of very high quality. On the other hand, we study an active learning framework for interactive machine translation. In this case, the system is not only able to inform the user of which suggested translations should be supervised, but it is also able to learn from the user-supervised translations to improve its future suggestions. We develop a value-of-information criterion to select which automatic translations undergo user supervision. However, given its high computational complexity, in practice we study different selection strategies that approximate this optimal criterion. Results of a large-scale experimentation show that the proposed active learning framework is able to obtain better compromises between the quality of the generated translations and the human effort required to obtain them. Moreover, in comparison to a conventional interactive machine translation system, our proposal obtained translations of twice the quality with the same supervision effort.
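To make the two-step quality estimation methodology described above more concrete, here is a minimal, hypothetical scikit-learn sketch (not code from the thesis): partial least squares projects a large set of weak features onto a few latent components, and a separate regressor then predicts the quality score from that reduced space. Feature counts, data and model settings are invented placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 80))   # 80 weak, partly collinear features per translation (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500)   # stand-in quality scores

pipeline = Pipeline([
    ("reduce", PLSRegression(n_components=10)),   # step 1: project onto 10 latent components
    ("predict", SVR(kernel="rbf", C=1.0)),        # step 2: regress quality from the reduced space
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores)
```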
González Rubio, J. (2014). On the effective deployment of current machine translation technology [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888
TESIS
APA, Harvard, Vancouver, ISO, and other styles
5

Papadoudi, Dafni. "Conceptual metaphor in English popular technology and Greek translation." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/conceptual-metaphor-in-english-popular-technology-and-greek-translation(02c32fa4-98cc-4499-a329-371c1e5413c5).html.

Full text
Abstract:
This research project studies the metaphorical conceptualisation of technology in English popular technology magazines and in translation in the respective Greek editions. The focus is on the cognitive linguistic view of metaphor initially presented by Lakoff and Johnson (1980), on the metaphor identification procedure (Pragglejaz Group 2007), and on critical metaphor analysis (Charteris-Black 2004). The analysis of the English data identifies 14 main metaphors and 29 submetaphors which contribute to the structure of the target domain of technology. It distinguishes between conventional and novel metaphors, and common and original metaphorical expressions, motivated by correlations in experience between diverse source domains and by the widespread diffusion and impact of technology. The English data also provide insight into the functions of these metaphors in popular technology discourse and reveal evidence of thinking, values and attitudes about technology in the English language. The analysis of the Greek data examines similarities and differences in the conceptualisations between the English and Greek languages and cultures, and finds similarities in the categories of metaphors, frequency of and preference for metaphor use in the source and target languages, and in the majority of metaphorical expressions. Similarities are based on common experiences stemming from experiential co-occurrence or experiential similarity, and on translated experience. Differences are restricted to specific-level metaphors and expressions, motivated by alternative conceptualisations of terminology, cultural specificity and preferential conceptualisations. A set of translation strategies and a number of possible translation effects are also identified. These strategies and effects add to the possibilities of translation variations and the range of translation options, and are used to draw conclusions regarding the similarities and differences between the English and Greek languages and cultures. Consequently, through the identification and description of metaphors in technology magazines and in translation, the study attempts to highlight aspects of the culture of technology, which views technology as a cultural artefact and a producer of its own culture.
APA, Harvard, Vancouver, ISO, and other styles
6

Dürlich, Luise. "Automatic Recognition and Classification of Translation Errors in Human Translation." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420289.

Full text
Abstract:
Grading assignments is a time-consuming part of teaching translation. Automatic tools that facilitate this task would allow teachers of professional translation to focus more on other aspects of their job. Within Natural Language Processing, error recognition has not been studied for human translation in particular. This thesis is a first attempt at both error recognition and classification with both mono- and bilingual models. BERT – a pre-trained monolingual language model – and NuQE – a model adapted from the field of Quality Estimation for Machine Translation – are trained on a relatively small hand-annotated corpus of student translations. Due to the nature of the task, errors are quite rare in relation to correctly translated tokens in the corpus. To account for this, we train the models with both under- and oversampled data. While both models detect errors with moderate success, the NuQE model adapts very poorly to the classification setting. Overall, scores are quite low, which can be attributed to class imbalance and the small amount of training data, as well as some general concerns about the corpus annotations. However, we show that powerful monolingual language models can detect formal, lexical and translational errors with some success and that, depending on the model, simple under- and oversampling approaches can already help a great deal to avoid pure majority-class prediction.
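As a rough illustration of the oversampling idea mentioned above (this is not the thesis code, and the token data is invented), the following Python sketch duplicates randomly chosen error-labelled tokens until the two classes are the same size:

```python
import random

def oversample(tokens, labels, seed=0):
    """labels: 1 = error token, 0 = correctly translated token."""
    random.seed(seed)
    errors  = [(t, l) for t, l in zip(tokens, labels) if l == 1]
    correct = [(t, l) for t, l in zip(tokens, labels) if l == 0]
    # sample error tokens with replacement until both classes are balanced
    extra = [random.choice(errors) for _ in range(max(0, len(correct) - len(errors)))]
    data = correct + errors + extra
    random.shuffle(data)
    return [t for t, _ in data], [l for _, l in data]

tokens = ["the", "translation", "contains", "a", "mistak", "here"]
labels = [0, 0, 0, 0, 1, 0]
balanced_tokens, balanced_labels = oversample(tokens, labels)
print(balanced_labels.count(0), balanced_labels.count(1))   # 5 and 5
```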
APA, Harvard, Vancouver, ISO, and other styles
7

Seaton, Roger A. F. "Technology, knowledge translation and policy : conceptual frameworks and case-studies." Thesis, Cranfield University, 1997. http://hdl.handle.net/1826/3460.

Full text
Abstract:
The aim of this thesis is to develop, present and then demonstrate conceptual structures that link together an apparently heterogeneous range of research activity about technology in situations which are the subject of decision-making and policy formulation. Technology is considered to be knowledge as applied in organisations, communities, policy contexts and so on. It thus takes in not only the physical processes and output of the engineering sciences but also the new forms of organisation which use that output, their impacts on, and interactions with, people at large and with the so-called natural systems in which those people are embedded and with which they also interact. Methodologically, many of the research publications which this thesis incorporates approach technology-related issues and problems from the bottom up, from the most microscopic level of the individual human actor, the smallest feasible level of natural and engineered systems. It is an attempt to redress the top-down perspectives which dominate technology and science policy formulation and decision-making. This approach often requires research interaction at the level of the individual person or at the lowest level of physical and biological activity relevant to the issue at hand, and the appropriate techniques for such interaction are debated and demonstrated. The principles of "translation" or "mapping" which are capable of being applied to a range of interactions between different domains (physically engineered, diverse individuals and knowledge) are developed. The thesis then shows how the representation of responses of people to products and services has evolved and begins to focus on organisations as suppliers of those products and processes. Technology is articulated as knowledge in the context of technology transfer into organisations and the thesis shows how those ideas evolved into the concept of knowledge dynamics in organisations. The problems of interactions which involve bio-physical systems as well as engineered systems and people, and the issues of sustainability and policy-relevant research, are introduced. The nature of integrative interdisciplinary research about these issues is presented as a form of knowledge dynamics. The thesis shows how the concepts above can be used to distinguish between policy and decision relevant issues, and how they help to provide a conceptual framework within which the similarities and differences between knowledge policy in organisations and science research policy can be compared. Thus it is a series of interdisciplinary explorations into complex decision and policy relevant situations in which technology, in the form of knowledge and as the study of interaction between the designed physical world, people, organisations and natural systems, is a constant theme.
APA, Harvard, Vancouver, ISO, and other styles
8

Izwaini, Sattar. "Translation and the language of information technology : a corpus-based study of the vocabulary of information technology in English and its translation into Arabic and Swedish." Thesis, University of Manchester, 2004. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chau, Ting-Hey. "Translation Memory System Optimization : How to effectively implement translation memory system optimization." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169218.

Full text
Abstract:
Translation of technical manuals is expensive, especially when a large company needs to publish manuals for its whole product range in over 20 different languages. When a text segment (i.e. a phrase, sentence or paragraph) is manually translated, we would like to reuse these translated segments in future translation tasks. A translated segment is stored together with its corresponding source-language segment; such a pair, often called a language pair, is kept in a Translation Memory System. A language pair in a Translation Memory represents a Translation Entry, also known as a Translation Unit. During a translation, when a text segment in a source document matches a segment in the Translation Memory, the available target languages in the Translation Unit will not require a human translation; the previously translated segment can be inserted into the target document. Such functionality is provided in the single-source publishing software Skribenta, developed by Excosoft. Skribenta requires text segments in source documents to find an exact or a full match in the Translation Memory in order to apply a translation to a target language. A full match can only be achieved if a source segment is stored in a standardized form, which requires manual tagging of entities and of frequently recurring words such as model names and product numbers. This thesis investigates different ways to improve and optimize a Translation Memory System. One way was to aid users with the work of manually tagging entities, by developing heuristic algorithms for the problem of Named Entity Recognition (NER). The evaluation results from the developed heuristic algorithms were compared with the results from an off-the-shelf NER tool developed by Stanford. The results show that the developed heuristic algorithms achieve a higher F-measure than the Stanford NER and may be a good initial step to help Excosoft's users improve their Translation Memories.
Översättning av tekniska manualer är väldigt kostsamt, speciellt när större organisationer behöver publicera produktmanualer för hela deras utbud till över 20 olika språk. När en text (t.ex. en fras, mening, paragraf) har blivit översatt så vill vi kunna återanvända den översatta texten i framtida översättningsprojekt och dokument. De översatta texterna lagras i ett översättningsminne (Translation Memory). Varje text lagras i sitt källspråk tillsammans med dess översättning på ett annat språk, så kallat målspråk. Dessa utgör då ett språkpar i ett översättningsminnessystem (Translation Memory System). Ett språkpar som lagras i ett översättningsminne utgör en Translation Entry även kallat Translation Unit. Om man hittar en matchning när man söker på källspråket efter en given textsträng i översättningsminnet, får man upp översättningar på alla möjliga målspråk för den givna textsträngen. Dessa kan i sin tur sättas in i måldokumentet. En sådan funktionalitet erbjuds i publicerings programvaran Skribenta, som har utvecklats av Excosoft. För att utföra en översättning till ett målspråk kräver Skribenta att text i källspråket hittar en exakt matchning eller en s.k. full match i översättningsminnet. En full match kan bara uppnås om en text finns lagrad i standardform. Detta kräver manuell taggning av entiteter och ofta förekommande ord som modellnamn och produktnummer. I denna uppsats undersöker jag hur man effektivt implementerar en optimering i ett översättningsminnessystem, bland annat genom att underlätta den manuella taggningen av entitier. Detta har gjorts genom olika Heuristiker som angriper problemet med Named Entity Recognition (NER). Resultat från de utvecklade Heuristikerna har jämförts med resultatet från det NER-verktyg som har utvecklats av Stanford. Resultaten visar att de Heuristiker som jag utvecklat uppnår ett högre F-Measure jämfört med Stanford NER och kan därför vara ett bra inledande steg för att hjälpa Excosofts användare att förbättra deras översättningsminnen.
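The 'full match' requirement described in the English abstract above can be illustrated with a small, hypothetical Python sketch: source segments are normalised (model names and product numbers replaced by placeholders) before being looked up in the translation memory, so superficial differences do not block reuse. The patterns, segments and memory contents are invented and unrelated to Skribenta's actual implementation.

```python
import re

ENTITY_PATTERNS = [
    (re.compile(r"\b[A-Z]{2,}-\d+\b"), "<MODEL>"),   # e.g. "XC-90"
    (re.compile(r"\b\d{5,}\b"), "<PRODUCT_NO>"),     # e.g. "104233"
]

def normalise(segment: str) -> str:
    """Replace tagged entity types with placeholders (the 'standardized form')."""
    for pattern, placeholder in ENTITY_PATTERNS:
        segment = pattern.sub(placeholder, segment)
    return segment

# Translation memory: normalised source segment -> stored target-language segment
tm = {
    normalise("Install battery pack XC-90 before first use."):
        "Installera batteripaketet <MODEL> före första användning.",
}

query = "Install battery pack XC-70 before first use."
match = tm.get(normalise(query))   # full match despite the different model name
# In a real system the placeholder would now be filled back in with "XC-70".
print(match)
```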
APA, Harvard, Vancouver, ISO, and other styles
10

Clark, W. Andrew, Cher L. Cornett, and Peter M. Hriso. "Linking Art to Science: Digital Media as a Technology Translation Tool." Digital Commons @ East Tennessee State University, 2005. https://dc.etsu.edu/etsu-works/2524.

Full text
Abstract:
Technology translation can be achieved through the blending of the sciences and arts in the form of digital imagery. Digital animation and video can be utilized to portray molecular events where the mechanism of action is known but the process occurs at a sub-microscopic level. There needs to be a strong collaboration between scientific advisors and digital artists when creating the animation, such that the artistic interpretation of the molecular event conforms to the known and accepted confines of science. The finished animation may be used for information, education or persuasion as entrepreneurial biotechnical companies attempt to find markets, customers and investors interested in their inventions. Educational institutions with programs in the sciences, arts, digital media and medicine need to promote the interaction of students from these disciplines through cross-functional teams and courses. Solutions to problems developed by these teams tend to be broader and more comprehensive than those developed by more homogeneous teams.
APA, Harvard, Vancouver, ISO, and other styles
11

Zhang, Xuan. "Hardware-based text-to-braille translation." Thesis, Curtin University, 2007. http://hdl.handle.net/20.500.11937/1351.

Full text
Abstract:
Braille, as a special written method of communication for the blind, has been globally accepted for years. It gives blind people another chance to learn and communicate more efficiently with the rest of the world. It also makes possible the translation of printed languages into a written language which is recognisable for blind people. Recently, Braille has been experiencing decreasing popularity due to the use of alternative technologies, such as speech synthesis. However, as a form of literacy, Braille is still playing a significant role in the education of people with visual impairments. With the development of electronic technology, Braille turned out to be well suited to computer-aided production because of its coded forms. Software-based text-to-Braille translation has proved to be a successful solution in Assistive Technology (AT). However, the feasibility and advantages of algorithm reconfiguration based on hardware implementation have rarely been substantially discussed. A hardware-based translation system with algorithm reconfiguration is able to supply greater throughput than a software-based system. Further, it is also expected to serve as a single component integrated into a multi-functional Braille system on a chip.
Therefore, this thesis presents the development of a system for text-to-Braille translation implemented in hardware. Differing from most commercial methods, this translator is able to carry out the translation in hardware instead of using software. To find a particular translation algorithm which is suitable for a hardware-based solution, the history of, and previous contributions to, Braille translation are introduced and discussed. It is concluded that Markov systems, a formal language theory, were highly suitable for application to hardware-based Braille translation. Furthermore, the text-to-Braille algorithm is reconfigured to achieve parallel processing to accelerate the translation speed. Characteristics and advantages of Field Programmable Gate Arrays (FPGAs), and application of Very High Speed Integrated Circuit Hardware Description Language (VHDL), are introduced to explain how the translating algorithm can be transformed into hardware. Using a Xilinx hardware development platform, the algorithm for text-to-Braille translation is implemented and the structure of the translator is described hierarchically.
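As a purely software-side illustration of the rule-table style of translation referred to above (not the thesis's hardware design), the toy Python sketch below applies an ordered rule list, longer contractions first, to convert text to Unicode Braille. The rule set is a tiny, incomplete sample of Grade 2 English Braille and is meant only to show the principle.

```python
RULES = [                      # (input string, Braille output), longest rules first
    ("and", "⠯"), ("the", "⠮"), ("ing", "⠬"),
    ("a", "⠁"), ("b", "⠃"), ("d", "⠙"), ("e", "⠑"), ("g", "⠛"),
    ("h", "⠓"), ("i", "⠊"), ("n", "⠝"), ("r", "⠗"), ("t", "⠞"),
    (" ", "⠀"),
]

def to_braille(text: str) -> str:
    out, i = [], 0
    while i < len(text):
        for src, dst in RULES:            # first (longest) matching rule wins
            if text.startswith(src, i):
                out.append(dst)
                i += len(src)
                break
        else:
            i += 1                        # skip characters with no rule
    return "".join(out)

print(to_braille("reading the band"))
```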
APA, Harvard, Vancouver, ISO, and other styles
12

Tang, Chi Wai. "Portuguese-Chinese machine translation based on translation corresponding tree (TCT) annotation scheme." Thesis, University of Macau, 2006. http://umaclib3.umac.mo/record=b1446091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Limbu, Sireesh Haang. "Direct Speech to Speech Translation Using Machine Learning." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-439141.

Full text
Abstract:
Nowadays, most speech-to-speech translation applications and services use a three-step process. The first step is speech-to-text conversion using speech recognition. This is followed by text-to-text language translation, and finally the text is synthesized into speech. As the availability of data and computing power improved, each of these individual steps advanced over time. Although the progress was significant, there was always some error associated with the first step, arising from factors such as the tone of the speech, accent, etc. The error propagated further, and quite often deteriorated, as it went down the translation steps. This gave rise to ongoing research in direct speech-to-speech translation that does not rely on intermediate text translations. This project is inspired by Google's 'Translatotron: An End-to-End Speech-to-Speech Translation Model'. In line with the 'Translatotron' model, this thesis makes use of a simpler sequence-to-sequence (STS) encoder-decoder LSTM network using spectrograms as input to examine the possibility of direct language translation in audio form. Although the final results have inconsistencies and are not as efficient as the traditional speech-to-speech translation techniques that rely heavily on text translations, they serve as a promising platform for further research.
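For orientation only, here is a minimal Keras sketch of the kind of spectrogram-to-spectrogram encoder-decoder LSTM described above; the layer sizes, feature dimensions and loss are assumptions, not the configuration used in the thesis.

```python
from tensorflow.keras import layers, Model

N_MELS = 80    # mel-spectrogram channels (assumed)
LATENT = 256   # LSTM state size (assumed)

# Encoder: source-language spectrogram frames -> fixed-size state
enc_in = layers.Input(shape=(None, N_MELS))
_, h, c = layers.LSTM(LATENT, return_state=True)(enc_in)

# Decoder: previous target frames -> next target frames, initialised with the encoder state
dec_in = layers.Input(shape=(None, N_MELS))
dec_seq = layers.LSTM(LATENT, return_sequences=True)(dec_in, initial_state=[h, c])
frames = layers.Dense(N_MELS)(dec_seq)   # predicted target-language spectrogram frames

model = Model([enc_in, dec_in], frames)
model.compile(optimizer="adam", loss="mse")
model.summary()
```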
APA, Harvard, Vancouver, ISO, and other styles
14

Farzi, Reza. "Taming Translation Technology for L2 Writing: Documenting the Use of Free Online Translation Tools by ESL Students in a Writing Course." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34585.

Full text
Abstract:
The present study explored the use of translation technology in second language (L2) writing by English as a Second Language (ESL) students at the University level. The appropriate role of translation, and specifically translation technology, in L2 curricula has been the subject of theoretical and practical debate. In order to address knowledge gaps relevant to this debate, the present study sought to document students’ current use of translation technology, specifically free online translation (FOT) tools, and their opinions about these tools. The study’s mixed-methods design included video observations and questionnaires regarding FOT use completed by 19 university students enrolled in a high intermediate-level ESL course. Semi-structured follow-up interviews were conducted with the six participants who were observed using FOT tools extensively on the video recordings. Results showed that high intermediate-level ESL students have a primarily positive attitude toward FOT tools. In addition, the majority of students reported using such tools regularly, even though only about one third of the students were actually observed using the tools significantly in the video recordings. Results are discussed in the context of the ongoing debate over whether and how translation technology should be used in L2 classrooms.
APA, Harvard, Vancouver, ISO, and other styles
15

Odd, Jakobsson. "Pronoun translation between English and Icelandic." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-339069.

Full text
Abstract:
A problem in machine translation is how to handle pronouns, since languages use these differently, for example in anaphoric reference. This essay examines what happens to the English third person pronouns he, she, and it when translated into Icelandic. Parallel corpora were prepared by tokenisation, and subsequently the machine translation method of word alignment was applied to the corpus. The results show that when a pronoun is used to refer to something outside the sentence (extra-sentential reference), this gives rise to major problems. Another problem encountered was the difference in deictic strength between pronouns in English and Icelandic. One conclusion that can be drawn is that more research is required, since more reliable ways of handling pronouns in translation are needed.
Ett problem inom maskinöversättning är hur man ska hantera pronomen då språk använder dessa olika, exempelvis vid anaforisk referens. I den här uppsatsen undersöks vad som händer med engelska tredje persons pronomen he, she, och it när de har översatts till isländska. Parallella korpusar gjordes iordning genom tokenisering och därefter användes maskinöversättningsmetoden ordlänkning på korpusen. Resultaten visar att när pronomen används för att referera till något utanför satsen (extrasententiell) är det ett stort problem. Ett annat problem som påträffades gällde skillnader i deiktisk styrka mellan pronomen i engelska och isländska. En slutsats som kan dras är att mer forskning behövs då det behövs mer tillförlitliga sätt att hantera pronomen i översättningar.
APA, Harvard, Vancouver, ISO, and other styles
16

Sagemo, Oscar. "Estimating Post-Editing Effort with Translation Quality Features." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-299143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Zhang, Xuan. "Hardware-based text-to-braille translation." Curtin University of Technology, Department of Computer Engineering, 2007. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=17220.

Full text
Abstract:
Braille, as a special written method of communication for the blind, has been globally accepted for years. It gives blind people another chance to learn and communicate more efficiently with the rest of the world. It also makes possible the translation of printed languages into a written language which is recognisable for blind people. Recently, Braille has been experiencing decreasing popularity due to the use of alternative technologies, such as speech synthesis. However, as a form of literacy, Braille is still playing a significant role in the education of people with visual impairments. With the development of electronic technology, Braille turned out to be well suited to computer-aided production because of its coded forms. Software-based text-to-Braille translation has proved to be a successful solution in Assistive Technology (AT). However, the feasibility and advantages of algorithm reconfiguration based on hardware implementation have rarely been substantially discussed. A hardware-based translation system with algorithm reconfiguration is able to supply greater throughput than a software-based system. Further, it is also expected to serve as a single component integrated into a multi-functional Braille system on a chip.
Therefore, this thesis presents the development of a system for text-to-Braille translation implemented in hardware. Differing from most commercial methods, this translator is able to carry out the translation in hardware instead of using software. To find a particular translation algorithm which is suitable for a hardware-based solution, the history of, and previous contributions to, Braille translation are introduced and discussed. It is concluded that Markov systems, a formal language theory, were highly suitable for application to hardware-based Braille translation. Furthermore, the text-to-Braille algorithm is reconfigured to achieve parallel processing to accelerate the translation speed. Characteristics and advantages of Field Programmable Gate Arrays (FPGAs), and application of Very High Speed Integrated Circuit Hardware Description Language (VHDL), are introduced to explain how the translating algorithm can be transformed into hardware. Using a Xilinx hardware development platform, the algorithm for text-to-Braille translation is implemented and the structure of the translator is described hierarchically.
APA, Harvard, Vancouver, ISO, and other styles
18

De, Solà Morales Serra Oriol. "Health technology assessment as a framework for translation and valuation of innovation." Doctoral thesis, Universitat Rovira i Virgili, 2013. http://hdl.handle.net/10803/276155.

Full text
Abstract:
L’Avaluació de Tecnologies Sanitàries (ATS) pretén informar els decisors sobre els potencials impactes de la introducció de nova tecnologia en l’entorn sanitari. Tanmateix, es reconeixen diferents mancances en el procés. L’objectiu d’aquesta tesi és demostrar que l’ATS pot ser utilitzada abans (ex-ante) i després (ex-post) de la introducció d’una tecnologia i proposar una metodologia multidimensional que redueixi la incertesa en l’avaluació de la innovació en salut. Es presenten 3 articles (amb metodologia qualitativa) que demostren les limitacions de l’avaluació abans (ex-ante) i després (ex-post) de la introducció d’una tecnologia i la dificultat en l’atribució de l’impacte a la introducció d’una nova tecnologia. S’analitzen alhora models multidimensionals per a l’avaluació d’intervencions complexes, i es proposa una nova metodologia per a reduir la incertesa a l’hora d’introduir innovació. En conclusió, l’ATS és un procés vàlid per a l’avaluació ex-ante i ex-post, que pot ser superat per un model multidimensional que utilitza la mateixa base metodològica de l’ATS.
Health Technology Assessment (HTA) aims to inform decision makers about the potential impact of the introduction of new technology in the healthcare scenario. However, several deficiencies are recognized in the process. The objective of this thesis is to prove that HTA can be used before and after the introduction of technology and to propose a multidimensional methodology that reduces uncertainty in the assessment of innovation in healthcare. Three peer-reviewed publications show the limitations of ex-ante and ex-post evaluation and the limitations in attributing the impact to the introduction of a new technology. Several multidimensional evaluation models are analysed, and a new methodology to reduce the uncertainty in introducing innovation is proposed. In conclusion, despite its limitations, the HTA process is valid for the ex-ante and ex-post evaluation, but can also be improved by a multidimensional model that uses the same methodological bases of HTA.
La Evaluación de Tecnologías Sanitarias (ETS) pretende informar a los decisores sobre los potenciales impactos de la introducción de nueva tecnología en el panorama sanitario. Sin embargo, se reconocen diferentes carencias en el proceso. El objetivo de esta tesis es demostrar que la ETS puede ser utilizada antes y después de la introducción de una tecnología y proponer una metodología multidimensional que reduzca la incertidumbre en la evaluación de la innovación en salud. Se presentan 3 artículos que demuestran las limitaciones de la evaluación ex-ante y ex-post y las limitaciones en la atribución del impacto a la introducción de una nueva tecnología. Se analizan algunos modelos multidimensionales para la evaluación de intervenciones complejas, y se propone una nueva metodología para reducir la incertidumbre a la hora de introducir innovación. Se concluye que a pesar de sus limitaciones, la ETS es un proceso válido para la evaluación ex-ante y ex-post, pero que a la vez puede ser superado por un modelo multidimensional que utiliza la misma base metodológica que la ETS.
APA, Harvard, Vancouver, ISO, and other styles
19

Kim, Linda A. (Linda Anne) 1981. "The translation of media technology skills to community mobilization in youth programs." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17700.

Full text
Abstract:
Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2004.
Includes bibliographical references (p. 87-88).
Media is an important part of the political process, and alternative media is especially valuable for community organizing. Youth media programs can play a great role in the development of media technology as a community-building tool. This study proposes that a key way to bring youth into community mobilization efforts is through their interest in media and their deftness in learning how to use media tools. The combination of youth and media is powerful for communities and warrants discussion and development. This study begins the discussion first with an examination of media forms as a tool for grassroots movements. The potential of digital media is particularly emphasized. Secondly, a survey of youth media programs in the United States identifies ways that youth are already being served and provides the background for understanding how youth media programs equip young social activists in their local communities. Finally, case studies of two youth media programs investigate how, or if, the development of technical and creative skills around media translates to social and political mobilization, especially among youth. The use of media to bring youth into community mobilizing and to strengthen their efforts is promoted. The role of youth media programs in making this possible is put forth. This discussion of translating media technology skills to community action is significant, as it points to new directions in community organizing. As technology becomes more advanced and accessible to communities, digital media tools are increasingly significant in society and for groups who want to change society. The future of community development is closely connected to media and computer technology.
by Linda A. Kim.
M.C.P.
APA, Harvard, Vancouver, ISO, and other styles
20

Gupta, Rohit. "Use of language technology to improve matching and retrieval in translation memory." Thesis, University of Wolverhampton, 2016. http://hdl.handle.net/2436/620338.

Full text
Abstract:
Current Translation Memory (TM) tools lack semantic knowledge while matching. Most TM tools compute similarity at the string level, which does not take semantic aspects into account in matching. Therefore, semantically similar segments, which differ in surface form, are often not retrieved. In this thesis, we present five novel and efficient approaches to incorporating advanced semantic knowledge in translation memory matching and retrieval. Two efficient approaches which use a paraphrase database to improve translation memory matching and retrieval are presented. Both automatic and human evaluations are conducted. The results of both evaluations show that paraphrasing improves matching and retrieval. An approach based on manually designed features extracted using NLP systems and resources is presented, in which a Support Vector Machine (SVM) regression model is trained to calculate the similarity between two segments. The approach based on manually designed features did not retrieve better matches than simple edit distance. Two approaches for retrieving segments from a TM using deep learning are investigated. The first is based on Long Short-Term Memory (LSTM) networks, while the other is based on Tree-Structured Long Short-Term Memory (Tree-LSTM) networks. Eight different models using different datasets and settings are trained. The results are comparable to a baseline which uses simple edit distance.
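The 'simple edit distance' baseline mentioned above can be sketched in a few lines of Python: a fuzzy-match score is computed as one minus the Levenshtein distance divided by the length of the longer segment. The example segments and the exact scoring formula are illustrative assumptions, not taken from the thesis.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance with a rolling dynamic-programming row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def fuzzy_score(query: str, tm_source: str) -> float:
    return 1.0 - levenshtein(query, tm_source) / max(len(query), len(tm_source))

print(round(fuzzy_score("Press the red button.", "Press the green button."), 2))
```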
APA, Harvard, Vancouver, ISO, and other styles
21

Sadek, Gaafar. "Translation: Rights and Agency - A Public Policy Perspective for Knowledge, Technology and Globalization." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37362.

Full text
Abstract:
Copyright law relegates translation to a secondary, or derivative, status, which means that publishing a translation requires the permission of the rights holder of the original. This thesis argues for the timeliness of revisiting the translation right by analyzing its foundations and its implications from a transdisciplinary public policy perspective. This is done by first studying the historical and philosophical foundations of copyright law itself, where the translation right is housed, revealing questionable philosophical arguments and a colonial past that has created legal path dependencies. The thesis then undertakes an examination of the foundations of the translation right specifically, dubbed “the international issue par excellence,” which confirms the same pattern observed in the development of copyright law. Given the complete absence of the translator’s perspective from all international discussions on the translation right, copyright’s view of translation is then contrasted with recent scholarship in translation theory, with a special focus on the notion of agency(-ies), exposing the incompatibility of these views on translation, and highlighting the importance of including the perspective of translation studies in policies and laws related to translation. The last part of the thesis explores the present-day realities of knowledge societies, digital technologies, and globalization, in order to identify the role of translation today and in the future, while highlighting the tremendous gaps between the haves and the have-nots, and the necessity of recognizing the specificities of different societies. Knowledge is the new capital of the world, and the translation right is an impediment to the key role translation can potentially play in allowing societies to participate in the cycle of its consumption and regeneration. Digital technologies are powerful enablers that have allowed those who have leveraged and embraced them, such as the open movement and prosumers of all types, to transform the nature of their interactions with their environment macro- and microstructurally. This has also been reflected in the profession of translation, where collaborative projects are constantly initiated, while the nature of the translator’s work is changing to the point where one seriously doubts whether the provisions of the century-old translation right still apply to it. The discussion on globalization focuses on language in a globalized world, power relations between linguistic communities, and means of preserving linguistic diversity and heritage. The translation right, with its questionable foundations and outdated nature, is an impediment to the potential role of translation (as representative of the public interest) in the world, and must be revisited and at least reduced to the point of constituting balanced public policy. Social development, power relations and the necessity of differentiation (or “otherness”) are running themes throughout the work, which tries to balance theoretical discussions from various relevant disciplines with reliance on United Nations and other public policy research.
APA, Harvard, Vancouver, ISO, and other styles
22

SOUZA, RICARDO CORREIA DA SILVA E. "TECHNOLOGY AND THE GAP BETWEEN UNIVERSITY EDUCATION AND TRANSLATION PRACTICE: A PROPOSED APPROACH." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30788@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
Esta dissertação se insere no contexto das discussões contemporâneas sobre abordagens às tecnologias da tradução na formação universitária de tradutores profissionais. Este estudo parte de uma análise de um afastamento percebido entre ensino universitário e prática profissional no âmbito estrito da formação de tradutores para investigar o papel do tratamento dispensado às tecnologias da tradução nessa conjuntura. Para tanto, como base teórica e metodológica, este trabalho usa postulados dos Estudos da Tradução sobre o objeto de seus ramos descritivos e aplicados e os estudos desenvolvidos pela pedagogia da tradução em torno do que seriam competências tradutórias para, então, propor um modo de inserção da tecnologia no processo de formação de tradutores. É feita, então, uma inclusão das tecnologias da tradução no contexto dos modernos Estudos de Ciência e Tecnologia em seus aspectos descritivos - notadamente os trabalhos sobre Ciência, Tecnologia e Sociedade (CTS) e a Teoria Ator-Rede (TAR) - para verificar expectativas sobre processos e resultados da tradução por causa da presença intensiva da tecnologia. Por fim, com base nas inter-relações expostas pelo exame anterior, este estudo sugere uma forma de abordagem às tecnologias da tradução na universidade que contribua para valorizar a universidade, reduzir o afastamento percebido entre ensino universitário e prática da tradução e destacar as pesquisas acadêmicas da tradução como protagonistas no âmbito das Humanidades Digitais.
This work follows the contemporary discussions on how to approach translation technologies in the education of professional translators. The study starts with an analysis of a perceived distance between university education and professional practice in the strict framework of translator training and education, and then investigates the role that current approaches to translation technology play in that scenario. As a theoretical and methodological basis, this work uses postulates from Translation Studies on the object of their descriptive and applied branches, and from translation pedagogy studies on translational competences, to propose a mode of inserting technology into the translator education process. Then, the translation technologies are considered in the modern context of Science and Technology Studies in their descriptive aspects - notably the Science, Technology and Society (STS) studies, and the Actor-Network Theory (ANT) approach - in order to examine expectations about translation processes and results associated with the intensive presence of translation technologies. Finally, based on the interrelations found during these investigations, this study suggests an approach to translation technology at the university that would help to distinguish the university, reduce the perceived distance between university education and translation practice, and highlight academic translation research as a protagonist within the Digital Humanities.
APA, Harvard, Vancouver, ISO, and other styles
23

Buttacavoli, Matthew. "An Ethnographic Study of Translators and Technology." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1405431440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lindgren, Anna. "Semi-Automatic Translation of Medical Terms from English to Swedish : SNOMED CT in Translation." Thesis, Linköpings universitet, Medicinsk informatik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69736.

Full text
Abstract:
The Swedish National Board of Health and Welfare has been overseeing translations of the international clinical terminology SNOMED CT from English to Swedish. This study was performed to determine whether semi-automatic methods of translation could produce a satisfactory translation while requiring fewer resources than manual translation. Using the medical English-Swedish dictionary TermColl, translations of selected subsets of SNOMED CT were produced by means of translation memory and statistical machine translation. The resulting translations were evaluated via the BLEU score, using translations provided by the Swedish National Board of Health and Welfare as references, before being compared with each other. The results showed a strong advantage for statistical translation over the use of a translation memory; however, the overall translation results were far from satisfactory.
Den internationella kliniska terminologin SNOMED CT har översatts från engelska till svenska under ansvar av Socialstyrelsen. Den här studien utfördes för att påvisa om semiautomatiska översättningsmetoder skulle kunna utföra tillräckligt bra översättning med färre resurser än manuell översättning. Den engelsk-svenska medicinska ordlistan TermColl användes som bas för översättning av delmängder av SNOMED CT via översättnings­minne och genom statistisk översättning. Med Socialstyrelsens översättningar som referens poängsattes the semiautomatiska översättningarna via BLEU. Resultaten visade att statistisk översättning gav ett betydligt bättre resultat än översättning med översättningsminne, men över lag var resultaten alltför dåliga för att semiautomatisk översättning skulle kunna rekommenderas i detta fall.
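By way of illustration only, the snippet below shows how a BLEU score can be computed for a single candidate translation against a reference using NLTK; the example terms are invented and are not taken from SNOMED CT or from the study's data.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["förvärvad hjärnskada".split()]        # reference translation (invented example)
candidate = "förvärvad skada på hjärnan".split()    # semi-automatic output (invented example)

score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```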
APA, Harvard, Vancouver, ISO, and other styles
25

Joelsson, Jakob. "Translationese and Swedish-English Statistical Machine Translation." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-305199.

Full text
Abstract:
This thesis investigates how well machine-learned classifiers can identify translated text, and the effect translationese may have on Statistical Machine Translation, all in a Swedish-to-English (and reverse) context. Translationese is a term used to describe the dialect of a target language that is produced when a source text is translated. The systems trained for this thesis are SVM-based classifiers for identifying translationese, as well as translation and language models for Statistical Machine Translation. The classifiers successfully identified translationese in relation to non-translated text and, to some extent, also which source language the texts were translated from. In the SMT experiments, variation of the translation model was what affected the results the most in the BLEU evaluation. Systems configured with non-translated source text and translationese target text performed better than their reversed counterparts. The language model experiments showed that models trained on known translationese and classified translationese performed better than those trained on known non-translated text, though classified translationese did not perform as well as the known translationese. Ultimately, the thesis shows that translationese can be identified by machine-learned classifiers and may affect the results of SMT systems.
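A minimal sketch of the kind of machine-learned translationese classifier described above, using character n-gram features and a linear SVM in scikit-learn, is given below; the training sentences and labels are invented placeholders, not the corpora used in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = [
    "Detta är en mening som har översatts från engelska .",
    "Här skriver jag helt fritt på svenska .",
    "Denna text producerades genom översättning .",
    "Solen skiner över Uppsala idag .",
]
labels = [1, 0, 1, 0]   # 1 = translationese, 0 = originally authored text

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),   # character n-gram features
    LinearSVC(),                                               # linear SVM classifier
)
clf.fit(texts, labels)
print(clf.predict(["Denna mening är en översättning från engelska ."]))
```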
APA, Harvard, Vancouver, ISO, and other styles
26

Stymne, Sara. "Compound Processing for Phrase-Based Statistical Machine Translation." Licentiate thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Singh, Pawanbir. "Enabling late-stage translation of regenerative medicine based products." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/6060.

Full text
Abstract:
The primary aim of the thesis is to contribute to demonstrating how established and emerging science in the regenerative medicine (RM) domain can be translated into profitable commercial practice, and generate clinically- and cost-effective therapies. It achieves this by exploring and assessing underlying economics, including investment readiness and economic assessment, exploring regulatory and reimbursement frameworks, developing stem cell culture systems and assessing fit with clinical practice. The thesis is the first public domain wide-ranging analysis of business trends in the production, manufacturing and supply segments of the RM industry. It analyses the clinical potential of the domain as well as the translational and commercial challenges facing the industry. The industry is at a turning point as big pharmaceutical companies engage with RM in order to explore technologies as potential therapeutics and discovery tools. This unlocks the industry by confirming an exit path for RM based small- and medium-sized enterprises. Translation has come to be recognised as a core issue in the overall space and translation of regenerative therapies into the clinic is presently challenging, high-risk and expensive. This research addresses the question what are the mechanisms required to enable translation of emerging scientific knowledge into commercially viable clinical RM products? These mechanisms are particularly important as their creation involves and requires major investment decisions, which can determine the success or failure of RM developments and indeed of the companies concerned. The lack of well-established business models and the complexity of the domain suggested a conceptual approach drawing upon relevant literature from product and process development, applied business and revenue models, technological evolution and capital market ingenuity. The research was carried out in two phases. The first phase was concerned with identification of key challenges and mapping the overall industry emergence including emergence of related regulations to provide a context and framework for understanding the domain. Based on the emergence mapping a timeline of key parallel factors was identified, and their inherent connections explored to identify transforming events affecting and influencing multiple factors on the journey to clinical success within a business environment. This creates the reference model. The second phase was concerned with manufacturing a stem cell based therapeutic and applying health economic principles to determine available headroom for investment, cost of goods and return on investment, taking hearing disorders as a case exemplar, and exploring the behaviour of the net present value curve to identify key parameters affecting the economic positioning of this novel regime. A key output of the research is the investment readiness reference model. It integrates key RM business issues against reducing uncertainty and increasing value. The model argues that the complex nature of RM products means that the issues affecting industry emergence and development go well beyond the primarily scientific and technological concerns on which much current research focuses. The performance of RM firms ultimately hinges upon the successful clinical application of their developed products, the key step for creating and realising value, and their ability to deal with the fundamental business issues specific to the area. 
The framework deals with these business issues, which are investment & technology readiness, business models, organisational challenges, public policy and industry emergence. This thesis explores ideas that may bridge the chasm between the promise and reality of RM, i.e. mechanisms to enable late-stage translation of RM products. It links technological capability and business models for firms in the domain. Furthermore, it offers a unique perspective on the nature and characteristics of investment readiness and financial assessment, specifically identifying key parameters affecting economic positioning. The key contributions are therefore:
- New insights into the key challenges involved in realising the commercial potential of cell-based therapeutics.
- Technology road mapping to link fundamental enabling technological capability for developing RM products with robust business plans integrating strategy, technology development and the regulatory and reimbursement framework.
- A generic investment readiness reference model generated from the enabling technology, value and supply chain structures to identify key indicators and characteristics of industry readiness.
- A novel experimental programme demonstrating expansion, maintenance and differentiation of human embryonic stem cells by manual and automated methods.
- New insights into economic positioning by mapping net present value, and economic analysis by estimating available headroom, cost of goods and return on investment for a putative hearing therapeutic.
APA, Harvard, Vancouver, ISO, and other styles
28

Zapata, Rojas Julian. "Translators in the Loop: Observing and Analyzing the Translator Experience with Multimodal Interfaces for Interactive Translation Dictation Environment Design." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34978.

Full text
Abstract:
This thesis explores interactive translation dictation (ITD), a translation technique that involves interaction with multimodal interfaces equipped with voice recognition (VR) technology throughout the entire translation process. Its main objective is to provide a solid theoretical background and an analysis of empirical qualitative and quantitative data that demonstrate ITD’s advantages and challenges, with a view to integrating this technique into the translation profession. Many empirical studies in human-computer interaction have strived to demonstrate the efficiency of voice input versus keyboard input. Although it was implicit in the earliest works that voice input was expected to completely replace, rather than complement, text-input devices, it was soon proposed that VR often performed better in combination with other input modes. This study introduces multimodal interaction to translation, taking advantage of the unparalleled robustness of commercially available voice-and-touch-enabled multimodal interfaces such as touch-screen computers and tablets. To that end, an experiment was carried out with 14 professional English-to-French translators, who performed a translation task either with the physical prototype of an ITD environment or with a traditional keyboard-and-mouse environment. The hypothesis was that the prototypical environment would consistently provide translators with a better translator experience (TX) than the traditional environment, considering the translation process as a whole. The notion of TX as introduced in this work is defined as a translator’s perceptions of and responses to the use or anticipated use of a product, system or service. Both quantitative and qualitative data were collected using different methods, such as video and screen recording, input logging and semi-structured interviews. The combined analysis of objective and subjective usability measures suggests a better TX with the experimental environment versus the keyboard-and-mouse workstation, but significant challenges still need to be overcome for ITD to be fully integrated into the profession. Thus, this doctoral study provides a basis for better-grounded research in translator-computer interaction and translator-information interaction and, more specifically, for the design and development of an ITD environment, which is expected to support professional translators’ cognitive functions, performance and well-being. Lastly, this research aims to demonstrate that translation studies research, and translation technology in particular, needs to be more considerate of the translator, the TX, and the changing realities of the interaction between humans, computers and information in the twenty-first century.
APA, Harvard, Vancouver, ISO, and other styles
29

Busi, Gioia. "Changes in the translation industry: A prospectus on the near future of innovation in machine translation." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This thesis aims to analyze the supposed inevitability of a breakthrough in machine translation and the role that this breakthrough will play in the evolution of translation companies. It will analyze the changes that are happening and what repercussions those changes will have on the decisions made by students, professionals, agencies, and institutions over the next twenty years. This paper will be divided into three main sections: the first part will provide a background of today’s translation industry and consider the advent of machine translation in translation agencies and its continuing developments. In the second part of this essay, I will illustrate how I carried out my research inside Global Voices, a translation agency based in Stirling, Scotland, where I interned as a Project Manager in December 2018, to understand what use the agency makes of machine translation, also conveying my colleagues’ thoughts about it. The conclusion will recapitulate the topics approached, summarise the main findings of this study, and try to foresee what translators should expect from the future and how, in my opinion, they should deal with the changes it will bring.
APA, Harvard, Vancouver, ISO, and other styles
30

Beech, Daniel. "Managing volcanic hazards : an actor-network of technology and communication." Thesis, Aberystwyth University, 2017. http://hdl.handle.net/2160/a519ac39-c8ff-4c8d-be18-1cd8c1a61331.

Full text
Abstract:
The scientific and socio-political dimensions of volcanic hazards have been realigned since the eruption of Eyjafjallajökull in 2010, and have highlighted the need for volcanic activity to be studied from interdisciplinary perspectives. By focussing on communication, adaptability and resilience, this research explains the links between hazard management and social constructivism. The research question asks how Iceland’s networked approach to managing volcanoes can be understood by analysing the development of communication channels between human stakeholders and non-human technical devices and systems. Fieldwork was conducted in both Iceland and the UK, and a mixed methods approach was used to engage with the network. Research methods consisted of semi-structured interviews, participant observations and archival research. Findings explain the evolution of knowledge exchange, the value of technical innovation, and the need for interactions between local, national and international stakeholders. The study concludes that actors are increasingly empowered by the use of participatory technologies within hazard management, and the development of collaborative engagements between stakeholder communities from scientific and socio-political backgrounds. This research is relevant as it illustrates how the adaptive capacity of hazard networks can be expanded, potentially influencing the approaches that are taken to manage volcanic hazards in less economically developed contexts. In addition, this study can encourage continued interaction between scientists, at-risk communities and the aviation industry in multi-hazard environments such as Iceland.
APA, Harvard, Vancouver, ISO, and other styles
31

Saers, Markus. "Translation as Linear Transduction : Models and Algorithms for Efficient Learning in Statistical Machine Translation." Doctoral thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-135704.

Full text
Abstract:
Automatic translation has seen tremendous progress in recent years, mainly thanks to statistical methods applied to large parallel corpora. Transductions represent a principled approach to modeling translation, but existing transduction classes are either not expressive enough to capture structural regularities between natural languages or too complex to support efficient statistical induction on a large scale. A common approach is to severely prune search over a relatively unrestricted space of transduction grammars. These restrictions are often applied at different stages in a pipeline, with the obvious drawback of committing to irrevocable decisions that should not have been made. In this thesis we will instead restrict the space of transduction grammars to a space that is less expressive, but can be efficiently searched. First, the class of linear transductions is defined and characterized. They are generated by linear transduction grammars, which represent the natural bilingual case of linear grammars, as well as the natural linear case of inversion transduction grammars (and higher order syntax-directed transduction grammars). They are recognized by zipper finite-state transducers, which are equivalent to finite-state automata with four tapes. By allowing this extra dimensionality, linear transductions can represent alignments that finite-state transductions cannot, and by keeping the mechanism free of auxiliary storage, they become much more efficient than inversion transductions. Secondly, we present an algorithm for parsing with linear transduction grammars that allows pruning. The pruning scheme imposes no restrictions a priori, but guides the search to potentially interesting parts of the search space in an informed and dynamic way. Being able to parse efficiently allows learning of stochastic linear transduction grammars through expectation maximization. All the above work would be for naught if linear transductions were too poor a reflection of the actual transduction between natural languages. We test this empirically by building systems based on the alignments imposed by the learned grammars. The conclusion is that stochastic linear inversion transduction grammars learned from observed data stand up well to the state of the art.
APA, Harvard, Vancouver, ISO, and other styles
32

Ankrah, Samuel Nene. "The process and outcomes of technology translation by partnerships in the UK Faraday Partnership Initiative." Thesis, University of Leeds, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sullivan, Shannon. "Knowledge Translation of Economic Evaluations and Network Meta-Analyses." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32109.

Full text
Abstract:
Economic evaluations and network meta-analyses (NMAs) are complex methodologies. Increasing their transparency and accessibility could enhance confidence in the legitimacy of policy decisions informed by these analyses. Four systematic reviews were conducted to understand policymakers’ informational needs and to determine what guidance researchers have on how to present economic evaluations and NMAs. Qualitative interviews were conducted with Canadian policymakers, i.e., knowledge users, to understand barriers and facilitators to using and communicating economic evaluations and NMAs and with individuals in international health technology assessment organizations, i.e. knowledge producers, to explore current approaches to communicating economic evaluations and NMAs. A toolkit for NMAs and economic evaluations was proposed based on an integrated review of these findings and guided by the Knowledge-to-Action framework. Examples of tools were developed and applied to an economic evaluation and NMA of osteoporosis therapies. Systematic reviews and qualitative interviews found that communication approaches that provide robust content, identify contextual factors relevant to policy decisions and enhance clarity were valued. Twelve tools were proposed that enhance communication, education and access to resources for policymakers. Two of these tools were developed: Economic Guidance for Researchers and NMA Guidance for Researchers.
APA, Harvard, Vancouver, ISO, and other styles
34

Della, Corte Giuseppe. "Text and Speech Alignment Methods for Speech Translation Corpora Creation : Augmenting English LibriVox Recordings with Italian Textual Translations." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413064.

Full text
Abstract:
The recent rise of end-to-end speech translation models requires a new generation of parallel corpora, composed of a large amount of source language speech utterances aligned with their target language textual translations. We hereby show a pipeline and a set of methods to collect hundreds of hours of English audio-book recordings and align them with their Italian textual translations, using exclusively public domain resources gathered semi-automatically from the web. The pipeline consists of three main areas: text collection, bilingual text alignment, and forced alignment. For the text collection task, we show how to automatically find e-book titles in a target language by using machine translation, web information retrieval, and named entity recognition and translation techniques. For the bilingual text alignment task, we investigated three methods: the Gale–Church algorithm in conjunction with a small-size hand-crafted bilingual dictionary, the Gale–Church algorithm in conjunction with a bigger bilingual dictionary automatically inferred through statistical machine translation, and bilingual text alignment by computing the vector similarity of multilingual embeddings of concatenations of consecutive sentences. Our findings seem to indicate that the consecutive-sentence-embeddings similarity computation approach manages to improve the alignment of difficult sentences by indirectly performing sentence re-segmentation. For the forced alignment task, we give a theoretical overview of the preferred method depending on the properties of the text to be aligned with the audio, suggesting and using a TTS-DTW (text-to-speech and dynamic time warping) based approach in our pipeline. The result of our experiments is a publicly available multi-modal corpus composed of about 130 hours of English speech aligned with its Italian textual translation and split into 60,561 triplets of English audio, English transcript, and Italian textual translation. We also post-processed the corpus so as to extract 40 MFCC features from the audio segments and released them as a data set.
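The consecutive-sentence-embedding alignment idea lends itself to a compact sketch. This is not the thesis's actual pipeline: it assumes the sentence-transformers package and an off-the-shelf multilingual model (the model name is an assumption), and it simply matches each source sentence to the single target sentence, or concatenation of two consecutive target sentences, with the highest cosine similarity.

```python
# Minimal sketch of embedding-based bilingual alignment: each source sentence is
# matched to the target sentence, or span of consecutive target sentences, whose
# multilingual embedding is most similar to it. Model name and data are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed model

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def align(src_sents, tgt_sents, max_span=2):
    src_vecs = model.encode(src_sents)
    pairs = []
    for src, src_vec in zip(src_sents, src_vecs):
        best_span, best_score = None, -1.0
        for j in range(len(tgt_sents)):
            for k in range(1, max_span + 1):
                if j + k > len(tgt_sents):
                    break
                span = " ".join(tgt_sents[j:j + k])        # 1- or 2-sentence candidate
                score = cosine(src_vec, model.encode([span])[0])
                if score > best_score:
                    best_span, best_score = span, score
        pairs.append((src, best_span, round(best_score, 3)))
    return pairs
```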
APA, Harvard, Vancouver, ISO, and other styles
35

Mohamed, Asif, Paul Sujeet, and Vishnu Ullas. "Gauntlet-X1: Smart Glove System for American Sign Language translation using Hand Activity Recognition." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428743.

Full text
Abstract:
The most common forms of Human Computer Interaction (HCI) devices these days, like the keyboard, mouse and touch interfaces, are limited to working on a two-dimensional (2-D) surface, and thus do not provide complete freedom of accessibility using our hands. With the vast number of gestures a hand can perform, including the different combinations of motion of fingers, wrist and elbow, we can make accessibility and interaction with the digital environment much simpler, without restrictions to the physical surface. Fortunately, this is possible due to advancements in Microelectromechanical systems (MEMS) manufacturing of sensors, reducing the size of a sensor to the size of a fingernail. In this thesis we document the design and development of a smart glove system comprising Inertial Measurement Unit (IMU) sensors that recognize hand activity/gestures using combinations of neural networks and deep learning techniques such as Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). This peripheral device is named the Gauntlet-X1, X1 denoting the current prototype version of the device. The system captures IMU data and interfaces with the host server. In order to demonstrate this prototype as a proof of concept, we integrate it with Android mobile applications based on 3-D interactivity, such as American Sign Language (ASL) and Augmented Reality (AR)/Virtual Reality (VR) applications, and it can be extended to further the use of HCI technology.
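As a rough illustration of the recognition stage, the sketch below defines an LSTM classifier over fixed-length windows of IMU readings, with a linear layer producing gesture-class logits. The channel count, window length and number of classes are assumptions for illustration, not the Gauntlet-X1's actual configuration.

```python
# Minimal sketch of an IMU-sequence gesture classifier: an LSTM reads a window
# of sensor readings and a linear head maps the final hidden state to gesture
# classes. Sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    def __init__(self, n_channels=30, hidden=64, n_classes=26):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, time_steps, n_channels)
        _, (h_n, _) = self.lstm(x)      # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])       # logits over gesture classes

model = GestureLSTM()
window = torch.randn(8, 100, 30)        # 8 windows of 100 time steps, 30 IMU channels
predicted_class = model(window).argmax(dim=1)
print(predicted_class.shape)            # torch.Size([8])
```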
APA, Harvard, Vancouver, ISO, and other styles
36

Syahrina, Alvi. "Online Machine Translator System and Result Comparison." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20869.

Full text
Abstract:
Translation from one human language to another has long drawn on the growing capabilities of computers. There are many machine translators nowadays, each adopting a different machine translation approach. This thesis presents the distinction between two selected machine translation approaches, statistical machine translation (SMT) and hybrid machine translation (HMT). The research focuses on evaluating two machine translators of different approaches through both textual studies and an evaluation experiment. The result of this research is an evaluation of the translator systems as well as of their translation output, which is hoped to add to the recorded history of machine translators.
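A comparison of this kind can be sketched by scoring each system's output against the same reference translations with an automatic metric. The snippet assumes the sacrebleu package is available; the sentences and system labels are placeholders, not data from the study.

```python
# Score two machine translation systems against the same references with BLEU.
# Assumes sacrebleu is installed; all sentences here are toy placeholders.
import sacrebleu

references = [["the cat sits on the mat", "machine translation is hard"]]
system_a   = ["the cat sits on the mat", "machine translation is difficult"]   # e.g. SMT output
system_b   = ["a cat is sitting on the mat", "machine translation is hard"]    # e.g. HMT output

for name, hypotheses in (("system A", system_a), ("system B", system_b)):
    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"{name}: BLEU = {bleu.score:.1f}")
```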
Programme: Bachelor's programme in informatics (Kandidatutbildning i informatik)
APA, Harvard, Vancouver, ISO, and other styles
37

Lysaght, Andrew Christopher. "Characterization of cochlear transcription, translation and energy extraction in aging and noise-induced pathology." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95864.

Full text
Abstract:
Thesis: Ph. D., Harvard-MIT Program in Health Sciences and Technology, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 147-163).
Success in otologic practice is currently limited by the diagnostic tools and treatment options available to address an individual's specific presentation of hearing loss. This limitation results from insufficient characterization of the inner ear's biochemical environment as well as physical hurdles associated with accessing inner ear tissues. The encapsulation of the hearing organ within a bony shell and delicate nature of its tissues make standard tissue biopsy techniques impossible and leave many imaging methods impractical. This thesis sought to approach these clinical limitations in two ways: (1) performing novel transcriptional and translational characterizations of inner ear tissues and (2) development of a novel technique to access and communicate diagnostic information from within the inner ear. The first part of this thesis employs whole transcriptome shotgun sequencing to study murine inner ear transcriptional activity in young, healthy animals as well as changes associated with organ aging and noise-induced auditory neuropathy, an important mechanism of hearing impairment in humans. Knowledge of the inner ear's transcriptional behavior (Part I) is coupled with novel translational insights provided by high-throughput tandem mass-spectrometry (Part III) studies of human inner ear fluids obtained from healthy and pathologic populations. These studies illuminate homeostatic mechanisms employed by the highly specialized inner ear tissues, providing a critical knowledge-base for inner ear scientists and pharmacologists, and identify important expression-level changes which occur during the onset and progression of inner ear pathologies. While these high-throughput studies offer the powerful ability to gain a wealth of knowledge into which genes are active within the inner ear, functional assessment of the specific role these genes play must be assessed in a more focused manner. Phenotypic characterization of mice with specific genetic mutations (Part II) has been performed to provide critical insight into the specific role Fgf23 plays in development and maintenance of the auditory system. The second arm of this thesis seeks to provide clinical practicality to the above work by developing a method to safely access the inner ear environment to gather and communicate diagnostic information (Part IV). A guinea pig model was utilized to develop an approach to insert microelectrodes into the fluid spaces of the inner ear in order to harness and monitor the natural electrochemical gradient of the organ. The useful energy extracted from this "biological battery" was used to power a combined microchip/radio transmitter capable of performing voltage-sensing operations within endolymph and wirelessly relaying this information to an external receiver. This study was the first to utilize a mammalian electrochemical potential to power an electronic device. By performing this task while preserving the integrity of the hearing organ this work provides the first, critical proof-of-concept demonstration toward clinically-applicable sensing and therapeutic devices powered by the inner ear. Further refinement of this technique into a long-term, fully-implantable device will enable previously impossible longitudinal studies of organ behavior in awake, behaving subjects and the incorporation of sensing modalities into current inner ear prostheses to monitor biochemical changes and maximize patient benefits.
by Andrew Christopher Lysaght.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
38

Van, Vuuren Tania. "Proposing guidelines for a South African textbook of translation." Thesis, Bloemfontein : Central University of Technology, Free State, 2013. http://hdl.handle.net/11462/205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Resman, Daniel. "Using Alignment Methods to Reduce Translation of Changes in Structured Information." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79363.

Full text
Abstract:
In this thesis I present an unsupervised approach, which can be made supervised, to reduce translation of changes in structured information stored in XML documents. By combining a sentence boundary detection algorithm and a sentence alignment algorithm, a translation memory is created from the old version of the information in different languages. This translation memory can then be used to translate sentences that have not changed. The structure of the XML is used to improve the performance. Two implementations were made and evaluated in three steps: sentence boundary detection, sentence alignment and correspondence. The last step evaluates the use of the translation memory on a new version in the source language. The second implementation was an improvement, using the results of the evaluation of the first implementation. The evaluation was done using 100 XML documents in English, German and Swedish. There was a significant difference between the results of the implementations in the first two steps. The errors were reduced by each step, and in the last step there were only three errors by the first implementation and no errors by the second implementation. The evaluation of the implementations showed that it was possible to reduce the text that requires re-translation by about 80%. Similar information can be and is used by translators to achieve higher productivity, but this thesis shows that it is possible to reduce translation even before the texts reach the translators.
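The reuse step can be sketched in a few lines: once the old source and target versions have been sentence-split and aligned, unchanged sentences are looked up in the resulting translation memory and only the remainder is sent for translation. The example data is invented, and a 1:1 sentence alignment is assumed for simplicity.

```python
# Sketch of translation-memory reuse: build a memory from the aligned sentences
# of the old source/target versions, then reuse it so that unchanged sentences
# in the new source version need no re-translation. Example data is invented.
def build_translation_memory(old_source_sents, old_target_sents):
    """Map each old source sentence to its aligned translation (1:1 alignment assumed)."""
    return dict(zip(old_source_sents, old_target_sents))

def translate_with_memory(new_source_sents, memory):
    """Reuse stored translations for unchanged sentences; flag the rest for translators."""
    reused, needs_translation = [], []
    for sent in new_source_sents:
        if sent in memory:
            reused.append((sent, memory[sent]))
        else:
            needs_translation.append(sent)
    return reused, needs_translation

memory = build_translation_memory(
    ["Press the start button.", "The device restarts."],
    ["Tryck på startknappen.", "Enheten startar om."],
)
reused, todo = translate_with_memory(
    ["Press the start button.", "The device restarts automatically."], memory)
print(len(reused), "sentences reused;", len(todo), "sentences still need translation")
```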
APA, Harvard, Vancouver, ISO, and other styles
40

Ahmady, Tobias, and Rosmar Sander Klein. "Translation of keywords between English and Swedish." Thesis, KTH, Data- och elektroteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146637.

Full text
Abstract:
In this project, we have investigated how to perform rule-based machine translation of sets of keywords between two languages. The goal was to translate an input set, which contains one or more keywords in a source language, to a corresponding set of keywords, with the same number of elements, in the target language. However, some words in the source language may have several senses and may be translated to several, or no, words in the target language. If ambiguous translations occur, the best translation of the keyword should be chosen with respect to the context. In traditional machine translation, a word's context is determined by a phrase or sentences where the word occurs. In this project, the set of keywords represents the context. By investigating traditional approaches to machine translation (MT), we designed and described models for the specific purpose of keyword translation. We have proposed a solution based on direct translation for translating keywords between English and Swedish. In the proposed solution, we also introduced a simple graph-based model for solving ambiguous translations.
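A much-simplified version of the disambiguation idea is sketched below: when a keyword has several candidate Swedish translations, the candidate whose known co-occurring terms overlap most with the translations of the other keywords in the set is chosen. This overlap heuristic stands in for the thesis's graph-based model, and the toy lexicon and co-occurrence lists are invented.

```python
# Toy keyword disambiguation: pick, for each ambiguous keyword, the candidate
# translation that best fits the translations of the other keywords in the set.
LEXICON = {
    "bank":  ["bank", "strand"],      # financial institution vs. riverbank
    "loan":  ["lån"],
    "river": ["flod"],
}
# Terms each candidate translation tends to co-occur with (invented data).
COOCCUR = {
    "bank":   {"lån", "ränta"},
    "strand": {"flod", "vatten"},
    "lån":    {"bank"},
    "flod":   {"strand"},
}

def translate_keywords(keywords):
    translations = []
    for kw in keywords:
        candidates = LEXICON.get(kw, [kw])
        # Context = every candidate translation of the *other* keywords in the set.
        context = {t for other in keywords if other != kw for t in LEXICON.get(other, [])}
        best = max(candidates, key=lambda c: len(COOCCUR.get(c, set()) & context))
        translations.append(best)
    return translations

print(translate_keywords(["bank", "loan"]))    # ['bank', 'lån']
print(translate_keywords(["bank", "river"]))   # ['strand', 'flod']
```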
APA, Harvard, Vancouver, ISO, and other styles
41

Guan, Xin. "Context-based translation of constant concept values in e-business." Thesis, University of Macau, 2008. http://umaclib3.umac.mo/record=b1780719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Zucca, Matthew. "Defect At Manitoulin Permaculture." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39827.

Full text
Abstract:
The purpose of this research was to describe technopolitics on a permaculture farm. The literature on technology and technopolitics is beginning to take shape around a series of unanswered questions regarding the role and potential of objects in social life. Using an actor-network theory framework and relying on Callon's (1984) sociology of translation and its principles, I identified the concept of defect and tracked how it was mobilized at Manitoulin Permaculture through participant observation across two summers. The concept of defect incorporated both sociopolitical as well as technological factors. The makers at Manitoulin Permaculture made sense of their choice to defect to a new life at Manitoulin Permaculture. In addition, their technologies, both new and old, became defective, animating their community in new and unforeseen ways.
APA, Harvard, Vancouver, ISO, and other styles
43

Strange, Heather. "Non-invasive prenatal diagnosis and testing : perspectives on the emergence and translation of a new prenatal testing technology." Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/90887/.

Full text
Abstract:
This thesis presents findings from a qualitative study of the emergence and early clinical translation of non-invasive prenatal diagnosis (NIPD) in the UK. Drawing from interviews with a range of experts and users I track the enrolment and translation of this new prenatal testing technology across a variety of clinical and social spaces. I show how encounters with NIPD prompt deep critical examination of the moral, social and political implications - not only of the technology - but of the established clinical practices (routine and specialised prenatal testing) and specific policy contexts (prenatal screening programmes) within which NIPD has begun to sediment. I explore how, as NIPD advances at a rapid pace and emerges within a culturally and politically complex context, the technology both aligns with and disrupts routine practices of prenatal screening and diagnosis. I show how, as the technology divides into two major strands - NIPD and NIPT - at an early stage of development, and before becoming naturalised/normalised within the clinic, scientists, clinicians and policy makers attempt to pin down, define and ‘fix’ the technology, drawing upon and engaging in substantive practices of division, categorisation and classification. I explore ambiguities present within such accounts, highlighting dissenting voices and moments of problematisation, and following this, I show how the ‘troubling’ of boundaries prompts much examination of ethical and social concerns. As a location within which interviewees explored more contentious issues, I show how abortion emerged as central to the discussion of NIPD. I proceed to show how institutionalised, professionalised bioethical debate dominates mainstream discourse, and I explain how a particular construction of the informed, individual choice-maker is mobilised in order to locate moral and political responsibility for testing in the hands of individuals, and to distance political/organisational structures from entanglement with problematic concerns. I explore how clinicians and patients respond to this positioning in multiple ways, both assimilating and questioning the mainstream discourse of ‘informed choice’. In conclusion, I highlight the broader (bio)political aspects of NIPD’s emergence and translation within prenatal screening and diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
44

Ярова, Алла Григорівна, Алла Григорьевна Яровая, Alla Hryhorivna Yarova, Hussain Afrin Farzana Alam, and Vasanth Sukumaran. "Lesya Ukrainka And The "Rigveda": Sociocultural And Linguistic Context Of Translation." Thesis, Sumy State University, 2021. https://essuir.sumdu.edu.ua/handle/123456789/84787.

Full text
Abstract:
Translation can be called one of the most important technologies of culture, one that helps to change a person's environment. Without this invention, cultures created in different languages would remain divided and self-isolated, and this could hardly contribute to the progress of mankind. The invention of translation as a method of transferring cultures eventually allowed people to start creating a modern global world on the basis of supplementation, exchange and borrowing rather than the suppression and devaluation of one culture or another.
APA, Harvard, Vancouver, ISO, and other styles
45

Yu, Xin Xin. "HUMANS, NONHUMANS, ENVIRONMENT, AND TECHNOLOGY IN A POST-NATURAL WORLD: Translation and Commentary of a Story from Where Rain Falls Amiss (Kuyu zhi di 苦雨之地) by Wu Ming-yi 吳明益." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
As climate change in the Anthropocene rises to prominence in public discussions, literature and translation that engage with ecological issues are also gaining attention in Anglophone academia and the book market. This dissertation seeks to analyse Taiwanese author Wu Ming-yi 吳明益’s most recent short story collection, Where Rain Falls Amiss (Kuyu zhi di 苦雨之地, 2019), and discuss my translation process of one of the short stories, Here Lie Clouds, Two Thousand Metres Up (Yun zai liangqianmi 雲在兩千米). Drawing on ecocriticism and posthumanist theories, my analysis and interpretation of Where Rain Falls Amiss illustrates how it explores the interconnection between human beings, nonhuman organisms, physical landscapes, and technological environments, in a way that incorporates local environmental history and indigenous traditions as well as influences from Anglophone environmental writing and scientific research. Turning to the translation, I contextualised my understanding by interviewing the author Wu Ming-yi and the translator Darryl Sterk, who has translated two books by Wu into English, and I drew inspiration from ecotranslation theories and techniques advocated by Michael Cronin and Darryl Sterk. The commentary presents detailed examples to discuss how I approached the translation of bionyms and descriptions of flora and fauna as well as culture-specific content such as indigenous beliefs and localisms. Overall, I sought to negotiate a balance between terminological precision, cultural specificity, and textual fluency on a case-by-case basis, hoping to produce a translated text that is both instructive and engaging for the target reader. My translation and the source text of Here Lie Clouds, Two Thousand Metres Up can be found at the end of the dissertation.
APA, Harvard, Vancouver, ISO, and other styles
46

Nugent, Annita. "Government funding of university-industry collaboration: Exploring the impact of targeted funding on research translation." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/233669/1/Annita_Nugent_Thesis.pdf.

Full text
Abstract:
Public funding of academic research comes with an expectation that the outcome will yield economic and societal benefit. In this thesis I investigate the effect of funding targeting university-industry collaboration on the overall objective of providing societal and economic benefit. I observe that funding schemes designed to include industry stakeholders drive university innovation, but do not necessarily seed new innovation. My study highlights the influence of the technology transfer office, and its level of sophistication, in the capture of economic value from university research.
APA, Harvard, Vancouver, ISO, and other styles
47

Papadopoulou, Anthi. "Automatic Error Detection and Correction in Neural Machine Translation : A comparative study of Swedish to English and Greek to English." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385085.

Full text
Abstract:
Automatic detection and automatic correction of machine translation output are important steps to ensure an optimal quality of the final output. In this work, we compared the output of neural machine translation of two different language pairs, Swedish to English and Greek to English. This comparison was made using common machine translation metrics (BLEU, METEOR, TER) and syntax-related ones (POSBLEU, WPF, WER on POS classes). It was found that neither common metrics nor purely syntax-related ones were able to capture the quality of the machine translation output accurately, but the decomposition of WER over POS classes was the most informative one. A sample of each language was taken, so as to aid in the comparison between manual and automatic error categorization of five error categories, namely reordering errors, inflectional errors, missing and extra words, and incorrect lexical choices. Both Spearman’s ρ and Pearson’s r showed that there is a good correlation with human judgment with values above 0.9. Finally, based on the results of this error categorization, automatic post editing rules were implemented and applied, and their performance was checked against the sample, and the rest of the data set, showing varying results. The impact on the sample was greater, showing improvement in all metrics, while the impact on the rest of the data set was negative. An investigation of that, alongside the fact that correction was not possible for Greek due to extremely free reference translations and lack of error patterns in spoken speech, reinforced the belief that automatic post-editing is tightly connected to consistency in the reference translation, while also proving that in machine translation output handling, potentially more than one reference translations would be needed to ensure better results.
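The kind of analysis reported here can be sketched with a plain word error rate function and the standard rank and linear correlations; scipy is assumed to be available, and the segment scores below are invented placeholders.

```python
# Sketch of the evaluation approach described above: compute a simple word
# error rate (WER) per segment, then correlate automatic metric scores with
# human judgments via Spearman's rho and Pearson's r. All numbers are placeholders.
from scipy.stats import pearsonr, spearmanr

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))   # 1 deletion over 6 words

human_scores  = [4.5, 3.0, 2.0, 4.0, 1.5, 3.5]     # e.g. adequacy judgments per segment
metric_scores = [0.71, 0.48, 0.30, 0.65, 0.22, 0.55]
rho, _ = spearmanr(human_scores, metric_scores)
r, _ = pearsonr(human_scores, metric_scores)
print(f"Spearman rho = {rho:.2f}, Pearson r = {r:.2f}")
```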
APA, Harvard, Vancouver, ISO, and other styles
48

Lindblad, Hannes. "Lost in Translation : A case of BIM implementation at a large public client." Licentiate thesis, KTH, Projektkommunikation, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208636.

Full text
Abstract:
The technology of Building Information Modelling (BIM) is being introduced to the Architectural, Engineering and Construction (AEC) industry. This industry is generally perceived as being fragmented, having low productivity and with a low rate of innovation. BIM is presented as new paradigm within this industry, enabling inter-organisational collaboration, overall increasing quality while simultaneously lowering costs. But widespread BIM implementation has not yet taken place. In order to increase adoption, public client organisations have been argued to be the actors needed to initiate and drive the implementation of BIM. However, the client perspective of such implementation initiatives has not been the main focus of earlier research. In this licentiate thesis the BIM implementation process conducted by the largest infrastructure client in Sweden is analysed. This organisation implements BIM both to benefit their own organisation but also to increase productivity and innovation within the whole infrastructure branch of the AEC-industry. With the purpose of increasing the understanding of technology driven change processes at public client organisations, the BIM implementation at this organisation is analysed as an empirical example.  The case study is analysed by a theoretical framework taking inspiration from Actor-Network theory and Sociology of Translation. Based on this analysis the translation process, where key actors are identified and enrolled into using BIM are described. The analysis reveals a complex network of actors linked to the implementation in question. Instead of a single BIM concept, several different and sometimes conflicting interpretations of BIM are simultaneously being translated. This licentiate thesis problematizes the role of client organisations in initiating change within a network of actors. The main ways of enrolling actors into using BIM have in the studied case been the development of new demand documents. This strategy has however been problematic as the new demands for BIM have not been accepted as intended within many construction projects. Instead, the results of this thesis argue for the importance of client organisations as negotiators, not only needed to establish demand for a new innovation.
APA, Harvard, Vancouver, ISO, and other styles
49

Eggert, Donna Marie. "Case Study Analysis Of Osseointegration And Limb-Salvaging Technology In Animal Subject's Bilateral Osseointegrated Implant Journey With Potential Human Translation." Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/332675.

Full text
Abstract:
In the twenty-first century, new cutting-edge osseointegration technology is improving quality of life. Osseointegration is a new technique for suspending an implant prosthetic device for individuals with limb loss. The science of osseointegration is not clearly understood, although pure titanium appears to have a special relationship with bone, promoting activation of bone-building cells and bone remodeling. Direct bone-anchored osseointegration, which integrates a foreign device into the body without the body rejecting the prosthesis, is a new technology that still lacks the knowledge base and clinical research cases needed for human translation. The objective of the case study was to review the science of bone-anchored osseointegration as a limb-salvaging technique and its potential translation to humans, using a canine model in a well-defined controlled study. The aims were to enhance knowledge of the technology, improve mobility, decrease pain to improve quality of life, and influence health care practices. By the year 2050, the projected number of American amputees is expected to reach 3.6 million. Many people depend on artificial limbs to perform their activities of daily living. Often these limbs develop complications associated with stump-socket designs, such as separation from the human tissue, poor fit with repeated fittings, recurrent skin infections, ulcers and pressure sores due to non-uniform pressure distribution over the socket contact area, and pain which decreases mobility. Since 2001, close to 28,500 American troops have been wounded in combat in Iraq and Afghanistan. More than 24,600 of them have survived their injuries, the highest survival rate of any war in the history of the United States. Sadly, 700 of these service members have lost at least one limb to amputation. Lower limb amputations are still performed above the knee as not enough bone can be preserved below the knee for a prosthesis. Osseointegration with limb-salvaging techniques could enhance mobility and quality of life for those individuals who sacrifice their limbs defending our freedoms.
APA, Harvard, Vancouver, ISO, and other styles
50

Andersson, Karin. "'Consider' and its Swedish equivalents in relation to machine translation." Thesis, University of Skövde, School of Humanities and Informatics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-771.

Full text
Abstract:

This study describes the English verb ’consider’ and the characteristics of some of its senses. An investigation of this kind may be useful, since a machine translation program, SYSTRAN, has invariably translated ’consider’ with the Swedish verbs ’betrakta’ (Eng: ’view’, ’regard’) and ’anse’ (Eng: ’regard’). This handling of ’consider’ is not satisfactory in all contexts. Since ’consider’ is a cogitative verb, it is fascinating to observe that both the theory of semantic primes and universals and conceptual semantics are concerned with cogitation in various ways. Anna Wierzbicka, who is one of the advocates of semantic primes and universals, argues that THINK should be considered a semantic prime. Moreover, one of the prime issues of conceptual semantics is to describe how thoughts are constructed by virtue of e.g. linguistic components, perception and experience. In order to define and clarify the distinctions between the different senses, we have taken advantage of the theory of mental spaces. This thesis has been structured in accordance with the meanings indicated for ’consider’ in WordNet. As a consequence, the senses that ’consider’ represents have been organized into the following groups: ’Observation’, ’Opinion’ together with its sub-group ’Likelihood’, and ’Cogitation’ followed by its sub-group ’Attention/Consideration’. A concordance tool, http://www.nla.se/culler, provided us with 90 literary quotations that were collected in a corpus. Afterwards, these citations were distributed between the groups mentioned above and translated into Swedish by SYSTRAN. Furthermore, the meanings of ’consider’ have also been related to the senses recorded by the FrameNet scholars; here, ’consider’ is regarded as a verb of ’Cogitation’ and ’Categorization’. Once the study was completed, it could be inferred that certain senses are connected to specific syntactic constructions. In other cases, however, the distinctions between various meanings can only be explained by virtue of semantics. To conclude, it appears likely that an implementation is facilitated if a specific syntactic construction can be tied to a particular sense. This may be the case for some meanings of ’consider’. Machine translation is presumably a much more laborious task if one is governed solely by semantic conditions.
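The WordNet sense inventory that the grouping above starts from can be listed with a few lines of NLTK, assuming the WordNet data has been downloaded; assigning the synsets to the groups is then a manual step.

```python
# List the verb senses of 'consider' recorded in WordNet, as a starting point
# for grouping them into categories such as Observation, Opinion/Likelihood and
# Cogitation/Attention. Assumes NLTK is installed and the WordNet corpus has
# been fetched with nltk.download('wordnet').
from nltk.corpus import wordnet as wn

for synset in wn.synsets("consider", pos=wn.VERB):
    lemmas = ", ".join(lemma.name() for lemma in synset.lemmas())
    print(f"{synset.name():<18} {synset.definition()}  [{lemmas}]")
```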

APA, Harvard, Vancouver, ISO, and other styles