To see the other types of publications on this topic, follow the link: Dynamic-equivalence.

Dissertations / Theses on the topic 'Dynamic-equivalence'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 21 dissertations / theses for your research on the topic 'Dynamic-equivalence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ghanekar, Milind. "Dynamic equivalence conditions and controller scaling laws for robotic manipulators." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22205.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Fuchen. "Hierarchical clustering using equivalence test : application on automatic segmentation of dynamic contrast enhanced image sequence." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB013/document.

Full text
Abstract:
Perfusion imaging provides non-invasive access to tissue micro-vascularization. It appears as a promising tool for building imaging biomarkers for the diagnosis, prognosis or monitoring of anti-angiogenic cancer treatment. However, quantitative analysis of dynamic perfusion series suffers from a low signal-to-noise ratio (SNR). The SNR can be improved by averaging the functional information over large regions of interest, which must nevertheless be functionally homogeneous. To this end, we propose a new method for the automatic segmentation of dynamic perfusion series into functionally homogeneous regions, called DCE-HiSET. At the core of this method, HiSET (Hierarchical Segmentation using Equivalence Test) segments functional features or signals (indexed by time, for example) observed discretely and with noise on a finite metric space, regarded as a landscape, under independent Gaussian observation noise of known variance. HiSET is a hierarchical clustering algorithm that uses the p-value of a multiple equivalence test as its dissimilarity measure and consists of two steps. The first exploits the spatial neighborhood structure to preserve the local properties of the metric space, and the second recovers spatially disconnected homogeneous structures at a larger, global scale. Given an expected homogeneity discrepancy $\delta$ for the multiple equivalence test, both steps stop automatically through a control of the type I error, providing an adaptive choice of the number of regions. The parameter $\delta$ thus acts as a tuning parameter controlling the size and complexity of the segmentation. Theoretically, we prove that if the landscape is functionally piecewise constant with functional features well separated between pieces, HiSET recovers the exact partition with high probability when the number of observation times is large enough. For dynamic perfusion series, the assumptions on which HiSET relies are obtained through a model of the intensities (signals) and a variance stabilization that depends on an additional parameter $a$ and is justified a posteriori. DCE-HiSET is thus the combination of a model suited to dynamic perfusion series with the HiSET algorithm. Using synthetic two-dimensional dynamic perfusion series, we showed that DCE-HiSET outperforms many state-of-the-art clustering methods. As a clinical application of DCE-HiSET, we proposed a strategy to refine a region of interest roughly delineated by a clinician on a dynamic perfusion series, in order to improve the precision of the region boundaries and the robustness of region-based analysis while reducing delineation time. The proposed automatic refinement strategy is based on a DCE-HiSET segmentation followed by a series of erosion and dilation operations. Its robustness and efficiency are verified by comparing the classification results, obtained from the associated dynamic series, of 99 ovarian tumors with the biopsy anapathology results used as the reference.
Finally, in the context of 3D image series, we studied two strategies, using different neighborhood structures across slices, based on DCE-HiSET to obtain the segmentation of three-dimensional dynamic perfusion series. (...)
Dynamic contrast enhanced (DCE) imaging allows non-invasive access to tissue micro-vascularization. It appears as a promising tool to build imaging biomarkers for diagnosis, prognosis or anti-angiogenesis treatment monitoring of cancer. However, quantitative analysis of DCE image sequences suffers from a low signal-to-noise ratio (SNR). The SNR may be improved by averaging functional information in large regions of interest, which however need to be functionally homogeneous. To achieve this SNR improvement, we propose a novel method for automatic segmentation of a DCE image sequence into functionally homogeneous regions, called DCE-HiSET. As the core of the proposed method, HiSET (Hierarchical Segmentation using Equivalence Test) aims to cluster functional (e.g. with respect to time) features or signals observed discretely and with noise on a finite metric space considered to be a landscape. HiSET assumes independent Gaussian noise with known constant level on the observations. It uses the p-value of a multiple equivalence test as the dissimilarity measure and consists of two steps. The first exploits the spatial neighborhood structure to preserve the local property of the metric space, and the second recovers (spatially) disconnected homogeneous structures at a larger (global) scale. Given an expected homogeneity discrepancy $\delta$ for the multiple equivalence test, both steps stop automatically through a control of the type I error, providing an adaptive choice of the number of clusters. The parameter $\delta$ appears as the tuning parameter controlling the size and the complexity of the segmentation. Assuming that the landscape is functionally piecewise constant with well separated functional features, we prove that HiSET retrieves the exact partition with high probability when the number of observation times is large enough. For DCE image sequences, the assumption is met by modeling the observed intensity in the sequence through a proper variance stabilization, which depends only on one additional parameter $a$. Therefore, DCE-HiSET is the combination of this DCE imaging modeling step with our statistical core, HiSET. Through a comparison on synthetic 2D DCE image sequences, DCE-HiSET has been shown to outperform other state-of-the-art clustering-based methods. As a clinical application of DCE-HiSET, we propose a strategy to refine a roughly manually delineated ROI on a DCE image sequence, in order to improve the precision at the border of ROIs and the robustness of ROI-based DCE analysis, while decreasing the delineation time. The automatic refinement strategy is based on segmentation through DCE-HiSET followed by a series of erosion-dilation operations. The robustness and efficiency of the proposed strategy are verified by comparing the classification of 99 ovarian tumors, based on their associated DCE-MR image sequences, with the results of biopsy anapathology used as the benchmark. Furthermore, DCE-HiSET is also adapted to the segmentation of 3D DCE image sequences through two different strategies with distinct treatments of the neighborhood structure across slices. This PhD thesis has been supported by a CIFRE contract of the ANRT (Association Nationale de la Recherche et de la Technologie) with the French company INTRASENSE, which designs, develops and markets medical imaging visualization and analysis solutions, including Myrian®. DCE-HiSET has been integrated into Myrian® and tested to be fully functional.
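As a rough illustration of the statistical core described above, the sketch below implements a simplified, single-step variant of equivalence-test-based hierarchical clustering in Python: the dissimilarity between two clusters of noisy signals is the p-value of a TOST-style multiple equivalence test (intersection-union over time points), and merging continues only while that p-value stays below the significance level. The function names, the particular test, and the values of `delta`, `alpha` and `sigma` are illustrative assumptions, not the authors' implementation, which also exploits the spatial neighborhood structure.

```python
# Simplified sketch of equivalence-test-based hierarchical clustering
# (illustration only; not the DCE-HiSET implementation).
import numpy as np
from scipy.stats import norm

def equivalence_pvalue(mean_i, mean_j, n_i, n_j, sigma, delta):
    """p-value of a multiple (intersection-union) TOST equivalence test:
    H1: |mu_i(t) - mu_j(t)| <= delta at every time point t.
    Inputs are cluster means of n_i and n_j noisy curves with known noise sigma."""
    sd = sigma * np.sqrt(1.0 / n_i + 1.0 / n_j)      # sd of the difference of means
    d = mean_i - mean_j
    p_lower = norm.cdf((d - delta) / sd)             # one-sided test against mu >= delta
    p_upper = norm.cdf((-d - delta) / sd)            # one-sided test against mu <= -delta
    p_t = np.maximum(p_lower, p_upper)               # TOST p-value per time point
    return float(p_t.max())                          # intersection-union: worst time point

def hiset_like_clustering(signals, sigma, delta, alpha=0.05):
    """Greedily merge the pair with the smallest equivalence p-value
    while equivalence can still be declared at level alpha."""
    clusters = [[i] for i in range(len(signals))]
    means = [np.asarray(s, dtype=float).copy() for s in signals]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                p = equivalence_pvalue(means[a], means[b],
                                       len(clusters[a]), len(clusters[b]), sigma, delta)
                if best is None or p < best[0]:
                    best = (p, a, b)
        p, a, b = best
        if p > alpha:                                # type-I-error control stops the merging
            break
        na, nb = len(clusters[a]), len(clusters[b])
        means[a] = (na * means[a] + nb * means[b]) / (na + nb)
        clusters[a] += clusters[b]
        del clusters[b], means[b]
    return clusters

# Toy usage: two groups of noisy time curves should end up as two clusters.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
curves = [np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(50) for _ in range(5)] + \
         [np.cos(2 * np.pi * t) + 0.05 * rng.standard_normal(50) for _ in range(5)]
print(hiset_like_clustering(curves, sigma=0.05, delta=0.5))
```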
APA, Harvard, Vancouver, ISO, and other styles
3

Janjusic, Tomislav. "Framework for Evaluating Dynamic Memory Allocators Including a New Equivalence Class Based Cache-conscious Allocator." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc500151/.

Full text
Abstract:
Software applications' performance is hindered by a variety of factors, but most notably by the well-known CPU-memory speed gap (often known as the memory wall). This results in the CPU sitting idle, waiting for data to be brought from memory to the processor caches. The addressing used by caches causes non-uniform accesses to the various cache sets. The non-uniformity is due to several reasons, including how different objects are accessed by the code and how the data objects are located in memory. Memory allocators determine where dynamically created objects are placed, thus defining addresses and their mapping to cache locations. It is important to evaluate how different allocators behave with respect to the localities of the created objects. Most allocators use a single attribute of an object, its size, in making allocation decisions. Additional attributes, such as the placement with respect to other objects or a specific cache area, may lead to better use of cache memories. In this dissertation, we propose and implement a framework that allows for the development and evaluation of new memory allocation techniques. At the root of the framework is a memory tracing tool called Gleipnir, which provides very detailed information about every memory access and relates it back to source-level objects. Using the traces from Gleipnir, we extended a commonly used cache simulator to generate detailed cache statistics per function, per data object, and per cache line, and to identify specific data objects that conflict with each other. The utility of the framework is demonstrated with a new memory allocator known as the equivalence class allocator. The new allocator allows users to specify, in addition to object size, the cache sets where the objects should be placed. We compare this new allocator with two well-known allocators, viz., the Doug Lea and Pool allocators.
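To make the idea of placing objects by cache set concrete, here is a small, hypothetical Python model of a set-indexed free-list allocator over a fixed arena. The arena layout, the set-index formula `(addr // line_size) % num_sets`, and the API names are assumptions chosen for illustration; they are not Gleipnir or the dissertation's equivalence class allocator.

```python
# Toy model of a cache-set-aware allocator (illustration only).
LINE_SIZE = 64          # bytes per cache line
NUM_SETS = 8            # cache sets in the toy cache

def cache_set(addr):
    """Set index a simple set-associative cache would use for this address."""
    return (addr // LINE_SIZE) % NUM_SETS

class SetAwareArena:
    """Carves an arena into cache-line-sized blocks, indexed by the cache set
    their start address maps to, so callers can request a specific set."""
    def __init__(self, base_addr, size):
        self.free = {s: [] for s in range(NUM_SETS)}
        for addr in range(base_addr, base_addr + size, LINE_SIZE):
            self.free[cache_set(addr)].append(addr)

    def alloc(self, size, preferred_set=None):
        """Return the start address of a free block, preferring the requested set."""
        assert size <= LINE_SIZE, "toy allocator: one line per object"
        sets = [preferred_set] if preferred_set is not None else []
        sets += [s for s in range(NUM_SETS) if s != preferred_set]   # fall back to any set
        for s in sets:
            if self.free[s]:
                return self.free[s].pop()
        raise MemoryError("arena exhausted")

    def free_block(self, addr):
        self.free[cache_set(addr)].append(addr)

# Usage: keep two frequently co-accessed objects in different sets to avoid conflict misses.
arena = SetAwareArena(base_addr=0x10000, size=16 * 1024)
a = arena.alloc(48, preferred_set=0)
b = arena.alloc(48, preferred_set=1)
print(hex(a), cache_set(a), hex(b), cache_set(b))
```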
APA, Harvard, Vancouver, ISO, and other styles
4

Vinter, Vanja. "Sex, slang and skopos : Analysing a translation of The Smart Bitches’ Guide to Romance." Thesis, Linnéuniversitetet, Institutionen för språk (SPR), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-85641.

Full text
Abstract:
This paper analyses the translation methods used in translating a colloquial, culture-specific text containing allusions and informal language. The analysis focuses on the difficulties arising in the translation of culture-specific phenomena and aspects such as slang and cultural references, as well as allusions and language play. The theoretical framework used for structuring the analysis is supported by the theories of Newmark (1988), Nida (1964), Schröter (2005), Reiss (1989), Pym (2010) and Leppihalme (1994), among others. The results indicate that the translation of culturally and connotatively charged words requires knowledge and understanding of languages and cultures alike. Further, the results indicate that the notion of a word or concept being ‘untranslatable’ may originate from such a lack of understanding or knowledge, and that further research on the subject is needed.
APA, Harvard, Vancouver, ISO, and other styles
5

Nguyen, Huy. "Sequential Equivalence Checking with Efficient Filtering Strategies for Inductive Invariants." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/31986.

Full text
Abstract:
Powerful sequential optimization techniques can drastically change the Integrated Circuit (IC) design paradigm. Due to the limited capability of sequential verification tools, aggressive sequential optimization is shunned nowadays, as there is no efficient way to prove the preservation of equivalence after optimization. Because the number of transistors fitting on a single fixed-size die increases with Moore's law, the problem gets harder over time, and at an exponential rate. It is no surprise that functional verification has become a major bottleneck in the time-to-market of a product. In fact, the literature has reported that 70% of design time is spent on making sure the design is bug-free and operating correctly. One of the core verification tasks in achieving high-quality products is equivalence checking. Essentially, equivalence checking ensures that the optimized product preserves the functionality of the unoptimized model. This is important for industry because products are modified constantly to meet different goals such as low power, high performance, etc. The mainstream approaches to equivalence checking include simulation and formal verification. In the simulation approach, the golden design and the design under verification (DUV) are fed with the same input stimuli and are expected to produce identical output responses. In case of discrepancy, traces are generated and the DUV undergoes modifications. With the increase in input pins and state elements in designs, exhaustive simulation becomes infeasible. Hence, the completeness of the approach is not guaranteed, and it has to be accompanied by notions of coverage. On the other hand, formal verification incorporates mathematical proofs and guarantees completeness over the search space. However, formal verification has problems of its own, in that it is usually resource intensive. In addition, not all designs can be verified after optimization processes; that is to say, the golden model and the DUV may be so different in structure that modern checkers give inconclusive results. For these reasons, this thesis focuses on improving the strength and the efficiency of sequential equivalence checking (SEC) using a formal approach. While great strides have been made in verification for combinational circuits, SEC still remains rather rudimentary. Without powerful SEC as a backbone, aggressive sequential synthesis and optimization are often avoided if the optimized design cannot be proved to be equivalent to the original one. In an attempt to take on the challenges of SEC, we propose two frameworks that successfully determine equivalence for hard-to-verify circuits. The first framework utilizes arbitrary relations between any two nodes within the two sequential circuits in question. The two nodes can reside in the same circuit or across the circuits; likewise, they can be from the same time-frame or across time-frames. The merit of this approach is to use the global structure of the circuits to speed up the verification process. The second framework introduces techniques to identify a subset of powerful multi-node relations (involving more than two nodes), which then help to prune a large don't-care search space and result in a successful SEC framework. In contrast with previous approaches, in which an exponential number of multi-node relations are mined and learned, we alleviate the computation cost by selecting far fewer invariants to achieve the desired conclusion.
Although independent, the two frameworks can be used in sequence to complement each other. Experimental results demonstrate that our frameworks can take on many hard-to-verify cases and show a significant speed-up over previous approaches.
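The filtering idea, namely mining a small set of candidate relations between nodes of the two designs from simulation and handing only those to an inductive prover, can be sketched as follows. The gate-level model, the random-simulation mining, and the function names are assumptions for illustration; the thesis's frameworks mine richer multi-node relations and discharge them with formal engines.

```python
# Sketch: mining candidate node relations between two sequential designs by random
# simulation (survivors would then be passed to an inductive/SAT check, not shown).
import itertools, random

def design_a(state, x):
    """2-bit counter enabled by input x; exposes internal nodes."""
    s0, s1 = state
    carry = s0 & x
    return (s0 ^ x, s1 ^ carry), {"a.s0": s0, "a.s1": s1, "a.carry": carry}

def design_b(state, x):
    """Functionally equivalent counter written with a different structure."""
    t0, t1 = state
    c = t0 & x
    return (t0 ^ x, t1 ^ c), {"b.t0": t0, "b.t1": t1, "b.c": c}

def mine_candidate_invariants(runs=200, steps=30, seed=1):
    rng = random.Random(seed)
    equal, compl = None, None                       # surviving candidate pair sets
    for _ in range(runs):
        sa, sb = (0, 0), (0, 0)                     # matching reset states
        for _ in range(steps):
            x = rng.randint(0, 1)
            sa, na = design_a(sa, x)
            sb, nb = design_b(sb, x)
            vals = {**na, **nb}
            names = list(vals)
            eq = {(p, q) for p, q in itertools.combinations(names, 2) if vals[p] == vals[q]}
            cp = {(p, q) for p, q in itertools.combinations(names, 2) if vals[p] != vals[q]}
            equal = eq if equal is None else equal & eq     # keep only relations never violated
            compl = cp if compl is None else compl & cp
    return equal, compl

eq, cp = mine_candidate_invariants()
print("candidate equivalences:", sorted(eq))
print("candidate complements :", sorted(cp))
```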
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
6

Mert, Raziye. "Qualitative Behavior Of Solutions Of Dynamic Equations On Time Scales." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611528/index.pdf.

Full text
Abstract:
In this thesis, the asymptotic behavior and oscillation of solutions of dynamic equations on time scales are studied. In the first part of the thesis, asymptotic equivalence and asymptotic equilibrium of dynamic systems are investigated. Sufficient conditions are established for the asymptotic equivalence of linear systems and linear and quasilinear systems, respectively, and for the asymptotic equilibrium of quasilinear systems by unifying and extending some known results for differential systems and difference systems to dynamic systems on arbitrary time scales. In particular, for the asymptotic equivalence of differential systems, the well-known theorems of Levinson and Yakubovich are improved and the well-known theorem of Wintner for the asymptotic equilibrium of linear differential systems is generalized to arbitrary time scales. Some of our results for asymptotic equilibrium are new even for difference systems. In the second part, the oscillation of solutions of a particular class of second order nonlinear delay dynamic equations and, more generally, two-dimensional nonlinear dynamic systems, including delay-dynamic systems, are discussed. Necessary and sufficient conditions are derived for the oscillation of solutions of nonlinear delay dynamic equations by extending some continuous results. Specifically, the classical theorems of Atkinson and Belohorec are generalized. Sufficient conditions are established for the oscillation of solutions of nonlinear dynamic systems by unifying and extending the corresponding continuous and discrete results. Particularly, the oscillation criteria of Atkinson, Belohorec, Waltman, and Hooker and Patula are generalized.
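For readers unfamiliar with the terminology, the classical notions being unified here can be stated roughly as follows. This is the standard textbook formulation written with the delta derivative of time-scale calculus; the thesis's precise hypotheses and theorems are more refined.

```latex
% Illustrative statement of the standard notions (not the thesis's exact hypotheses),
% on a time scale \mathbb{T}. Two dynamic systems
\[
x^{\Delta}(t) = A(t)\,x(t)
\qquad\text{and}\qquad
y^{\Delta}(t) = A(t)\,y(t) + f\bigl(t, y(t)\bigr), \qquad t \in \mathbb{T},
\]
% are called asymptotically equivalent if for every solution x there is a solution y
% (and conversely) such that
\[
\lim_{t \to \infty,\; t \in \mathbb{T}} \bigl\| x(t) - y(t) \bigr\| = 0,
\]
% while a system has asymptotic equilibrium if every solution converges to a finite
% limit as t tends to infinity and every constant vector arises as such a limit.
```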
APA, Harvard, Vancouver, ISO, and other styles
7

Nichols, Anthony H. "Translating the Bible: a critical analysis of E.A. Nida's theory of Dynamic Equivalence and its impact upon recent Bible translations." Thesis, Available via Macquarie University ResearchOnline, 1996. http://hdl.handle.net/1959.14/79339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nichols, Anthony Howard. "Translating the Bible : a critical analysis of E.A. Nida's theory of dynamic equivalence and its impact upon recent Bible translations." Thesis, University of Sheffield, 1996. http://etheses.whiterose.ac.uk/5994/.

Full text
Abstract:
Developments in translation theory have externalized processes used intuitively by translators for centuries. The literature on Bible translation in particular is dominated by Eugene A. Nida and his proteges whose work is informed by a wealth of intercultural experience. This thesis is a critique of the Dynamic Equivalence (DE) theory of translation propounded by Nida, exemplified in the Good News Bible, and promoted in non- Western languages by the United Bible Societies. Section I of the thesis surveys the history of translation, its theory and problems, and describes relevant developments in linguistics. Section II examines Nida's sociolinguistic model and his methods of grammatical and semantic analysis, transfer and restructuring. Section III focuses on the translation of seven texts representing different Bible genres into Septuagint Greek, English and Indonesian versions, noting the distinctive features of DE translations. Section IV takes up and examines key issues that have arisen: the nature of Biblical language, the handling of important Biblical motifs and technical terminology, and the implications of naturalness and explicitness in translation. Nida has provided excellent discussion on most translation problems, as well as useful tools for semantic analysis. However, the DE model is found to be defective for Bible translation. Firstly, it underestimates the intricate relationship of form and meaning in language. Secondly, while evaluation of translation must take account of its purpose and intended audience, 'equivalence' defined in terms of the receptor's reactions is impossible to measure, and blurs the distinction between 'translation’ and ‘communication'. Thirdly, the determinative role given to receptor response constantly jeopardizes the historical and cultural 'otherness' of the Biblical text. Finally the drive for explicitness guarantees that indigenous receptors must approach Scripture through a Western grid and denies them direct access to the Biblical universe of discourse.
APA, Harvard, Vancouver, ISO, and other styles
9

Moreira, Tarsilio Soares. "Os salmos na NTLH: uma análise da equivalência dinâmica aplicada à Poesia Hebraica." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/8/8152/tde-17022014-112314/.

Full text
Abstract:
This research aims to analyze the translation of the Psalms in the NTLH (Nova Tradução na Linguagem de Hoje), which follows Eugene Nida's principle of dynamic or functional equivalence, in which sense prevails over form. We investigate how this translation handles poetic texts, which abound in figures of speech and in which formal aspects generate meaning. To that end, we describe Nida's theory and the relevant criticism within translation theory, discuss aspects of Hebrew poetry, and conclude with a study of the NTLH translation of selected psalms.
This research aims at analyzing the translation of the Psalms in the NTLH (Nova Tradução na Linguagem de Hoje), which adopts Eugene Nida's principles of Dynamic Equivalence or Functional Equivalence, in which sense prevails over form. We investigate how this translation deals with poetic texts that abound with figures of speech and in which formal aspects generate sense. Therefore, we describe Nida's theory and some of its pertinent criticism, discuss aspects of Hebrew Poetry, and, finally, analyze some of the Psalm translations in the NTLH.
APA, Harvard, Vancouver, ISO, and other styles
10

Rise, Gard R. "Mori Ōgai and the translation of Henrik Ibsen’s John Gabriel Borkman." Thesis, Högskolan Dalarna, Japanska, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:du-31073.

Full text
Abstract:
Mori Ōgai's (1862-1922) 1909 translation, and the subsequent theater production, of Henrik Ibsen's 1896 play John Gabriel Borkman was in many ways instrumental in the formation of Japanese Meiji-era shingeki theater. Over his career as a translator, Ōgai's approach shifted from relying on domestication techniques towards staying more faithful to the source text through the use of foreignization techniques, and arguably towards what has been identified by Eugene Nida and Jin Di as dynamic equivalence or equivalent effect, respectively, in drama translation. In this project, Ōgai's translation of John Gabriel Borkman is examined using a set of categories peculiar to drama translation, as proposed by the Chinese scholars Xu and Cui (2011), again based on the theories of Nida and Di. The categories are intelligibility, brevity, characterization and actability. The results from this analysis are used to carry out a qualitative analysis of Ōgai's approach to drama translation. Results from the study indicate that Ōgai put great emphasis on the intelligibility of the play, perhaps above the aspects of brevity, characterization and actability. However, wherever the brevity aspect did not seem to conflict with any of the other aspects, Ōgai appears to have tried to adhere as closely as possible to the source texts in terms of speaking length.
APA, Harvard, Vancouver, ISO, and other styles
11

Persson, Ulrika. "Culture-specific items : Translation procedures for a text about Australian and New Zealand children's literature." Thesis, Linnéuniversitetet, Institutionen för språk (SPR), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-46025.

Full text
Abstract:
The aim of this study is to analyse the problems met when translating culture-specific items in a text about Australian and New Zealand colonial and post-colonial children's literature into Swedish. The analysis quantifies and describes the different translation procedures used, and contrasts different strategies when there was more than one possible choice. It also outlines the reasons for the choices made when creating a text adapted for a Swedish audience. The translation methods applied are dynamic equivalence and domestication. As for the categorization of the material, the theories of Newmark (1988) have primarily been followed. The study shows that the frequency of each translation procedure depends on the type of culture-specific item and the chosen translation method. It is argued that transference is the most commonly used procedure, and that recognized translations are not as frequent as might have been expected given the choice of domestication. This is the case for proper nouns and references to literary works, where transference and dynamic equivalence have been given priority over domestication whenever the factual content was considered the most important aspect to follow. As for culture-specific items in the category of social culture, neutralisation is the most commonly used procedure. In such cases the domestication method was more influential than dynamic equivalence, as consideration of ethics and the avoidance of cultural taboos in the target culture were judged more important than content.
APA, Harvard, Vancouver, ISO, and other styles
12

Olorisade, Babatunde Kazeem. "Summarizing the Results of a Series of Experiments : Application to the Effectiveness of Three Software Evaluation Techniques." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3799.

Full text
Abstract:
Software quality has become, and persistently remains, a big issue among software users and developers, so the importance of software evaluation cannot be overemphasized. An accepted fact in software engineering is that software must undergo an evaluation process during development to ascertain and improve its quality level. In fact, there are more techniques than a single developer could master, and yet it is impossible to be certain that software is free of defects. Therefore, it may not be realistic or cost-effective to remove all software defects prior to product release. So, it is crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and most likely to yield optimum quality results for different products; it boils down to choosing the most appropriate technique for each situation. However, not much knowledge is available on the strengths and weaknesses of the available evaluation techniques. Most of the information related to the available techniques focuses on how to apply them, but not on their applicability conditions: practical information, suitability, strengths, weaknesses, etc. This research contributes to the available applicability knowledge of software evaluation techniques. More precisely, it focuses on code reading by stepwise abstraction as a representative of the static techniques, as well as equivalence partitioning (a functional technique) and decision coverage (a structural technique) as representatives of the dynamic techniques. The specific focus of the research is to summarize the results of a series of experiments conducted to investigate the effectiveness of these techniques, among other factors. By effectiveness, in this research, we mean the potential of each of the techniques to generate test cases capable of revealing software faults, in the case of the dynamic techniques, or the ability of the static technique to generate abstractions that aid the detection of faults. The experiments used two versions of three different programs, with seven different faults seeded into each of the programs. This work uses the results of the eight different experiments, originally performed and analyzed separately, to explore this question. The analysis results were pooled together and jointly summarized in this research to extract common knowledge from the experiments, using a qualitative deduction approach created in this work, as it was decided not to use formal aggregation at this stage. Since the experiments were performed by different researchers, in different years, and in some cases at different sites, several problems had to be tackled in order to summarize the results. Part of the problem is that the data files exist in different languages, the structures of the files differ, different names are used for data fields, the analyses were done using different confidence levels, etc. The first step, taken at the inception of this research, was to apply all the techniques to the programs used during the experiments in order to detect the faults. The purpose of this personal experience with the experiment material was to become familiar with the faults, failures, programs and experimental situations in general, and also to better understand the data as recorded from the experiments. Afterwards, the data files were recreated to conform to a uniform language, data meaning, file style and structure.
A well-structured directory was created to keep all the data, analysis and experiment files for all the experiments in the series. These steps paved the way for a feasible synthesis of the results. Using our method, technique, program, fault, program-technique, program-fault and technique-fault were selected as the main and interaction effects carrying knowledge relevant to the summary of the analysis. The result, as reported in this thesis, indicates that the functional technique and the structural technique are equally effective as far as the programs and faults in these experiments are concerned. Both perform better than code reading. Also, the analysis revealed that the effectiveness of the techniques is influenced by the fault type and the program type. Some faults were better detected in certain programs, some were better detected with certain techniques, and the techniques yielded different results in different programs.
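For readers unfamiliar with the two dynamic techniques compared above, the toy example below shows what equivalence partitioning and decision coverage test cases look like for a small function. The function, the partitions, and the test values are invented for illustration and have nothing to do with the programs used in the experiments.

```python
# Toy illustration of the two dynamic evaluation techniques compared in the study.
def classify_bmi(bmi):
    """Program under test: one decision per boundary."""
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25.0:
        return "normal"
    else:
        return "overweight"

# Equivalence partitioning (functional): one representative value per input class.
partition_cases = {"underweight": 16.0, "normal": 22.0, "overweight": 31.0}
for expected, value in partition_cases.items():
    assert classify_bmi(value) == expected

# Decision coverage (structural): make every decision evaluate to both true and false.
decision_cases = [17.0, 20.0, 30.0]   # bmi<18.5: T,F,F ; bmi<25.0: -,T,F
assert [classify_bmi(v) for v in decision_cases] == ["underweight", "normal", "overweight"]
print("all toy test cases pass")
```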
APA, Harvard, Vancouver, ISO, and other styles
13

Estling, Hellberg Sanna. "Translating pragmatic markers : or whatever you want to call them." Thesis, Linnéuniversitetet, Institutionen för språk (SPR), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-26149.

Full text
Abstract:
This study analyses the translation of pragmatic markers from English into Swedish. The source text that was translated and used as a basis for the study is an article called “Black Books”, which was published in the British music magazine Prog in January 2013. The study is limited to question tags, general extenders and single-word pragmatic markers. It aims to investigate how these types of pragmatic markers can be translated in a dynamic and natural way, as well as how a careful analysis can facilitate the search for appropriate translation equivalents. Previous research and theories were used to determine the functions of the pragmatic markers in the source text, and the translation choices made on the basis of these findings were supported by corpus searches in the English-Swedish Parallel Corpus and Korp. The study revealed that because of the different ways in which pragmatic functions are expressed in English and Swedish, almost none of the pragmatic markers in the source text could be translated directly into Swedish. Formally equivalent solutions such as tja as a translation of well were generally considered too unnatural. While the study is too small to provide any general guidelines, it shows how a careful analysis may help the translator find more dynamically equivalent and natural solutions in the form of, for instance, other Swedish pragmatic markers, modal particles, adverbs and conjunctions.
APA, Harvard, Vancouver, ISO, and other styles
14

Paparella, Karin. "Stilistiska normer i översatt sakprosa : En kvalitativ undersökning med fokus på preferensmönster hos en målgrupp med italienska som förstaspråk." Thesis, Stockholms universitet, Tolk- och översättarinstitutet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-182416.

Full text
Abstract:
This bachelor's thesis examines how translated texts are perceived by a group with Italian as their first language, with the aim of establishing whether a preference pattern exists regarding forms of address, idiomatic expressions, sentence structure and sentence length. The texts that constitute the material of the study are translated according to Nida's principles of formal and dynamic equivalence and defined according to Toury's theories. The investigation is carried out through a number of open questions put to the respondents in semi-structured interviews. The conclusions are drawn from the discussion of the respondents' answers in relation to the theory, together with a report of the results for the preference pattern. The study shows that context, forum and target group are decisive for the choice of translation strategy. Suggestions for further research are given in the final chapter, with the formulation of a hypothesis concerning the need to determine translation strategy and the application of norms according to text type and target group.
This bachelor's thesis is a study of the perception of translated texts in a group of people with Italian as their first language. The aim of this work is to investigate whether a pattern in the preference of forms of address, idiomatic expressions and syntax can be identified. The texts used in this study are translated according to Nida's principles of formal and dynamic equivalence and are defined according to Toury's theories. The study is conducted by means of a set of open questions asked of the respondents in semi-structured interviews. The conclusions are derived from the analysis of the respondents' answers in relation to the theories mentioned and the pattern of preferences revealed. The study shows that context, forum and target group are crucial for the choice of translation strategy. Finally, we outline a proposal for further research to test the hypothesis that translation strategies and the application of norms should be chosen according to the type of text and the target group.
APA, Harvard, Vancouver, ISO, and other styles
15

Poznyak, Dmytro. "The American Attitude: Priming Issue Agendas and Longitudinal Dynamic of Political Trust." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1342715776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Ku, Kai-Hung, and 古凱宏. "Equivalence between the Biological Model and Dynamic Model for Control Design." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/upc5h9.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
Academic year 107 (ROC calendar)
Differential equations describing the dynamic performance of physical systems are usually obtained by exploiting the physical laws of the process in question. The construction of a biological system model is similar: the biological reaction equations describe the dynamic behavior of messenger RNA and protein. Because of the complexity of biochemical reactions, the resulting models are inevitably nonlinear. In practice, most physical systems are linear within a limited range of their variables, and this approach applies equally well to mechanical, electrical, fluid, and thermodynamic systems. In this paper, we describe the nonlinear terms of biological systems and linearize them through Taylor series expansion. The linearized biological equation is then analogized to an equation constructed from RL and RC circuits, with the biological parameters playing the roles of the resistance, inductance, and capacitance of the circuit, and the result is converted into an s-domain transfer function. Through MATLAB simulation, we used P, PI, PD, and PID controllers to correct and shape the output waveform and shorten the response time of the biological system so that it attains stability or critical stability; detecting a divergent response in simulation obviates the need for actual experiments and saves much in material costs and experimentation time. In this paper, we characterize a linearized model of common biological systems and establish physical relationships among electrical systems, rotary motion systems, and biological systems. Observing these physical analogies helps in the control design of biological systems.
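As a hedged illustration of the workflow described in the abstract (not the author's MATLAB code), the Python sketch below linearizes a one-state protein-production model with a Hill-type nonlinearity via a first-order Taylor expansion around its steady state, notes the resulting first-order "RC-like" transfer function gain/(tau*s + 1), and closes the loop with a simple discrete PI controller. All parameter values are invented.

```python
# Sketch: linearize a Hill-type gene-expression model around its steady state and
# regulate the linearized plant with a PI controller (illustrative values only).
import numpy as np

# Nonlinear model: dp/dt = alpha * u / (K + u) - gamma * p
alpha, K, gamma = 2.0, 1.0, 0.5
u0 = 1.0                                   # nominal input (e.g. inducer level)
p0 = alpha * u0 / (K + u0) / gamma         # steady state for u0

# First-order Taylor expansion around (p0, u0):  d(dp)/dt = a*dp + b*du
a = -gamma                                 # df/dp at the operating point
b = alpha * K / (K + u0) ** 2              # df/du at the operating point
tau, gain = -1.0 / a, -b / a               # first-order plant  gain / (tau*s + 1)
print(f"linearized plant: {gain:.2f} / ({tau:.2f} s + 1)   (RC-circuit analogue)")

# Discrete-time PI control of the linearized deviation model (forward Euler).
dt, T = 0.01, 20.0
kp, ki = 2.0, 1.0
setpoint = 0.5                             # desired deviation from p0
dp, integ = 0.0, 0.0
for _ in range(int(T / dt)):
    err = setpoint - dp
    integ += err * dt
    du = kp * err + ki * integ             # PI law on the deviation variables
    dp += dt * (a * dp + b * du)           # linearized plant dynamics
print(f"final deviation {dp:.3f} (target {setpoint}), i.e. protein level ~ {p0 + dp:.3f}")
```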
APA, Harvard, Vancouver, ISO, and other styles
17

"A Study of Backward Compatible Dynamic Software Update." Doctoral diss., 2015. http://hdl.handle.net/2286/R.I.36032.

Full text
Abstract:
Dynamic software update (DSU) enables a program to be updated while it is running. DSU aims to minimize the loss due to program downtime for updates. Usually DSU is done in three steps: suspending the execution of the old program, mapping the execution state from the old program to a new one, and resuming execution of the new program with the mapped state. The semantic correctness of DSU depends largely on the state mapping, which nowadays is mostly composed manually by developers. However, the manual construction of a state mapping does not necessarily ensure a sound and dependable state mapping. This dissertation presents a methodology to assist developers by automating the construction of a partial state mapping with a guarantee of correctness. The dissertation includes a detailed study of DSU correctness and automatic state mapping for server programs with an established user base. At first, the dissertation presents a formal treatment of DSU correctness and of the state mapping problem. Then the dissertation argues that, for programs with an established user base, dynamic updates must be backward compatible. The dissertation next presents a general definition of backward compatibility that specifies the allowed changes in program interaction between an old version and a new version, and identifies patterns of code evolution that result in backward compatible behavior. Thereafter the dissertation presents formal definitions of these patterns, together with proofs that any changes to programs within these patterns result in a backward compatible update. To show the applicability of the results, the dissertation presents SitBack, a program analysis tool that takes an old version of a program and a new one as input and computes a partial state mapping under the assumption that the new version is backward compatible with the old version. SitBack does not handle all kinds of changes, and it reports the incomplete parts of a state mapping to the user. The dissertation presents a detailed evaluation of SitBack which shows that the methodology of automatic state mapping is promising in dealing with real-world program updates; for example, SitBack produces state mappings for 17-75% of the changed functions. Furthermore, SitBack generates automatic state mappings that lead to successful DSU. In conclusion, the study presented in this dissertation assists developers in developing state mappings for DSU by automating their construction with a correctness guarantee, which ultimately helps the adoption of DSU.
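A minimal sketch of the state-mapping problem the dissertation formalizes, written in Python rather than C: an old and a new version of a tiny counter "server", a hand-written state mapping, and a check that, after the update, the new version answers old-protocol requests exactly as the old one would (the backward-compatibility obligation). The names and the toy protocol are invented; SitBack itself analyzes real programs and generates such mappings automatically where it can.

```python
# Toy dynamic software update: suspend the old version, map its state, resume the
# new version, then check backward compatibility on old-protocol requests.

class CounterV1:
    def __init__(self):
        self.count = 0
    def handle(self, req):
        if req == "inc":
            self.count += 1
        return self.count

class CounterV2:
    """New version keeps a history list; it still supports the old 'inc' request."""
    def __init__(self):
        self.history = []
    def handle(self, req):
        if req == "inc":
            self.history.append(req)
        return len(self.history)

def map_state(old):
    """Manually written state mapping old -> new (what SitBack-style tools automate)."""
    new = CounterV2()
    new.history = ["inc"] * old.count      # reconstruct an equivalent new-version state
    return new

def backward_compatible(old, new, requests):
    """After the update, the new version must answer old-protocol requests like the old one."""
    return all(old.handle(r) == new.handle(r) for r in requests)

old = CounterV1()
for r in ["inc", "inc", "inc"]:            # traffic served before the update
    old.handle(r)
new = map_state(old)                       # DSU: map execution state, resume new code
print("backward compatible:", backward_compatible(old, new, ["inc", "query", "inc"]))
```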
Dissertation/Thesis
Doctoral Dissertation Computer Science 2015
APA, Harvard, Vancouver, ISO, and other styles
18

Viswanath, Vinod. "Correct low power design transformations for hardware systems." 2013. http://hdl.handle.net/2152/21409.

Full text
Abstract:
We present a generic proof methodology to automatically prove correctness of design transformations introduced at the Register-Transfer Level (RTL) to achieve lower power dissipation in hardware systems. We also introduce a new algorithm to reduce switching activity power dissipation in microprocessors. We further apply our technique in a completely different domain of dynamic power management of Systems-on-Chip (SoCs). We demonstrate our methodology on real-life circuits. In this thesis, we address the dual problem of transforming hardware systems at higher levels of abstraction to achieve lower power dissipation, and a reliable way to verify the correctness of the afore-mentioned transformations. The thesis is in three parts. The first part introduces Instruction-driven Slicing, a new algorithm to automatically introduce RTL/System level annotations in microprocessors to achieve lower switching power dissipation. The second part introduces Dedicated Rewriting, a rewriting based generic proof methodology to automatically prove correctness of such high-level transformations for lowering power dissipation. The third part implements dedicated rewriting in the context of dynamically managing power dissipation of mobile and hand-held devices. We first present instruction-driven slicing, a new technique for annotating microprocessor descriptions at the Register Transfer Level in order to achieve lower power dissipation. Our technique automatically annotates existing RTL code to optimize the circuit for lowering power dissipated by switching activity. Our technique can be applied at the architectural level as well, achieving similar power gains. We first demonstrate our technique on architectural and RTL models of a 32-bit OpenRISC pipelined processor (OR1200), showing power gains for the SPEC2000 benchmarks. These annotations achieve reduction in power dissipation by changing the logic of the design. We further extend our technique to an out-of-order superscalar core and demonstrate power gains for the same SPEC2000 benchmarks on architectural and RTL models of PUMA, a fixed point out-of-order PowerPC microprocessor. We next present dedicated rewriting, a novel technique to automatically prove the correctness of low power transformations in hardware systems described at the Register Transfer Level. We guarantee the correctness of any low power transformation by providing a functional equivalence proof of the hardware design before and after the transformation. Dedicated rewriting is a highly automated deductive verification technique specially honed for proving correctness of low power transformations. We provide a notion of equivalence and establish the equivalence proof within our dedicated rewriting system. We demonstrate our technique on a non-trivial case study. We show equivalence of a Verilog RTL implementation of a Viterbi decoder, a component of the DRM System-On-Chip (SoC), before and after the application of multiple low power transformations. We next apply dedicated rewriting to a broader context of holistic power management of SoCs. This in turn creates a self-checking system and will automatically flag conflicting constraints or rules. Our system will manage power constraint rules using dedicated rewriting specially honed for dynamic power management of SoC designs. Together, this provides a common platform and representation to seamlessly cooperate between hardware and software constraints to achieve maximum platform power optimization dynamically during execution. 
We demonstrate our technique in multiple contexts on an SoC design of the state-of-the-art next generation Intel smartphone platform. Finally, we give a proof of instruction-driven slicing. We first prove that the annotations automatically introduced in the OR1200 processor preserve the original functionality of the machine using the ACL2 theorem prover. Then we establish the same proof within our dedicated rewriting system, and discuss the merits of such a technique and a framework. In the context of today's shrinking hardware and mobile internet devices, lowering power dissipation is a key problem. Verifying the correctness of transformations which achieve that is usually a time-consuming affair. Automatic and reliable methods of verification that are easy to use are extremely important. In this thesis we have presented one such transformation, and a generic framework to prove correctness of that and similar transformations. Our methodology is constructed in a manner that easily and seamlessly fits into the design cycle of creating complicated hardware systems. Our technique is also general enough to be applied in a completely different context of dynamic power management of mobile and hand-held devices.
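The general flavor of a switching-activity-reducing RTL transformation (though not Viswanath's instruction-driven slicing algorithm itself) can be illustrated with a toy Python model: a datapath whose multiplier operand registers toggle on every cycle versus one annotated to hold its operands whenever the current instruction does not use the multiplier, with register toggles counted as a rough proxy for dynamic power. The instruction mix, bit widths, and counting scheme are assumptions for illustration.

```python
# Toy model: operand gating guided by the instruction stream reduces register toggles,
# a proxy for switching-activity power (illustration, not instruction-driven slicing).
import random

def toggles(old, new, width=16):
    """Number of bits that flip when a register goes from old to new."""
    return bin((old ^ new) & ((1 << width) - 1)).count("1")

def run(program, gate_unused_mul):
    rng = random.Random(0)                 # same operand stream for both runs
    mul_a = mul_b = 0
    total = 0
    for instr in program:
        a, b = rng.getrandbits(16), rng.getrandbits(16)
        if instr == "mul" or not gate_unused_mul:
            # Ungated design: multiplier operand registers are rewritten every cycle.
            total += toggles(mul_a, a) + toggles(mul_b, b)
            mul_a, mul_b = a, b
        # Gated design: when instr != "mul", the operands are held, so nothing toggles.
    return total

rng_prog = random.Random(1)
program = [rng_prog.choice(["add", "ld", "mul", "st"]) for _ in range(10000)]
print("toggles without gating:", run(program, gate_unused_mul=False))
print("toggles with gating   :", run(program, gate_unused_mul=True))
```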
text
APA, Harvard, Vancouver, ISO, and other styles
19

Lockett, Marcia Stephanie. "A comparative study of Roy Campbell's translation of the poetry of Federico Garcia Lorca." Thesis, 1994. http://hdl.handle.net/10500/17224.

Full text
Abstract:
Roy Campbell (1901-1957), who ranks among South Africa's leading poets, was also a gifted and skilled translator. Shortly after the Second World War he was commissioned by the Spanish scholar Rafael Martinez Nadal to supply the English translations for a planned edition of the complete works of the Spanish poet and dramatist, Federico Garcia Lorca, to be published by Faber and Faber, London. However, most of these translations remained unpublished until 1985, when the poetry translations (but not the translations of the plays) were included in Volume II of a four-volume edition entitled Campbell: Collected Works, edited by Alexander, Chapman and Leveson, and published in South Africa. In 1986/7, Eisenberg published a collection of letters from the archives of the Spanish poet and publisher Guillermo de Torre in a Spanish journal, Anales de Literatura Española, Alicante, which revealed that the politically-motivated intervention in 1946 of Arturo and Ilsa Barea, Republican supporters who were living in exile in London, prevented the publication of Campbell's Lorca translations. These poetry translations are studied here and compared with the work of other translators of Lorca, ranging from Lloyd (1937) to Havard (1990), and including some Afrikaans versions by Uys Krige (1987). For the analysis an eclectic framework is used that incorporates ideas from work on the relevance theory of communication (Sperber and Wilson 1986) as applied to translation theory by Gutt (1990, 1991) and Bell (1991), among others, together with Eco's (1979, 1990) semiotic-interpretive approach. The analysis shows that although Campbell's translating is constrained by its purpose of forming part of a Lorca edition, his versions of Lorca's poetry are nevertheless predominantly oriented towards the target-language reader. In striving to communicate Lorca's poetry to an English audience, Campbell demonstrates his skill and creativity at all levels of language. Campbell's translations that were published during his lifetime earned him a place among the best poetry translators of this century. The Lorca translations, posthumously added to the corpus of his published work, enhance an already established reputation as a fine translator of poetry.
Classics & Modern European Languages
D. Lit. et Phil. (Spanish)
APA, Harvard, Vancouver, ISO, and other styles
20

Johnson, Wesley Irvin. "Evangelicals encountering Muslims : a pre-evangelistic approach to the Qu'ran." Thesis, 2015. http://hdl.handle.net/10500/19987.

Full text
Abstract:
This thesis looks at the development of Protestant and Evangelical encounter with Muslims from the earliest days of the Modern missions movement. Special attention is given to the dynamic equivalence model (DEM), which resulted in a new method for interpreting the Qur’an called the Christian Qur’anic hermeneutic (CQH). I begin with the early Protestant ministers among Muslims, such as Martyn and Muir. Pfander’s (1910) book, The balance of truth, embodies the view that the Qur’an teaches an irrevocable status of inspiration for the Old and New Testaments. The early and mid-twentieth century saw a movement away from usage of the Qur’an during Evangelical encounter with Muslims. Direct model advocates bypass the Qur’an and other religious questions for an immediate presentation of the gospel. The 1970s saw the development of the DEM, which produced significant changes in how Evangelicals encountered Muslims. Pioneers like Nida, Tabor, and Kraft implemented dynamic equivalence as a model in Evangelical ministry. Concurrently, Accad and Cragg laid groundwork for the CQH. The DEM creates obscurity in anthropology by promoting an evaluation of cultural forms as essentially neutral. This is extended to religious forms, even the Qur’an. Such a simple, asocial value for symbols is not sufficient to account for all of human life. Cultural forms, especially those intrinsically religious, are parts of a complex system. Meaning cannot be transferred or equivocated with integrity from one context to another without a corresponding re-evaluation of the entire system. Theological difficulties are also produced by the DEM and the CQH, and include the assigning a quasi-inspirational status to the Qur’an and a denial of unique inspirational status to the Christian Scriptures. If the gospel is communicated through the Qur’an, then it is difficult to deny some level of God-given status to it. Further, the Christian Scriptures are not unique as inspired literature. My proposal for how to use the Qur’an responsibly looks to Bavinck’s elenctics and is presented as Qur’anic pre-evangelism. Rather than communicating Biblical meaning through the Qur’an, Evangelicals can focus on areas of the Qur’an that coincide with a lack of assurance felt by Muslims in anthropology.
Christian Spirituality, Church History and Missiology
D. Th. (Missiology)
APA, Harvard, Vancouver, ISO, and other styles
21

Gbohoui, William Dieudonné Yélian. "Essays on the Effects of Corporate Taxation." Thèse, 2016. http://hdl.handle.net/1866/13976.

Full text
Abstract:
This thesis is a collection of three papers in macroeconomics and public finance. It develops Dynamic Stochastic General Equilibrium models to analyze the macroeconomic implications of corporate tax policies in the presence of imperfect financial markets. The first chapter analyzes the transmission mechanisms through which a re-timing of corporate profit taxes affects the economy. In an economy consisting of a government, a representative firm and a representative household, I establish a Ricardian equivalence theorem for the corporate profit tax. More precisely, I show that if financial markets are perfect, a re-timing of corporate profit taxes that leaves unchanged the present value of the total tax the firm pays over its lifetime has no real effect on the economy when the government uses a lump-sum tax. Then, in the presence of imperfect financial markets, I show that a temporary cut in the lump-sum corporate profit tax stimulates investment because it temporarily reduces the marginal cost of investment. Finally, my results indicate that if the tax is proportional to corporate profits, the expectation of high future taxes reduces the expected return on investment and mitigates the investment stimulus generated by the tax cut. The second chapter is co-authored with Rui Castro. In this paper, we quantify the effects of a temporary corporate profit tax cut on firms' individual investment and production decisions, as well as on macroeconomic aggregates, in the presence of imperfect financial markets. In a model where firms are subject to idiosyncratic productivity shocks, we first establish that credit rationing affects small (young) firms more than large firms. Among firms of the same size, the most productive are those that suffer most from the lack of liquidity resulting from financial market imperfections. We then show that, for each dollar of forgone tax revenue, investment and output increase by 26 and 3.5 cents, respectively. The cumulative effect amounts to increases in aggregate investment and output of 4.6 and 7.2 cents, respectively. At the individual level, our results indicate that the policy stimulates investment by small firms that are initially short of liquidity, while it reduces investment by large firms that are initially unconstrained. The third chapter analyzes the effects of the corporate income tax reform proposed by the U.S. Treasury in 1992. The proposal recommends eliminating taxes on dividends and capital gains and levying a single tax on corporate income. To do so, I use a dynamic stochastic general equilibrium model with imperfect financial markets in which firms are subject to idiosyncratic productivity shocks. The results indicate that abolishing dividend and capital gain taxes reduces distortions in firms' investment choices, stimulates investment and leads to a better allocation of capital.
However, to be financially sustainable, the reform requires raising the corporate profit tax rate from 34% to 42%. This higher tax rate discourages capital accumulation. Overall, the reform reduces capital accumulation and output by 8% and 1%, respectively. Nevertheless, it improves capital allocation by 20%, generating productivity gains of 1.41% and a modest increase in consumer welfare.
This thesis is a collection of three papers in macroeconomics and public finance. It develops Dynamic Stochastic General Equilibrium models, with a special focus on financial frictions, to analyze the effects of changes in corporate tax policy on firm-level and macroeconomic aggregates. Chapter 1 develops a dynamic general equilibrium model with a representative firm to assess the short-run effects of changes in the timing of corporate profit taxes. First, it extends the Ricardian equivalence result to an environment with production and establishes that a temporary corporate profit tax cut financed by a future tax increase has no real effect when the tax is lump sum and capital markets are perfect. Second, I assess how strong the Ricardian forces are in the presence of financing frictions. I find that when equity issuance is costly, and when the firm faces a lower bound on dividend payments, a temporary tax cut temporarily reduces the marginal cost of investment and implies a positive marginal propensity to invest. Third, I analyze how the intertemporal substitution effects of tax cuts interact with the stimulative effects when the tax is not lump sum. The results show that when the tax is proportional to corporate profit, expectations of high future tax rates reduce the expected marginal return on investment and mitigate the stimulative effects of tax cuts. The net investment response depends on the relative strength of each effect. Chapter 2 is co-authored with Rui Castro. In this paper, we quantify how effective temporary corporate tax cuts are in stimulating investment and output via the relaxation of financing frictions. Policymakers often rely on temporary corporate tax cuts in order to provide incentives for business investment in recession times. A common motivation is that such policies help relax financing frictions, which might bind more during recessions. We assess whether this mechanism is effective. In an industry equilibrium model where some firms are financially constrained, marginal propensities to invest are high. We consider a transitory corporate tax cut, funded by public debt. By increasing current cash flows, corporate tax cuts are effective at stimulating current investment. On impact, aggregate investment increases by 26 cents per dollar of tax stimulus, and aggregate output by 3.5 cents. The stimulative output effects are long-lived, extending past the period in which the policy is reversed, leading to a cumulative effect multiplier on output of 7.2 cents. A major factor preventing larger effects is that this policy tends to significantly crowd out investment among the larger, unconstrained firms. Chapter 3 studies the effects of the U.S. Treasury Department's 1992 proposal of a Comprehensive Business Income Tax (CBIT) reform. Under the U.S. tax code, dividends and capital gains are taxed at the firm level and taxed again when distributed to shareholders. This double taxation may reduce the overall return on investment and induce inefficient capital allocation, so tax reforms have been at the center of numerous debates among economists and policymakers. As part of this debate, the U.S. Department of the Treasury proposed in 1992 to abolish dividend and capital gain taxes and to use a Comprehensive Business Income Tax (CBIT) to levy tax on corporate income. In this paper, I use an industry equilibrium model where firms are subject to financing frictions and to idiosyncratic productivity and entry/exit shocks to assess the long-run effects of the CBIT.
I find that the elimination of the capital gain and dividend taxes is not self-financing. More precisely, the corporate profit tax rate would have to be increased from 34% to 42% to keep the reform revenue-neutral. Overall, the results show that the CBIT reform reduces capital accumulation and output by 8% and 1%, respectively. However, it improves capital allocation by 20%, resulting in an increase in aggregate productivity of 1.41% and in a modest welfare gain.
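The benchmark neutrality result of Chapter 1 can be sketched in a few lines of algebra. This is a stylized rendering under the stated perfect-capital-market, lump-sum assumptions; the thesis works with a full DSGE model.

```latex
% Stylized rendering of the Chapter 1 neutrality result, under the stated
% perfect-capital-market and lump-sum assumptions (the thesis uses a full DSGE model).
% With discount factor \beta, firm value is
\[
V_0 \;=\; \sum_{t=0}^{\infty} \beta^{t}\bigl[\pi(k_t) - i_t - T_t\bigr],
\qquad k_{t+1} = (1-\delta)\,k_t + i_t .
\]
% A re-timing of the lump-sum taxes \{T_t\} \to \{T'_t\} that keeps
\[
\sum_{t=0}^{\infty} \beta^{t} T'_t \;=\; \sum_{t=0}^{\infty} \beta^{t} T_t
\]
% leaves V_0 unchanged and does not enter the first-order conditions for i_t,
% so the optimal investment plan, and hence all real allocations, are unchanged.
```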
APA, Harvard, Vancouver, ISO, and other styles