Dissertations / Theses on the topic 'Transformative code'

Consult the top 50 dissertations / theses for your research on the topic 'Transformative code.'


1

Li, Pei. "Unified system of code transformation and execution for heterogeneous multi-core architectures." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0441/document.

Full text
Abstract:
Heterogeneous architectures are widely used in high-performance computing. However, developing applications for them is time-consuming and error-prone, because going from a single accelerator to multiple ones requires dealing with potentially non-uniform domain decomposition, inter-accelerator data movements, and dynamic load balancing. The aim of this thesis is to propose a parallel programming solution for novice developers that eases the complex coding process and guarantees code quality. We highlighted and analysed the shortcomings of existing solutions and proposed a new programming tool called STEPOCL, along with a new domain-specific language designed to simplify the development of applications for heterogeneous architectures. We evaluated both the performance and the usefulness of STEPOCL. The results show that: (i) the performance of an application written with STEPOCL scales linearly with the number of accelerators, (ii) the performance of code generated by STEPOCL is comparable to that of a handwritten version, (iii) workloads that are too large for the memory of a single device can be run across multiple devices, and (iv) thanks to STEPOCL, the number of lines of code required to write an application for multiple accelerators is roughly divided by ten.
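
As a rough, hypothetical illustration of the bookkeeping such a tool automates, the Python/NumPy sketch below (not taken from the thesis) splits a 2D stencil domain into per-device row blocks with ghost rows, which is exactly the kind of manual decomposition and data-exchange management the abstract describes.

```python
import numpy as np

def decompose_rows(grid, n_devices, halo=1):
    """Split a 2D grid into row blocks, one per device, each padded with
    `halo` ghost rows that would have to be exchanged between iterations."""
    bounds = np.linspace(0, grid.shape[0], n_devices + 1, dtype=int)
    chunks = []
    for d in range(n_devices):
        lo = max(bounds[d] - halo, 0)
        hi = min(bounds[d + 1] + halo, grid.shape[0])
        chunks.append((d, lo, hi, grid[lo:hi].copy()))
    return chunks

# Example: a 1200x1200 stencil domain spread over 3 hypothetical accelerators.
grid = np.random.rand(1200, 1200)
for device_id, lo, hi, block in decompose_rows(grid, n_devices=3):
    print(f"device {device_id}: rows {lo}..{hi - 1}, block shape {block.shape}")
```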
APA, Harvard, Vancouver, ISO, and other styles
2

Boije, Niklas, and Kristoffer Borg. "Semi-automatic code-to-code transformer for Java : Transformation of library calls." Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129861.

Full text
Abstract:
Having the ability to perform large automatic software changes in a code base gives new possibilities for software restructuring and cost savings. The possibility of replacing software libraries in a semi-automatic way has been studied. String metrics are used to find equivalents between two libraries by looking at class- and method names. Rules based on the equivalents are then used to describe how to apply the transformation to the code base. Using the abstract syntax tree, locations for replacements are found and transformations are performed. After the transformations have been performed, an evaluation of the saved effort of doing the replacement automatically versus manually is made. It shows that a large part of the cost can be saved. An additional evaluation calculating the maintenance cost saved annually by changing libraries is also performed in order to prove the claim that an exchange can reduce the annual cost for the project.
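
The abstract does not spell out which string metric is used; as a hedged sketch of the general idea, the snippet below uses Python's standard-library difflib (not necessarily the metric from the thesis) to pair method names from two hypothetical libraries before any rule-based rewriting would take place.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """A simple string metric: proportion of matching characters, 0.0-1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_names(old_names, new_names, threshold=0.6):
    """Map each name of the old API to its most similar new-API name."""
    mapping = {}
    for old in old_names:
        best = max(new_names, key=lambda new: similarity(old, new))
        if similarity(old, best) >= threshold:
            mapping[old] = best
    return mapping

# Hypothetical method names from two logging libraries.
old_api = ["Logger.logError", "Logger.logWarning", "Logger.flushBuffer"]
new_api = ["Log.error", "Log.warn", "Log.flush", "Log.setLevel"]
print(match_names(old_api, new_api))
```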
APA, Harvard, Vancouver, ISO, and other styles
3

Iftikhar, Muhammad Usman. "Java Code Transformation for Parallelization." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-13179.

Full text
Abstract:
This thesis describes techniques for defining independent tasks in Java programs for parallelization. Existing Java parallelization APIs such as JOMP, Parallel Java, Deterministic Parallel Java, JConqurr and JaMP are discussed. JaMP is an implementation of OpenMP for Java, with a set of OpenMP directives and runtime library functions. JaMP provides a source-to-bytecode compiler, which does not help in debugging the parallel source code: there is no design-time syntax checking of JaMP directives, and mistakes only become apparent when the source code is compiled with the JaMP compiler. We therefore extended JaMP by adding a compiler option that emits parallel source code, and we created an Eclipse plug-in that supports design-time syntax checking of JaMP directives. The plug-in also lets programmers obtain parallel source code with a single click instead of invoking the JaMP compiler from the shell.
APA, Harvard, Vancouver, ISO, and other styles
4

Lecerf, Jason. "Designing language-agnostic code transformation engines." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I077.

Full text
Abstract:
Automatic code transformations are needed in various situations: refactorings, cross-language migrations, code specialization, and so on. Code transformation engines work by finding occurrences of a user-specified pattern in the source code and rewriting them according to a transformation. The transformation either rewrites the occurrences, elements of the intermediate representation (IR) of the language, into new elements, or directly rewrites their source code. In this work we focus on source rewriting, since it offers more flexibility through arbitrary transformations, which is especially useful for migrations and specializations. Matching patterns come in two flavours, explicit and syntactic. The former requires the user to know the IR of the language, a heavy learning burden. The latter relies only on the syntax of the matched language and not on its IR, but requires significantly more work to implement the language back-ends. Language experts tend to know both the IR and the syntax of a language, while other users know only the syntax. We propose a pattern matching engine offering a hybrid pattern representation: both explicit and syntactic matching are available in the same pattern. The engine defaults to syntactic matching, as it has the lowest barrier to entry. To counterbalance the implementation cost of language back-ends for syntactic pattern matching, we take a generative approach: the hybrid pattern matching engine is coupled with a parser generator that produces generalized LR (GLR) parsers capable of parsing not only the source code to be rewritten but also the pattern to be matched. The back-end implementer then only needs to add one line to the grammar of the language to gain access to the pattern matching engine for that language. The approach relies on GLR parsers that can fork and keep track of each individual sub-parser. Such GLR implementations do not scale when heavy forking is needed to handle ambiguities, and our approach adds even more forking. To prevent an explosion of the execution time, our Fibered-GLR parsers merge more often and allow on-the-fly disambiguation during the parse through side effects.
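
The Fibered-GLR machinery itself is not reproduced here; as a much simplified, hypothetical sketch of what matching a syntactic pattern with "holes" against a parse tree means, the snippet below binds a metavariable to whatever subtree it covers.

```python
# Tiny parse trees as nested tuples, e.g. ("call", "malloc", ("var", "n")).
# A pattern may contain holes written "?name"; a successful match binds
# each hole to the subtree it covered.

def match(pattern, tree, bindings=None):
    bindings = {} if bindings is None else bindings
    if isinstance(pattern, str) and pattern.startswith("?"):
        bindings[pattern[1:]] = tree              # hole: capture any subtree
        return bindings
    if isinstance(pattern, tuple) and isinstance(tree, tuple):
        if len(pattern) != len(tree):
            return None
        for p, t in zip(pattern, tree):
            if match(p, t, bindings) is None:
                return None
        return bindings
    return bindings if pattern == tree else None

# Pattern: any call to malloc, whatever its size argument looks like.
pattern = ("call", "malloc", "?size")
tree = ("call", "malloc", ("binop", "*", ("var", "n"), ("const", 4)))
print(match(pattern, tree))  # {'size': ('binop', '*', ('var', 'n'), ('const', 4))}
```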
APA, Harvard, Vancouver, ISO, and other styles
5

Manilov, Stanislav Zapryanov. "Analysis and transformation of legacy code." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/29612.

Full text
Abstract:
Hardware evolves faster than software. While a hardware system might need replacement every one to five years, the average lifespan of a software system is a decade, with some instances living up to several decades. Inevitably, code outlives the platform it was developed for and may become legacy: development of the software stops, but maintenance has to continue to keep up with the evolving ecosystem. No new features are added, but the software is still used to fulfil its original purpose. Even in the cases where it is still functional (which discourages its replacement), legacy code is inefficient, costly to maintain, and a risk to security. This thesis proposes methods to leverage the expertise put into the development of legacy code and to extend its useful lifespan, rather than throwing it away. A novel methodology is proposed for automatically exploiting platform-specific optimisations when retargeting a program to another platform. The key idea is to leverage the optimisation information embedded in vector processing intrinsic functions. The performance of the resulting code is shown to be close to that of manually retargeted programs, but with the human labour removed. Building on top of that, the question of discovering optimisation information when there are no hints in the form of intrinsics or annotations is investigated. This thesis postulates that such information can potentially be extracted by profiling the data flow during executions of the program. A context-aware data dependence profiling system is described, detailing previously overlooked aspects in related research. The system is shown to be essential in surpassing the information that can be inferred statically, in particular about loop iterators. Loop iterators are the controlling part of a loop. This thesis describes and evaluates a system for extracting the loop iterators in a program. It is found to significantly outperform previously known techniques and further increases the amount of information about the structure of a program that is available to a compiler. Combining this system with data dependence profiling improves its results even more. Loop iterator recognition enables other code-modernising techniques, such as source code rejuvenation and commutativity analysis. The former increases the use of idiomatic code and as a result increases the maintainability of the program. The latter can potentially drive parallelisation and thus dramatically improve runtime performance.
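
The profiling system itself is only summarized above; as a hypothetical sketch of what dynamic data-dependence profiling means in practice, the snippet below scans a made-up memory-access trace and reports loop-carried read-after-write dependences, the kind of information the abstract says cannot always be inferred statically.

```python
def loop_carried_raw(trace):
    """trace: iterable of (iteration, op, address) with op in {'R', 'W'}.
    Returns the (writer_iteration, reader_iteration, address) triples of
    loop-carried read-after-write dependences observed at run time."""
    last_write = {}           # address -> iteration of the most recent write
    deps = set()
    for it, op, addr in trace:
        if op == "R" and addr in last_write and last_write[addr] < it:
            deps.add((last_write[addr], it, addr))
        elif op == "W":
            last_write[addr] = it
    return deps

# Hypothetical trace of `a[i] = a[i-1] + 1`: iteration i reads what i-1 wrote.
trace = [(0, "W", 100), (1, "R", 100), (1, "W", 104), (2, "R", 104), (2, "W", 108)]
print(loop_carried_raw(trace))   # {(0, 1, 100), (1, 2, 104)}
```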
APA, Harvard, Vancouver, ISO, and other styles
6

Janapa, Reddi Vijay. "Deploying dynamic code transformation in modern computing environments." Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Newman, Christian D. "A SOURCE CODE TRANSFORMATION LANGUAGE TO SUPPORT SOFTWARE EVOLUTION." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1500560236029486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hashii, Brant. "Policy-based protection against external programs via code transformation /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2003. http://uclibs.org/PID/11984.

Full text
Abstract:
Thesis (Ph. D.)--University of California, Davis, 2004. Degree granted in Computer Science. Dissertation completed in 2003; degree granted in 2004. Also available via the World Wide Web. (Restricted to UC campuses.)
APA, Harvard, Vancouver, ISO, and other styles
9

Clapham, Jessica. "Code-switching, pedagogy and transformation : teachers' perceptions of the dynamics of code-switching and bilingual identity." Thesis, University of Exeter, 2016. http://hdl.handle.net/10871/21887.

Full text
Abstract:
This thesis presents the findings from a qualitative investigation into teachers’ use of code-switching in bilingual classrooms in Wales. The results of the 2001 census show a slight increase in the proportion of Welsh speakers in Wales, to 21%. This change, combined with increasing governmental support for the Welsh language, suggests that we may now be entering a period of stable Welsh-English bilingualism for those who speak Welsh. This study builds upon previous research into teachers’ use of code-switching by investigating six teachers’ perceptions of code-switching during the research period. It is proposed that teachers’ perceptions and awareness of their bilingual identity, examined through case studies, have a central role in the decisions made in the bilingual classroom. Synthesising various approaches to code-switching provides educators with an overview of code-switching and its implications for instruction and for the classroom as a community. This study makes an important contribution to the understanding of the dynamics of code-switching at classroom level rather than at syntactic level, as there is very little research into the bilingual teaching interface in Wales. Ideally, the findings will contribute to the debate on multilingual practice as a natural and effective means of language teaching as well as a force for intercultural understanding. The author is interested in exploring how far and in what ways teachers are aware of the benefits of code-switching, and in raising awareness of the relationship between code choice and wider social factors. The study has two main objectives: firstly, to investigate how far teachers employ code-switching as a strategy and their reasons for doing so; secondly, to explore how far, and in what ways, these teachers’ identities undergo a process of transformation as a result of their experiences of the research process. The study provides a number of useful insights into the dynamic interplay between code-switching and learning as a legitimate way of using a shared language to scaffold pupils’ learning. A range of teachers’ perceptions of code-switching was detected and the significance of these findings is discussed. The study provides an insight into perceptions of the functions of and rationale for code-switching from a teacher’s perspective, which may contribute to the multilingual turn debate and have pedagogical implications.
APA, Harvard, Vancouver, ISO, and other styles
10

Brunnlieb, Malte [Verfasser]. "Source Code Transformation based on Architecture Implementation Patterns / Malte Brunnlieb." München : Verlag Dr. Hut, 2019. http://d-nb.info/1196415056/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Noaje, Gabriel. "Un environnement parallèle de développement haut niveau pour les accélérateurs graphiques : mise en œuvre à l’aide d’OPENMP." Thesis, Reims, 2013. http://www.theses.fr/2013REIMS028/document.

Full text
Abstract:
Graphics processing units (GPUs), initially dedicated to accelerating graphics workloads, have a highly parallel architecture. Innovations in both hardware and programming languages opened up the GPGPU domain, where graphics cards are used as compute accelerators for general-purpose HPC applications. Our main objective is to facilitate the use of these new architectures for high-performance computing needs; our research follows two complementary directions. The first direction concerns automatic code transformation from high-level code into equivalent low-level code capable of running on accelerators. To this end we implemented a code transformer that can handle the parallel "for" loops (simple or nested) of an OpenMP code and convert them into equivalent CUDA code, kept in a human-readable form that allows further optimizations. Moreover, the future of HPC lies in distributed architectures based on nodes equipped with accelerators. Specific execution schemes have to be used in order to allow users to benefit from such multi-GPU nodes. We conducted a comparative study which revealed that using OpenMP threads is the most effective way to control multiple graphics cards and to manage communications efficiently within a multi-GPU node.
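
The thesis's OpenMP-to-CUDA transformer is not shown here; as a hedged sketch of the core index mapping such a transformation relies on, the Python snippet below emits a CUDA kernel skeleton for a hypothetical parallel loop, with one GPU thread per iteration and a guard for the last block.

```python
def loop_to_cuda(kernel_name, params, index_var, upper_bound, body):
    """Emit CUDA C text for a parallel 'for': one GPU thread per iteration,
    using the usual global-index mapping plus a bounds guard."""
    return (
        f"__global__ void {kernel_name}({params}) {{\n"
        f"    int {index_var} = blockIdx.x * blockDim.x + threadIdx.x;\n"
        f"    if ({index_var} < {upper_bound}) {{ {body} }}\n"
        f"}}"
    )

# Hypothetical OpenMP source:
#   #pragma omp parallel for
#   for (int i = 0; i < n; i++) c[i] = a[i] + b[i];
print(loop_to_cuda("vector_add", "float *a, float *b, float *c, int n",
                   "i", "n", "c[i] = a[i] + b[i];"))
```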
APA, Harvard, Vancouver, ISO, and other styles
12

Paleri, Vineeth Kumar. "An Environment for Automatic Generation of Code Optimizers." Thesis, Indian Institute of Science, 1999. https://etd.iisc.ac.in/handle/2005/82.

Full text
Abstract:
Code optimization or code transformation is a complex function of a compiler, involving analyses and modifications with the entire program as its scope. In spite of this complexity, hardly any tools exist to support this function of the compiler. This thesis presents the development of a code transformation system, specifically for scalar transformations, which can be used either as a tool to assist the generation of code transformers or as an environment for experimentation with code transformations. The development of the code transformation system involves the formal specification of code transformations using dependence relations. We have written formal specifications for the whole class of traditional scalar transformations, including induction variable elimination - a complex transformation - for which no formal specifications are available in the literature. All transformations considered in this thesis are global. Most of the specifications given here, for which specifications are already available in the literature, are improved versions in terms of conservativeness. The study of algorithms for code transformations, in the context of their formal specification, led us to the development of a new algorithm for partial redundancy elimination. The basic idea behind the algorithm is the new concepts of safe partial availability and safe partial anticipability. Our algorithm is computationally and lifetime optimal. It works on flow graphs whose nodes are basic blocks, which makes it practical. In comparison with existing algorithms, the new algorithm also requires four unidirectional analyses, but saves some preprocessing time. The main advantage of the algorithm is its conceptual simplicity. The code transformation system provides an environment in which one can specify a transformation using dependence relations (in the specification language we have designed), generate code for a transformer from its specification, and experiment with the generated transformers on real-world programs. The system takes a program to be transformed, in C or FORTRAN, as input, translates it into intermediate code, interacts with the user to decide the transformation to be performed, computes the necessary dependence relations using the dependence analyzer, applies the specified transformer on the intermediate code, and converts the transformed intermediate code back to high-level code. The system is unique of its kind, providing a complete environment for the generation of code transformers and allowing experimentation with them using real-world programs.
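
The thesis's specific notions of safe partial availability and safe partial anticipability are not detailed in this abstract; the toy before/after pair below (plain Python, not the thesis's intermediate representation) only illustrates what a partial redundancy is and how an insertion removes it.

```python
def before(a, b, flag):
    # On the `flag` path, `a + b` is computed twice: once in the branch and
    # once after the join. On that path the second computation is redundant,
    # so the expression is *partially* redundant overall.
    if flag:
        x = a + b
    else:
        x = 0
    y = a + b
    return x + y

def after(a, b, flag):
    # PRE inserts the computation on the path where it was missing and
    # reuses the value after the join; every path now computes `a + b` once.
    if flag:
        t = a + b
        x = t
    else:
        t = a + b
        x = 0
    y = t
    return x + y
```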
APA, Harvard, Vancouver, ISO, and other styles
13

Paleri, Vineeth Kumar. "An Environment for Automatic Generation of Code Optimizers." Thesis, Indian Institute of Science, 1999. http://hdl.handle.net/2005/82.

Full text
Abstract:
Code optimization or code transformation is a complex function of a compiler, involving analyses and modifications with the entire program as its scope. In spite of this complexity, hardly any tools exist to support this function of the compiler. This thesis presents the development of a code transformation system, specifically for scalar transformations, which can be used either as a tool to assist the generation of code transformers or as an environment for experimentation with code transformations. The development of the code transformation system involves the formal specification of code transformations using dependence relations. We have written formal specifications for the whole class of traditional scalar transformations, including induction variable elimination - a complex transformation - for which no formal specifications are available in the literature. All transformations considered in this thesis are global. Most of the specifications given here, for which specifications are already available in the literature, are improved versions in terms of conservativeness. The study of algorithms for code transformations, in the context of their formal specification, led us to the development of a new algorithm for partial redundancy elimination. The basic idea behind the algorithm is the new concepts of safe partial availability and safe partial anticipability. Our algorithm is computationally and lifetime optimal. It works on flow graphs whose nodes are basic blocks, which makes it practical. In comparison with existing algorithms, the new algorithm also requires four unidirectional analyses, but saves some preprocessing time. The main advantage of the algorithm is its conceptual simplicity. The code transformation system provides an environment in which one can specify a transformation using dependence relations (in the specification language we have designed), generate code for a transformer from its specification, and experiment with the generated transformers on real-world programs. The system takes a program to be transformed, in C or FORTRAN, as input, translates it into intermediate code, interacts with the user to decide the transformation to be performed, computes the necessary dependence relations using the dependence analyzer, applies the specified transformer on the intermediate code, and converts the transformed intermediate code back to high-level code. The system is unique of its kind, providing a complete environment for the generation of code transformers and allowing experimentation with them using real-world programs.
APA, Harvard, Vancouver, ISO, and other styles
14

Newman, Christian D. "NORMALIZING-REFACTORINGS: SIMPLIFYING THE CONSTRUCTION OF SOURCE CODE TRANSFORMATIONS." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1385057030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Steijger, Tamara. "Downgrading Java 5.0 Projects : An approach based on source-code transformations." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2421.

Full text
Abstract:
The introduction of Java 5.0 came along with an extension of the language syntax. Several new language features, such as generic types and enumeration types, were added to the language specification. These features cause downward incompatibilities: code written in Java 5.0 will not run on older versions of the Java runtime environment. For some active projects, however, it is not possible to upgrade to newer Java versions, since some of their code might not be supported on Java 5.0. If one still wants to use components written in Java 5.0, these must be downgraded. Up to now this has mostly been accomplished by transforming the byte code of these programs. In this thesis, we present a set of transformations which turn Java 5.0 source code into Java 1.4 compatible code. We successfully apply these transformations to two larger projects and compare our approach to the hitherto common byte-code based tools.
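
Which concrete rewrites the thesis applies is not listed in this abstract; as a deliberately naive, hypothetical sketch of one such rewrite (removing generic type arguments, which Java's erasure discards anyway), the snippet below works on raw source text — a real downgrader would operate on the syntax tree and also insert the casts that become necessary at use sites.

```python
import re

GENERIC = re.compile(r"<[A-Za-z_][A-Za-z0-9_, <>?\[\]]*>")

def strip_generics(java_source: str) -> str:
    """Naively erase generic type arguments from Java source text.
    Repeated substitution handles nested generics; this is a sketch only
    and would mis-handle code it was never meant to see (e.g. shift-heavy
    expressions), which is why real tools work on the AST instead."""
    previous = None
    while previous != java_source:
        previous = java_source
        java_source = GENERIC.sub("", java_source)
    return java_source

print(strip_generics(
    "List<Map<String, Integer>> rows = new ArrayList<Map<String, Integer>>();"))
# -> "List rows = new ArrayList();"
```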
APA, Harvard, Vancouver, ISO, and other styles
16

ASSAR, SAAID. "Meta-modelisation pour la transformation des schemas et la generation de code." Paris 6, 1995. http://www.theses.fr/1995PA066757.

Full text
Abstract:
Software engineering workbenches (CASE tools) are complex software systems intended to assist the engineer in designing and building an information system. This thesis concerns the development of such tools. It extends the work on the RUBIS software engineering workbench and consists of two parts. The first part covers the development and integration of the PROQUEL language within the RUBIS system. PROQUEL is an executable language for specifying and prototyping applications with a database component. Execution relies on the creation of a prototype database and on the PROQUEL interpreter, which plays a fundamental role in the RUBIS architecture: it is used by all RUBIS modules to support the execution of conceptual specifications. The second part concerns the study, representation and automation of the derivation process of a transactional application, that is, the set of tasks the developer must carry out to build, from the conceptual specifications prototyped with RUBIS, a transactional application using the languages and tools of a target DBMS. To represent this process, a general process meta-model is used. This meta-model is decision-oriented: the unfolding of a process is seen as a sequence of decisions taken in specific situations, and each situation-decision pair forms a context. The resulting representation is a tree of contexts whose leaves are executable actions. These generation actions fall into three categories: transformation, implementation and compilation. To automate the derivation process, the thesis proposes a generic software solution. Genericity is obtained through the extensive use of meta-modelling techniques. An extended specification base stores a persistent representation of the process and of the products it manipulates. This base is organized in three instantiation levels, the highest level containing the set of meta-models. It is built with Telos, a knowledge representation language developed within the European projects DAIDA and NATURE. Transactional code generation corresponds to the execution of a particular instance of the process meta-model; this execution is handled by a generic process-enactment module capable of executing the actions attached to the leaves of the context tree. Finally, the feasibility of this generic approach is demonstrated with the RUBIS-Sybase pair.
APA, Harvard, Vancouver, ISO, and other styles
17

Jochen, Michael J. "Mobile code integrity through static program analysis, steganography, and dynamic transformation control." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 2008. http://proquest.umi.com/pqdweb?did=1601522871&sid=4&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Mellberg, Linus. "Baseband Processing Using the Julia Language." Thesis, Linköpings universitet, Kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-113284.

Full text
Abstract:
Baseband processing is an important and computationally heavy part of modern mobile cellular systems. These systems use specialized hardware with many digital signal processing cores and hardware accelerators. The algorithms that run on these systems are complex and need to take advantage of this hardware. Developing software for these systems requires domain knowledge about baseband processing and low-level programming on parallel real-time systems. This thesis investigates whether the programming language Julia can be used to implement algorithms for baseband processing in mobile telephony base stations. If a scientific language like Julia can be used to directly implement programs for the special hardware in the base stations, it can reduce lead times and costs. In this thesis an uplink receiver is implemented in Julia. The implementation is written using a domain-specific language, which makes it possible to specify a number of transformations that use the metaprogramming capabilities in Julia to transform the uplink receiver so that it is better suited to execute on the hardware described above. This is achieved by transforming the program so that it consists of functions that can be executed either on single digital signal processing cores or on hardware accelerators. It is concluded that Julia seems suited for prototyping baseband processing algorithms, and that using metaprogramming to transform a baseband processing algorithm to better fit baseband processing hardware is a feasible approach.
APA, Harvard, Vancouver, ISO, and other styles
19

Bartman, Brian M. "SUPPORTING SOFTWARE EXPLORATION WITH A SYNTACTIC AWARE SOURCE CODE QUERY LANGUAGE." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1500967681232291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

De, Souza Santos Gustavo Jansen. "Assessing and improving code transformations to support software evolution." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10034/document.

Full text
Abstract:
In software development, change is the only constant. Software systems sometimes evolve in a substantial way and, during this process, sequences of code transformations (e.g., create a class, then override a method) are systematically performed in the system (e.g., on several classes in the same hierarchy). Due to the repetitive nature of these transformations, some automated support is needed to ensure that these sequences of transformations are consistently applied to the entire system. In this thesis we propose to improve source code transformations to better support developers performing more complex and systematic code transformations. We cover two aspects: (i) automated support to compose and apply sequences of code transformations — we investigate the existence of such sequences in real-world software systems and propose a tool to apply these sequences automatically in the systems we analyzed; (ii) the detection of design violations during a transformation effort — we investigate cases of systematic application of refactoring transformations and propose a tool that recommends additional transformations to fix design violations detected after refactoring. We evaluated the proposed approaches quantitatively and qualitatively in real-world case studies, in some cases with the help of experts on the systems under analysis. The results we obtained demonstrate the usefulness of our approaches.
APA, Harvard, Vancouver, ISO, and other styles
21

Darling, Karen. "Realizing the technical advantages of Star Transformation." [Denver, Colo.] : Regis University, 2010. http://adr.coalliance.org/codr/fez/view/codr:146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Araújo, Vítor Bujés Ubatuba de. "Týr : a dependent type based code transformation for spatial memory safety in LLVM." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/174477.

Full text
Abstract:
The C programming language does not enforce spatial memory safety: it does not ensure that memory accessed through a pointer to an object, such as an array, actually belongs to that object. Rather, the programmer is responsible for keeping track of allocations and bounds information and for ensuring that only valid memory accesses are performed by the program. On the one hand, this provides flexibility: the programmer has full control over the layout of data in memory and over when checks are performed. On the other hand, it is a frequent source of bugs and security vulnerabilities in C programs. A number of techniques have been proposed to provide memory safety in C. Typically, such systems keep their own bounds information and instrument the program to ensure that memory safety is not violated. This has a number of drawbacks, such as changing the memory layout of data structures, thereby breaking binary compatibility with external libraries, and/or increasing memory usage. A different approach is to use dependent types to describe the bounds information already latent in C programs and thus allow the compiler to use that information to enforce spatial memory safety. Although such systems have been proposed before, they are tied specifically to the C programming language. Other languages such as C++ suffer from similar memory safety problems and could therefore benefit from a more language-agnostic approach. This work proposes Týr, a program transformation based on dependent types for ensuring spatial memory safety of C programs at the LLVM IR level. It allows programmers to describe, at the type level, the relationships between pointers and the bounds information already present in C programs. In this way, Týr ensures spatial memory safety by checking the consistent usage of this pre-existing metadata, through run-time checks inserted in the program guided by the dependent type information. By targeting the lower LLVM IR level, Týr aims to be usable as a foundation for spatial memory safety that could easily be extended in the future to other languages that compile to LLVM IR, such as C++ and Objective C. We show that Týr is effective at protecting against spatial memory safety violations, with a reasonably low execution time overhead and nearly zero memory consumption overhead, thus achieving performance competitive with other systems for spatial memory safety in a more language-agnostic way.
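
Týr itself works at the LLVM IR level and is not reproduced here; purely as an illustration of the underlying idea — keeping bounds metadata next to a pointer and checking every access at run time — the Python sketch below defines a hypothetical checked-pointer view.

```python
class CheckedPtr:
    """Pointer-like view that carries its own bounds, in the spirit of
    pairing a pointer with base/length metadata and checking every access
    at run time (here in Python rather than at the LLVM IR level)."""
    def __init__(self, buffer, base, length):
        self.buffer, self.base, self.length = buffer, base, length

    def load(self, offset):
        if not 0 <= offset < self.length:     # the inserted run-time check
            raise IndexError(f"out-of-bounds access at offset {offset}")
        return self.buffer[self.base + offset]

buf = list(range(10))
p = CheckedPtr(buf, base=2, length=4)   # view over buf[2:6]
print(p.load(3))                        # ok: 5
try:
    p.load(7)                           # would reach buf[9]; trapped instead
except IndexError as err:
    print("trapped:", err)
```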
APA, Harvard, Vancouver, ISO, and other styles
23

Damouche, Nasrine. "Improving the Numerical Accuracy of Floating-Point Programs with Automatic Code Transformation Methods." Thesis, Perpignan, 2016. http://www.theses.fr/2016PERP0032/document.

Full text
Abstract:
Critical software based on floating-point arithmetic requires a rigorous verification and validation process to improve our confidence in its reliability and safety. Unfortunately, available techniques for this task often provide overestimates of the round-off errors; Ariane 5 and the Patriot missile are well-known examples of disasters caused by computational errors. In recent years, several techniques have been proposed for transforming arithmetic expressions in order to improve their numerical accuracy and, in this work, we go one step further by automatically transforming larger pieces of code containing assignments, control structures and functions. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly across several iterations of a loop. These larger expressions are then re-parenthesized to find the evaluation order that best improves the numerical accuracy of the program results. We use static analysis by abstract interpretation to over-approximate the round-off errors in programs and during the transformation of expressions. A tool has been implemented, and experimental results are presented on classical numerical algorithms and on programs from embedded systems.
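
As a minimal, self-contained illustration of why re-parenthesizing can change numerical accuracy (not an example from the thesis), the snippet below evaluates the same mathematical expression under two associations in double precision.

```python
big, small = 2.0 ** 53, 1.0

# The same mathematical expression, two different parenthesisations:
print((big + small) - big)   # 0.0 -- the 1.0 is absorbed by rounding
print((big - big) + small)   # 1.0 -- re-association preserves it
```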
APA, Harvard, Vancouver, ISO, and other styles
24

Karth, Vanja. "Transforming the South African magistracy : how far have we come?" Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/3788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Carvajal-Del, Mar Zunilda. "La réforme de la procédure pénale chilienne : le principe du contradictoire, pivot d’une transformation démocratique." Thesis, Paris 10, 2013. http://www.theses.fr/2013PA100123.

Full text
Abstract:
In 2000, Chile promulgated a new Code of criminal procedure that deeply shook the foundations of the previous procedure. The reform was based on the idea of a complete break with the former legislation by making criminal procedure revolve around the key concept of adversarial debate. The development of this principle was made possible by an innovative distribution of roles among those involved in the trial, notably the reappearance of the Public Prosecutor, who recovered the function of criminal prosecution that had until then been handled by the judge. In order to affect the whole trial, the adversarial principle carries out its effects at every stage of the proceedings and leads to a ruling that grounds its legitimacy in the active participation of the parties. Consequently, the law of evidence was modified and the legal hierarchy of evidence gave way to the principle of free assessment of evidence. Yet the adversarial debate can only develop fully and be effective through the setting up of specific mechanisms. A reworking of the status of the actors of the criminal proceedings was therefore designed to prevent breaches of the adversarial principle; possible breaches by the judge are prevented by the obligation to give grounds for rulings, as well as by the modification of the judge's status. Finally, the judicial remedies were modified and rethought in order to ensure the effectiveness of the adversarial principle. Through this reform, Chile carried out a real legal revolution, thereby completing its transition to democracy.
APA, Harvard, Vancouver, ISO, and other styles
26

Yoshida, Toshihiro M. B. A. Massachusetts Institute of Technology. "The transformation of the Japanese commercial code and its impact on the Japanese economy." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39535.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management, 2007. Includes bibliographical references (leaves 96-100). The legal system is an essential basis of the society and economy of every country, and it changes continually in accordance with national social and economic situations. The Japanese Commercial Code is no exception. One of the most significant and fundamental Japanese laws, it was enacted in 1899 as part of the modernization of Japanese society as the country moved away from a policy of seclusion followed by the old feudal government. The new Code was strongly affected by corporate laws in place at the time in Germany and France. However, as the balance of power in the world economy changed over the ensuing years, the Japanese Commercial Code and other corporate laws were influenced by American laws, and the nature of the Commercial Code was slowly transformed, especially following the Second World War and after the collapse of the so-called "bubble economy" in Japan. In this thesis, I discuss the history of fundamental Japanese laws, the steps and players in the legislative process, and present details about the introduction of the share exchange system, which stimulated many more mergers and acquisitions in Japan. Then I analyze the economic impact of new legal procedures for M&As, including the share exchange system, and identify possible directions for Japanese corporate laws in the future. By Toshihiro Yoshida. M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
27

Blanchard, Allan. "Aide à la vérification de programmes concurrents par transformation de code et de spécifications." Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2073/document.

Full text
Abstract:
Formal verification of concurrent programs is a hard task. Although different techniques exist to perform it, very few are actually applied to programs written in realistic programming languages. In contrast, formal verification techniques for sequential programs have been used successfully for several years and make it possible to reach high degrees of confidence in our systems. As an alternative to analyses dedicated to concurrent programs, this thesis proposes a method that transforms a concurrent program into a sequential one, so that it can be analyzed by tools dedicated to sequential programs. This work takes place within FRAMA-C, an analysis framework for C code specified with the ACSL language. The different analyses provided by FRAMA-C are plugins to the framework, which to date are mostly dedicated to sequential programs. We apply the proposed method manually to the verification of concurrent code taken from a hypervisor. We then automate the method in a new FRAMA-C plugin that automatically produces, from a specified concurrent program, an equivalent specified sequential program. We present the basis of a formalization of the method, with the objective of proving its validity. This validity holds only for the class of sequentially consistent programs. We therefore also propose a prototype constraint solver for weak memory models, able to determine whether a program belongs to this class, depending on the targeted hardware memory model.
APA, Harvard, Vancouver, ISO, and other styles
28

SEMITI, ANI-JULES. "Education, acculturation et transformation des structures sociales en cote d'ivoire." Paris 7, 1986. http://www.theses.fr/1986PA070011.

Full text
Abstract:
Since independence in 1960, Côte d'Ivoire has not managed to break away from the educational system inherited from colonization, the system in which today's political officials were all trained. Their attitude stems from a slavish mimicry whose consequences are harmful to the current educational system. This situation led us to reflect more deeply on this system, if it is really to be changed. In this respect, we present a very critical analysis starting from the different cycles of education. We also conducted a questionnaire survey to grasp how Ivorians experience the cultural and economic domination of which they are presently victims. Following our study, we reached the following conclusions: the Ivorian educational system only sustains the gradual westernization of Ivorians and performs a genuine function of deculturation; its whole current objective is to prepare young Ivorians to imitate imported values and to let go of local ones. Under these conditions, what should be done to remedy these negative aspects? Effective control of the whole system by Ivorian education officials; suppression, within the educational system, of all arbitrary hierarchies that merely perpetuate a certain mandarinate; popular education and functional literacy; objective information of the masses and a break with the bourgeoisie.
APA, Harvard, Vancouver, ISO, and other styles
29

Roychoudhury, Suman. "Genaweave a generic aspect weaver framework based on model-driven program transformation /." Birmingham, Ala. : University of Alabama at Birmingham, 2008. https://www.mhsl.uab.edu/dt/2008p/roychoudhury.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Alabama at Birmingham, 2008.<br>Additional advisors: Purushotham Bangalore, Barrett Bryant, Marjan Mernik, Anthony Skjellum, Randy Smith. Description based on contents viewed Oct. 8, 2008; title from PDF t.p. Includes bibliographical references (p. 161-173).
APA, Harvard, Vancouver, ISO, and other styles
30

Swamy, Sneha. "Transformation of Object-Oriented Associations and Embedded References to Them." Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1218692829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Calnan, III Paul W. "EXTRACT: Extensible Transformation and Compiler Technology." Digital WPI, 2003. https://digitalcommons.wpi.edu/etd-theses/484.

Full text
Abstract:
Code transformation is widely used in programming. Most developers are familiar with using a preprocessor to perform syntactic transformations (symbol substitution and macro expansion). However, it is often necessary to perform more complex transformations using semantic information contained in the source code. In this thesis, we developed EXTRACT, a general-purpose code transformation language. Using EXTRACT, it is possible to specify, in a modular and extensible manner, a variety of transformations on Java code such as insertion, removal, and restructuring. In support of this, we also developed JPath, a path language for identifying portions of Java source code. Combined, these two technologies make it possible to identify source code that is to be transformed and then specify how that code is to be transformed. We evaluate our technology using three case studies: a type name qualifier which transforms Java class names into fully-qualified class names; a contract checker which enforces pre- and post-conditions across behavioral subtypes; and a code obfuscator which mangles the names of a class's methods and fields such that they cannot be understood by a human, without breaking the semantic content of the class.
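EXTRACT and JPath are Java-oriented languages whose concrete syntax is not given in the abstract, so the sketch below only mimics their general pattern - locate program elements with a query, then rewrite them - using Python's standard ast module on Python source; the name table is a made-up stand-in for the type name qualifier case study.

```python
import ast

# Hypothetical mapping from simple names to fully-qualified names.
QUALIFIED = {"ArrayList": "java.util.ArrayList"}

class NameQualifier(ast.NodeTransformer):
    """Stand-in for a JPath-style query plus transformation: find bare names
    and replace them with fully-qualified attribute chains."""
    def visit_Name(self, node):
        if node.id not in QUALIFIED:
            return node
        parts = QUALIFIED[node.id].split(".")
        new = ast.Name(id=parts[0], ctx=ast.Load())
        for part in parts[1:]:
            new = ast.Attribute(value=new, attr=part, ctx=ast.Load())
        return ast.copy_location(new, node)

tree = NameQualifier().visit(ast.parse("items = ArrayList()"))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # items = java.util.ArrayList()
```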
APA, Harvard, Vancouver, ISO, and other styles
32

Kiepas, Patryk. "Analyses de performances et transformations de code pour les applications MATLAB." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM063.

Full text
Abstract:
MATLAB est un environnement informatique doté d'un langage de programmation simple et d'une vaste bibliothèque de fonctions couramment utilisées en science et ingénierie (CSE) pour le prototypage rapide. Cependant, certaines caractéristiques de son environnement, comme son langage dynamique ou son style de programmation interactif, affectent la rapidité d'exécution des programmes. Les approches actuelles d'amélioration des programmes MATLAB traduisent le code dans des langages statiques plus rapides comme C ou Fortran, ou bien appliquent systématiquement des transformations de code au programme MATLAB sans considérer leur impact sur les performances. Dans cette thèse, nous comblons cette lacune en développant des techniques d'analyse et de transformation de code des programmes MATLAB afin d'augmenter leur performance. Plus précisément, nous analysons et modélisons le comportement d'un environnement MATLAB black-box uniquement en mesurant l'exécution caractéristique des programmes sur CPU. À partir des données obtenues, nous formalisons un modèle statique qui prédit le type et l'ordonnancement des instructions programmées lors de l'exécution par le compilateur Just-In-Time (JIT). Ce modèle nous permet de proposer plusieurs transformations de code qui améliorent les performances des programmes MATLAB en influençant la façon dont le compilateur JIT génère le code machine. Les résultats obtenus démontrent les avantages pratiques de la méthodologie présentée<br>MATLAB is a computing environment with an easy programming language and a vast library of functions commonly used in Computational Science and Engineering (CSE) for fast prototyping. However, some features of its environment, such as its dynamic language or interactive style of programming, affect how fast the programs can execute. Current approaches to improving MATLAB programs either translate the code to faster static languages like C or Fortran, or apply code transformations to MATLAB code systematically without considering their impact on the performance. In this thesis, we fill this gap by developing techniques for the analysis and code transformation of MATLAB programs in order to improve their performance. More precisely, we analyse and model the behaviour of the black-box MATLAB environment by measuring the execution characteristics of programs on the CPU. From the resulting data, we formalise a static model which predicts the type and order of instructions scheduled by the Just-In-Time (JIT) compiler. This model allows us to propose several code transformations which increase the performance of MATLAB programs by influencing how the JIT compiler generates the machine code. The obtained results demonstrate the practical benefits of the presented methodology
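The thesis treats MATLAB as a black box and builds its model from runtime measurements; the Python snippet below is only a toy illustration of that measurement-driven way of choosing between semantically equivalent variants of a kernel. The kernels and parameters are invented, and MATLAB's JIT is not modelled here.

```python
import timeit

# Two semantically equivalent variants of the same toy kernel.
def loop_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def builtin_sum(n):
    return sum(i * i for i in range(n))

def pick_fastest(variants, n=100_000, repeat=5):
    """Time each variant and return the fastest one, as a transformation
    selector would when guided purely by measured performance."""
    timings = {name: min(timeit.repeat(lambda: fn(n), number=10, repeat=repeat))
               for name, fn in variants.items()}
    best = min(timings, key=timings.get)
    return best, timings

print(pick_fastest({"explicit loop": loop_sum, "builtin sum": builtin_sum}))
```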
APA, Harvard, Vancouver, ISO, and other styles
33

Mattson, Stéphanie, and Jenny Nordin. "Uppförandekoder : En studie över hur en uppförandekod transformeras inom en organisation och dess roll i att skapa en etisk företagskultur." Thesis, Karlstads universitet, Handelshögskolan, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-55372.

Full text
Abstract:
Efter flertalet skandaler och kriser samt en ökad kunskap om hur jorden påverkas av individers agerande behöver också företag börja ta sitt ansvar. Det utvidgade ansvaret kallas CSR (Corporate Social Responsibility) eller ansvarsfullt företagande och det arbetet kräver självregleringar exempelvis med hjälp av uppförandekoder. Livsmedelsföretag är en bransch där allt fler företag satsar på ansvarsfullt företagande och använder uppförandekoder. Kodernas innehåll och effekt utgör två inriktningar inom forskningen som har studerats tidigare med ett blandat och motsägande resultat angående dess effekt på det etiska beteendet. Dock återfinns relativt få studier om den tredje inriktningen som berör hur koderna transformeras i organisationer och hur kontextuella faktorer kan inverka i processen. Syftet med studien är således att få en ökad förståelse för vad som kan påverka uppförandekodernas effekt på beteende genom att studera en intern transformativ process, det vill säga hur en kod implementeras, används och kommuniceras. Vidare syftar studien också till att skapa förståelse för varför koden fungerar som uppvisat och belysa dess roll i att skapa en etisk företagskultur. Studien har genomförts med en kvalitativ forskningsstrategi för att på djupet skapa förståelse för fenomenet uppförandekoder. En fallstudiedesign utgjordes av en livsmedelskoncern med fokus på två olika butikskedjor, bolag A och B. Huvudsakligen har intervjuer genomförts och kompletterats med information från årsredovisningen, hållbarhetsredovisningen och hemsidan. Resultatet av studien visar att arbetet med uppförandekoden som riktar sig till leverantörerna, påverkas av interna och externa kontextuella faktorer. Främst används koden av personer i inköpsprocessen. På butiksnivå är det dock ingen som känner till uppförandekoden men det framkommer ett eventuellt behov av att även de får mer kunskap om hur arbetet med leverantörerna sker. Dels för att möta kundernas frågor i butik men också för att känna stolthet och trygghet vilket kan stärka deras identitet. Därmed är uppförandekodens roll i att skapa en etisk företagskultur större på central nivå jämfört med butiksnivå, men för att skapa ett genuint engagemang anses ledarna vara viktigast. På butiksnivå innehar det andra dokumentet etiska riktlinjer, med krav inåt i organisationen, en möjlig indirekt påverkan via ledarna eftersom ca 1000 tjänstemän årligen signerar det. Det innebär dock att långt ifrån samtliga medarbetare kommer i direkt kontakt med någon form av uppförandekod. Studien visar därmed att etiska kärnvärden och etiskt ledarskap i kombination utgör ett värderingsstyrt ledarskap som innehar en större roll i att skapa en etisk företagskultur på butiksnivå.<br>After multiple scandals, crises as well as increased knowledge in how the earth is affected by actions of individuals, there is a need for companies to start owning their responsibility. The extended responsibility named CSR (Corporate Social Responsibility) or responsible business, demands work in form of self-regulations with help from codes of conduct. Grocery retailers are in an industry where more companies aim to be responsible businesses and uses codes of conduct. Regarding the research field of codes, the contents of the codes and their effects make two directions that have been researched earlier with various and inconsistent results regarding the codes’ effect on behaviour. 
However, little research has been found regarding the third direction, which involves research in how the codes transform in organizations and contextual factors can affect this process. The aim with the study is therefore to receive a greater understanding of what might affect codes’ effect on behaviour, by studying an internal transformation process, that is, how a code gets implemented, used and communicated in the organization. Furthermore, the study aims to explain why the codes work as shown and tries to enlighten its role in creating an ethical corporate culture. The study has been conducted by a qualitative research strategy in order to create a greater understanding for the phenomena that is codes of conduct. A case study design was made with focus on a food group and two of its store chains called firm A and B. Primarily interviews have been completed and complemented by information from the annual report, sustainability report and the website. Results of the study show that the work with the code of conduct directed towards suppliers are being affected by internal and external contextual factors. People involved in the purchase process mainly use the code. At store level however, no one is greatly aware of the code of conduct. Nevertheless, it is apparent that there is a need for more knowledge regarding how the work with suppliers takes place. Partly because there is need to be able to answer customer’s questions, but also because the employees should feel pride and security, which can strengthen their identity. Consequently, the role of the code of conduct in creating an ethical corporate culture is larger on a central level rather than on store level, but in order to create genuine engagement leaders are the most important. On store level, another document called ethical guidelines; aimed inwards the organization, has a possible indirect effect through store managers since 1000 officials annually sign it. This implies that far from all employees come in direct contact with any kind of code of conduct. The study rather shows that the combination of ethical core values and ethical leadership creates a values-based leadership that holds a greater role in creating an ethical corporate culture, on store level.
APA, Harvard, Vancouver, ISO, and other styles
34

Hamieh, Soumaya. "Transformation des alcools sur zéolithes protoniques : "rôle paradoxal du coke." Thesis, Poitiers, 2013. http://www.theses.fr/2013POIT2306/document.

Full text
Abstract:
L'éthanol est converti, à 350°C sous 30 bar et sur des zéolithes protoniques, en un mélange de paraffines légères et d'aromatiques ; produits incorporables dans le pool essence. Cependant, la transformation de EtOH sur zéolithes acides conduit à la formation du coke. Des techniques physiques avancées, en particulier les techniques MALDI et LDI-TOF MS, couplées à la méthode d'analyse qui consiste à récupérer dans un solvant les molécules carbonées après dissolution de la zéolithe dans HF, contribuent à caractériser finement le coke. Sa composition dépend du catalyseur : sur HBEA(11), zéolithe à larges pores, 17 familles ont été détectées contre 4 sur HZSM-5(40) de taille de pore intermédiaire. Sur cette dernière, le coke, composé de polyalkybenzènes / naphtalènes / phénalènes et pyrènes, est localisé à l'intersection des canaux et a une toxicité vis-à-vis des sites acides de Brønsted de 1. En dépit d'un empoisonnement total, cette zéolithe est toujours capable de convertir EtOH, comme MeOH, en hydrocarbures et qui plus est avec les mêmes sélectivités en produits. La transformation de ces deux alcools ne s'explique pas par un mécanisme classique de catalyse acide, mais par un mécanisme concerté radicalaire-acide. La présence d'un inhibiteur de radicaux dans la charge réactionnelle, l'hydroquinone, provoque une désactivation immédiate et une diminution de la concentration des radicaux. La transformation de EtOH et MeOH passe par un intermédiaire réactionnel commun, le carbène :CH2, dont l'oligomérisation radicalaire conduit à la formation d'oléfines. Ces oléfines légères (n-O3-n-O5) sont très réactives et se transforment par catalyse acide (oligomérisation / cyclisation / t<br>Ethanol is converted into light paraffins and aromatics mixture at 350°C under 30 bar over protonic zeolites. These products can be incorporated in the gasoline pool. Nevertheless, EtOH transformation over acid zeolites leads to the formation of the coke. Advanced physical techniques, in particular MALDI and LDI-TOF MS, were coupled to the coke analysis method. This method consists of the recovery of the carbonaceous molecules in a solvent after zeolite dissolution in hydrofluoric acid solution. This coupling allows characterizing the coke through an extensive way. The coke composition depends on the catalyst morphology: over HBEA(11) zeolite of large pores, 17 families were detected while 4 over HZSM-5(40) of intermediate pore size. Over this latter, the coke, composed of polyalkylbenzenes/naphtalenes/phenalenes and pyrenes, is located in the channels intersections and has toxicity of 1 towards Brønsted acid sites. In spite of a total poisoning, HZSM-5 zeolite is always able to convert EtOH, like MeOH, into hydrocarbons with the same products selectivity. The transformation of the two alcohols cannot be explained by a classical mechanism of acid catalysis, but by a cooperative radical-acid mechanism. The presence of a radical inhibitor in the feed, the hydroquinone, causes an immediate deactivation and a decrease in the concentration of radicals. The transformation of EtOH and MeOH passed by the common reaction intermediate, the :CH2 carbene, which its radical oligomerization leads to the formation of olefins. Olefins (n-O3-n-O5) are very active and can be transformed through acid catalysis (oligomerization/cyclisation/Hydrogen transfer) into aromatics or undergo isomer
APA, Harvard, Vancouver, ISO, and other styles
35

Garcia, Joe Klingel John Mull John Summers Dennis Taylor Vickie. "Army transformation leadership a study of core competencies for civilian leadership /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FGarcia.pdf.

Full text
Abstract:
Thesis (M.S. in Program Management)--Naval Postgraduate School, September 2006.<br>Thesis Advisor(s): Cary Simon. "September 2006." Includes bibliographical references (p. 117-119). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
36

Taylor, Vickie. "Army transformation leadership a study of core competencies for civilian leadership." Thesis, Monterey, California. Naval Postgraduate School, 2006. http://hdl.handle.net/10945/2586.

Full text
Abstract:
The U. S. Army is undergoing a substantial departure from its historical underpinnings to adapt and succeed in the emerging arena of asymmetric warfare-i.e., migrating from a traditional 'heavy' approach to an agile and responsive capability. Changes are not limited to equipment and doctrine, but are pervasive throughout all aspects of infrastructure and processes, including leadership. Army Transformation is outlined by the Department of Defense (DoD) April 2003 Transformation Planning Guidance and the subsequent 2004 Army Transformation Roadmap. One tenet of leadership transformation includes increased capability to develop and sustain innovation. This paper analyzes civilian leadership competencies and capabilities related to the current Army training environment and identifies leadership competencies and capabilities deemed crucial for civilian leadership transformation. A researchers-developed survey and interviews revealed noteworthy conclusions, including the following: (1) Civilian and military personnel share a common view of core leadership competencies required for transformative change; (2) Diversity of leadership experiences was widely regarded as a core leadership competency and is generally considered inadequate for civilian leadership; and (3) Cultural differences between civilian and military leadership are narrowing, but momentum must be nourished and encouraged to affect positive and permanent leadership improvements for Army civilians.
APA, Harvard, Vancouver, ISO, and other styles
37

Didone', Susanna <1996&gt. "Digital Transformation: il CRM come abilitatore della trasformazione digitale in azienda." Master's Degree Thesis, Università Ca' Foscari Venezia, 2022. http://hdl.handle.net/10579/20713.

Full text
Abstract:
The market, and companies with it, has changed the way products and services are presented and offered. This evolution has led not only to a broadening of the activities that characterize it but also, in the long run, to an ever-growing interest in marketing strategies. Moreover, the fragmentation and differentiation of demand, even with respect to satisfying one and the same need, have caused another change: the individual customer has become the company's main interlocutor, and companies have reacted by personalizing communication, promotion and support. What is under way is a genuine revolution, and there is only one rule for companies that want to remain competitive and retain their customers: stand out. This is why CRM was born. CRM (Customer Relationship Management) introduces a new type of software that helps companies manage, process and optimize their interactions with customers and all the related data obtained. The main objective of CRM is therefore to improve customer satisfaction and engagement, increasing business performance and revenue. But CRM does not merely mean applying a piece of software; it is rather a new technological footprint for the company, a new way of thinking which, through a more digitally oriented vision, leads to a set of practices, implementation strategies and integration processes that change the company's modus operandi, in both B2C and B2B contexts. To reach the strategic objectives of CRM, an integrated approach is needed, one that makes it possible to identify and coordinate the actions concerning the customer life cycle and that takes into account the points of interaction with the company. For this reason, CRM can be considered an integrated strategy resting on three pillars: sales, marketing and customer service. Whereas corporate IT systems were once so heterogeneous that they did not allow an integration capable of presenting a single interface to the customer, today CRM has enabled very different systems to communicate with each other and to offer the customer a single way of communicating with the company. The purpose of this thesis is to highlight the benefits deriving from an optimal application of CRM, analysing it in detail, with the aim of providing a practical guide for companies that intend to undertake this digitalization path.
APA, Harvard, Vancouver, ISO, and other styles
38

Mihindou-Koumba, Pierre-Claver. "Transformation du méthylcyclohexane sur zeolithes à taille de pores intermediaire." Poitiers, 2007. http://www.theses.fr/2007POIT2280.

Full text
Abstract:
Ce travail se situe dans le cadre du craquage catalytique (FCC). L’objectif de cette Thèse a été d’étudier, à 350° C, plus particulièrement la transformation du méthylcyclohexane (molécule modèle représentative des naphtènes) sur zéolithes à taille de pores intermédiaire : MFI, EUO et MWW (Si/Al = 15) afin d’appréhender l’influence de la structure poreuse (présence de canaux, cages, coupes externes) et de l’acidité de la zéolithe. La zéolithe EUO, bien que possédant moins de sites protoniques et des ouvertures de pores plus étroites, est plus active que la MFI. Cette différence d’activité est certainement due à la taille des cristallites de la MFI. Le craquage est la réaction principale sur les deux zéolithes, mais la formation de coke est plus importante sur EUO, ce qui entraîne une plus forte désactivation. La zéolithe MWW est aussi active que la zéolithe MFI, mais la sélectivité en craquage est plus faible. Ceci est certainement dû à la présence de supercages et de coupes externes sur MWW, qui facilitent la diffusion des isomères du méthylcyclohexane et défavorisent légèrement leur craquage. Par contre, un changement très net de sélectivité accompagne la désactivation de la zéolithe. La sélectivité en isomérisation augmente au cours du temps de réaction. L’explication de ce phénomène est liée à la structure poreuse de cette zéolithe, qui possède deux types de systèmes poreux internes qui se désactivent au cours du temps par formation de coke, et un système externe (coupes externes) qui ne se désactive pas et qui défavorise le craquage par une désorption rapide des isomères formés<br>This work refers to the catalytic cracking (FCC). The objective was to study, at 350° C, the transformation of methylcyclohexane, a model molecule of naphtene compounds over various intermediate pore size zeolites: MFI, EUO et MWW (Si/Al = 15) in order to determine the effect of the pore structure and acidity of zeolites. EUO zeolite is more active than MFI, as EUO possess a low amount of protonic acid sites and narrowest pores. This result can be attributed to a large crystallite size of MFI. Cracking is the predominant reaction of both zeolites, but coke formation is higher over EUO. MWW zeolite is also active than MFI while cracking selectivity is lower. This is certainly due to the presence of supercages and of external cups on MWW which make easier the diffusion of methylcyclohexane isomers and disadvantage this cracking. However, change in selectivity was observed during zeolite deactivation, isomers formation increases with time on stream. The explanation can be related to the particular pore structure of this zeolite. The internal porous system deactivates by coke formation increases the internal cups don’t favorise the retention of coke components. This reaction occurs preferentially in the internal cups when the methylcyclohexane isomers can be rapidly desorbed before their cracking
APA, Harvard, Vancouver, ISO, and other styles
39

Maqbool, Shahbaz. "Transformation of a core scenario model and activity diagrams into Petri nets." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26971.

Full text
Abstract:
For any software development project it is important to capture the requirements in a clear and concise manner. Standardization efforts, such as the development of version 2 of the Unified Modeling Language (UML) by the Object Management Group, and the development of the User Requirements Notation (URN) by the International Telecommunication Union, propose visual languages for capturing requirements in terms of scenario notations. Activity Diagrams and Use Case Maps (UCM) are examples of such scenario languages in UML and in URN, respectively. The developers of these languages have concentrated on the visual notations and only a small amount of effort has been spent in defining precise and formal semantics for these languages. Core Scenario Model (CSM) is a step towards defining formal semantics to the scenario based languages like UCM, Activity Diagrams and Interaction Diagrams. It includes common scenario information found in the UCM notation and in UML 2.0 Activity Diagrams and Interaction Diagrams, and has been developed as an intermediate language before transformation into formal languages like Petri Nets, Layered Queuing Network etc. The thesis proposes a transformation method that takes UML Activity Diagrams as input and generates equivalent Petri Nets as output. The transformation approach takes into account the concurrency characteristics of Activity Diagrams. The thesis also proposes a method for transforming a Core Scenario Model (CSM) representation into equivalent Petri Nets. A Java tool was designed and built for realizing the proposed transformation from CSM to Petri Nets. The application takes as input XML files produced by an existing tool, which contain CSM in XML format. The Petri Nets produced by our transformation is in XML format. It can be used for validating the original models by simulation. The results from this analysis can be traced back to improve design decisions.
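As a toy illustration of the kind of mapping involved (not the thesis tool, which consumes CSM in XML), the Python sketch below turns a linear sequence of activities into a Petri net in which every activity becomes a transition bracketed by places; the activity names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    places: list = field(default_factory=list)
    transitions: list = field(default_factory=list)
    arcs: list = field(default_factory=list)  # (source, target) pairs

def sequence_to_petri_net(activities):
    """Map a linear activity sequence to a Petri net: one transition per
    activity, with a place before and after each transition."""
    net = PetriNet(places=["p0"])
    prev = "p0"
    for i, activity in enumerate(activities, start=1):
        t, p = f"t_{activity}", f"p{i}"
        net.transitions.append(t)
        net.places.append(p)
        net.arcs += [(prev, t), (t, p)]
        prev = p
    return net

net = sequence_to_petri_net(["receive_order", "check_stock", "ship"])
print(net.places, net.transitions, net.arcs, sep="\n")
```

Forks, joins and decisions each need their own place/transition pattern, which is where most of the real transformation and concurrency handling lies.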
APA, Harvard, Vancouver, ISO, and other styles
40

Ziemann, Paul. "An integrated operational semantics for a UML core based on graph transformation." Berlin Logos-Verl, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2774091&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ziemann, Paul. "An integrated operational semantics for a UML core based on graph transformation /." Berlin : Logos-Verl, 2006. http://deposit.ddb.de/cgi-bin/dokserv?id=2774091&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Kang, Min Jay. "Urban transformation and adaptation in Bangka, Taipei : marginalization of a historical core /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/10798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Škultinas, Tomas. "MDA panaudojimo programinės įrangos kūrimui tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2005. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2005~D_20050524_163850-89727.

Full text
Abstract:
The IT industry is constantly looking for ways to improve software development productivity as well as the quality and longevity of the software it creates. The OMG announced Model Driven Architecture (MDA) as its strategic direction. It is a software development methodology that provides a new viewpoint on the software development process. The modeling of the problem domain and model transformation are the key elements of the MDA architecture, and they are analyzed in this work using OMG specifications and other resources. The purpose of this work is to evaluate the benefits of an MDA framework in the software development process. A new MDA framework is developed according to the results of the MDA architecture analysis. Experimental usage of the new MDA framework concentrates on the productivity of the software development process, the automation of repeated tasks, and the required skill set of application developers.
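A central MDA step is transforming a platform-independent model (PIM) into a platform-specific one (PSM); the sketch below shows that flavour of model transformation on a made-up class description mapped to SQL, and is unrelated to the specific framework developed in the thesis.

```python
# Platform-independent model: a class with typed attributes (invented example).
pim = {"name": "Customer",
       "attributes": [("id", "Integer"), ("name", "String"), ("active", "Boolean")]}

# Platform-specific mapping rules: PIM types to SQL column types.
TYPE_MAP = {"Integer": "INT", "String": "VARCHAR(255)", "Boolean": "BOOLEAN"}

def pim_to_sql(model):
    """Transform the PIM class into a CREATE TABLE statement (a simple PSM)."""
    columns = ",\n  ".join(f"{name} {TYPE_MAP[ptype]}"
                           for name, ptype in model["attributes"])
    return f"CREATE TABLE {model['name']} (\n  {columns}\n);"

print(pim_to_sql(pim))
```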
APA, Harvard, Vancouver, ISO, and other styles
44

Jelesnianski, Christopher Stanisław. "A Compiler Framework to Support and Exploit Heterogeneous Overlapping-ISA Multiprocessor Platforms." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/78177.

Full text
Abstract:
As the demand for ever increasingly powerful machines continues, new architectures are sought to be the next route of breaking past the brick wall that currently stagnates the performance growth of modern multi-core CPUs. Due to physical limitations, scaling single-core performance any further is no longer possible, giving rise to modern multi-cores. However, the brick wall is now limiting the scaling of general-purpose multi-cores. Heterogeneous-core CPUs have the potential to continue scaling by reducing power consumption through exploitation of specialized and simple cores within the same chip. Heterogeneous-core CPUs join fundamentally different processors each which their own peculiar features, i.e., fast execution time, improved power efficiency, etc; enabling the building of versatile computing systems. To make heterogeneous platforms permeate the computer market, the next hurdle to overcome is the ability to provide a familiar programming model and environment such that developers do not have to focus on platform details. Nevertheless, heterogeneous platforms integrate processors with diverse characteristics and potentially a different Instruction Set Architecture (ISA), which exacerbate the complexity of the software. A brave few have begun to tread down the heterogeneous-ISA path, hoping to prove that this avenue will yield the next generation of super computers. However, many unforeseen obstacles have yet to be discovered. With this new challenge comes the clear need for efficient, developer-friendly, adaptable system software to support the efforts of making heterogeneous-ISA the golden standard for future high-performance and general-purpose computing. To foster rapid development of this technology, it is imperative to put the proper tools into the hands of developers, such as application and architecture profiling engines, in order to realize the best heterogeneous-ISA platform possible with available technology. In addition, it would be in the best interest to create tools to be as "timeless" as possible to expose fundamental concepts industry could benefit from and adopt in future designs. We demonstrate the feasibility of a compiler framework and runtime for an existing heterogeneous-ISA operating system (Popcorn Linux) for automatically scheduling compute blocks within an application on a given heterogeneous-ISA high-performance platform (in our case a platform built with Intel Xeon - Xeon Phi). With the introduced Profiler, Partitioner, and Runtime support, we prove to be able to automatically exploit the heterogeneity in an overlapping-ISA platform, being faster than native execution and other parallelism programming models. Empirically evaluating our compiler framework, we show that application execution on Popcorn Linux can be up to 52% faster than the most performant native execution for Xeon or Xeon Phi. Using our compiler framework relieves the developer from manual scheduling and porting of applications, requiring only a single profiling run per application.<br>Master of Science
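The abstract mentions a Profiler, Partitioner and Runtime; the sketch below is only a minimal, invented illustration of profile-guided partitioning - assigning each compute block to the device where its measured time plus a fixed migration penalty is lowest - and does not use Popcorn Linux or its APIs (all numbers and names are made up).

```python
# Profiled per-block runtimes in milliseconds (invented numbers) and a fixed
# penalty charged whenever execution migrates between the two ISAs.
PROFILE = {
    "init":    {"xeon": 4.0,  "xeon_phi": 9.0},
    "stencil": {"xeon": 50.0, "xeon_phi": 18.0},
    "reduce":  {"xeon": 6.0,  "xeon_phi": 7.5},
}
MIGRATION_COST = 3.0

def schedule(blocks, profile, migration_cost):
    """Greedy partitioner: run each block on the device minimising its own
    runtime plus the migration penalty from the previously chosen device."""
    plan, prev = [], None
    for block in blocks:
        costs = {dev: t + (migration_cost if prev and dev != prev else 0.0)
                 for dev, t in profile[block].items()}
        best = min(costs, key=costs.get)
        plan.append((block, best, costs[best]))
        prev = best
    return plan

for block, device, cost in schedule(["init", "stencil", "reduce"], PROFILE, MIGRATION_COST):
    print(f"{block:8s} -> {device:8s} ({cost:.1f} ms)")
```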
APA, Harvard, Vancouver, ISO, and other styles
45

ANGELI, LAURENT. "Factorisation de sous-programmes. Integration dans un outil de transformation de codes fortran77." Nice, 1996. http://www.theses.fr/1996NICE5036.

Full text
Abstract:
Program maintenance is one of the essential activities during the life of a piece of software. It is also one of the most expensive. Moreover, since it is carried out manually in most cases, it represents a non-negligible risk of introducing errors. Anything that makes it possible to automate this maintenance will therefore reduce both this cost and this risk. We have worked on a tool that simplifies the programs to be maintained by factoring a number of similar program fragments into a single subroutine. We call this tool subroutine factorization. After defining the objectives of our tool, we specify the data structures and the algorithms that allow us to address part of this problem. We also present a prototype of our tool. This implementation, specified for the Fortran77 language, gives good results on simple examples that are representative of the problems to be solved. We also obtain interesting results on portions of real programs. The existence of these real examples shows the usefulness of our tool. Many open problems remain around this subroutine factorization. We believe this first step will help clarify them and, eventually, solve them
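The thesis targets Fortran77 with its own data structures and algorithms; the Python sketch below only illustrates the very first step of such a tool - spotting identical fragments that are candidates for being factored into a single subroutine - on an invented snippet.

```python
from collections import defaultdict

def duplicate_fragments(lines, window=3):
    """Return identical windows of `window` consecutive (stripped) lines and
    the positions where they occur: candidates for subroutine factorization."""
    occurrences = defaultdict(list)
    normalized = [line.strip() for line in lines]
    for i in range(len(normalized) - window + 1):
        occurrences[tuple(normalized[i:i + window])].append(i)
    return {frag: pos for frag, pos in occurrences.items() if len(pos) > 1}

program = [
    "      X = A + B",
    "      Y = X * 2",
    "      CALL LOGV(Y)",
    "      Z = C + D",
    "      X = A + B",
    "      Y = X * 2",
    "      CALL LOGV(Y)",
]
for fragment, positions in duplicate_fragments(program).items():
    print(positions, fragment)
```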
APA, Harvard, Vancouver, ISO, and other styles
46

Cerqueira, Henrique Soares. "Transformation du méthylcyclohexane et du m-xylène sur catalyseurs zéolithiques : formation de coke et désactivation." Poitiers, 2000. http://www.theses.fr/2000POIT2297.

Full text
Abstract:
This work falls within the framework of catalytic cracking (FCC); its central objective is to understand the phenomena related to the conversion of naphthenes - kinetics and mechanisms, deactivation by coke deposition - over catalysts used in the FCC process (HFAU or HMFI zeolites) or likely to be used in it (HBEA zeolite). Indeed, to date very few studies exist on the transformation of these compounds, although they are present in significant amounts in the feeds and products of catalytic cracking. Most of the research concerns the transformation of methylcyclohexane. The conversion of various other hydrocarbons representative of FCC products is also addressed. The study has two complementary parts: understanding the mechanisms of hydrocarbon transformation, including into products not desorbed from the zeolites (coke), which uses kinetic modelling as its main tool; and understanding the modes of coke formation and deactivation, which relies both on the characterization of this complex mixture (by GC/MS) and on the determination by FTIR of its effect on the acidity of the catalysts. The following conclusions can be drawn from this study: - the mode (primary or secondary) of transformation of a reactant into coke can be deduced from the coke profile in a fixed-bed reactor; - large differences in activity, in turnover frequency of the protonic sites and in selectivity are observed in the transformation of methylcyclohexane at 450°C over the three types of zeolites, and are essentially explained by the effect of their pore structure; - large differences are also observed in the composition of the coke, the growth of the coke molecules in the micropores
APA, Harvard, Vancouver, ISO, and other styles
47

Acton, Donald, Kimberly Voll, Steven Wolfman, and Benjamin Yu. "Pedagogical Transformations in the UBC CS Science Education Initiative." ACM, 2009. http://hdl.handle.net/2429/8884.

Full text
Abstract:
The UBC CS Science Education Initiative (CSSEI) has resulted in a number of research projects. New teaching methods and student assessment instruments are introduced to engage student learning and evaluations of their understanding. In this paper, we report four of these recent initiatives and their initial findings.
APA, Harvard, Vancouver, ISO, and other styles
48

Doni, Pracner. "Translation and Transformation of Low Level Programs." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110184&source=NDLTD&language=en.

Full text
Abstract:
This thesis presents an approach for working with low level source code that enables automatic restructuring and raising the abstraction level of the programs. This makes it easier to understand the logic of the program, which in turn reduces the development time. The process in this thesis was designed to be flexible and consists of several independent tools. This makes the process easy to adapt as needed, while at the same time the developed tools can be used for other processes. There are usually two basic steps. First is the translation to WSL language, which has a great number of semantics-preserving program transformations. The second step is the transformations of the translated WSL. Two tools were developed for translation: one that works with a subset of x86 assembly, and another that works with MicroJava bytecode. The result of the translation is a low level program in WSL. The primary goal of this thesis was to fully automate the selection of the transformations. This enables users with no domain knowledge to efficiently use this process as needed. At the same time, the flexibility of the process enables experienced users to adapt it as needed or integrate it into other processes. The automation was achieved with a hill climbing algorithm. Experiments that were run on several types of input programs showed that the results can be excellent. The fitness function used was a built-in metric that gives the “weight” of structures in a program. On input samples that had original high level source codes, the end result metrics of the translated and transformed programs were comparable. On some samples the result was even better than the originals, on some others they were somewhat more complex. When comparing with low level original source code, the end results were always significantly improved.<br>U okviru ove teze se predstavlja pristup radu sa programima niskog nivoa koji omogućava automatsko restrukturiranje i podizanje na više nivoe. Samim tim postaje mnogo lakše razumeti logiku programa što smanjuje vreme razvoja. Proces je dizajniran tako da bude fleksibilan i sastoji se od više nezavisnih alata. Samim tim je lako menjati proces po potrebi, ali i upotrebiti razvijene alate u drugim procesima. Tipično se mogu razlikovati dva glavna koraka. Prvi je prevođenje u jezik WSL, za koji postoji veliki broj transformacija programa koje očuvavaju semantiku. Drugi su transformacije u samom WSL-u. Za potrebe prevođenja su razvijena dva alata, jedan koji radi sa podskupom x86 asemblera i drugi koji radi sa MikroJava bajtkôdom. Rezultat prevođenja je program niskog nivoa u WSL jeziku. Primarni cilj ovog istraživanja je bila potpuna automatizacija odabira transformacija, tako da i korisnici bez iskustva u radu sa sistemom mogu efikasno da primene ovaj proces za svoje potrebe. Sa druge strane zbog fleksibilnosti procesa, iskusni korisnici mogu lako da ga prošire ili da ga integrišu u neki drugi već postojeći proces. Automatizacija je postignuta pretraživanjem usponom (eng. hill climbing). Eksperimenti vršeni na nekoliko tipova ulaznih programa niskog nivoa su pokazali da rezultati mogu biti izuzetni. Za funkciju pogodnosti je korišćena ugrađena metrika koja daje “težinu” struktura u programu. Kod ulaza za koje je originalni izvorni kôd bio dostupan, krajnje metrike najboljih varijanti prevedenih i transformisanih programa su bile na sličnom nivou.
Neki primeri su bolji od originala, dok su drugi bili nešto kompleksniji. Rezultati su uvek pokazivali značajna unapređenja u odnosu na originalni kôd niskog nivoa.
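The hill-climbing selection described above can be pictured with toy stand-ins: in the Python sketch below the "program" is just a nesting-depth profile, the built-in weight metric is mimicked by a quadratic cost, and the two "transformations" are invented placeholders rather than real WSL/FermaT transformations.

```python
def weight(program):
    """Stand-in for the structure-weight metric: deeper nesting costs more."""
    return sum(depth * depth for depth in program)

def flatten(p):  # e.g. replacing a jump-based loop by structured code
    return [max(depth - 1, 1) for depth in p]

def merge(p):    # e.g. merging two adjacent blocks into one
    return p[:-1] if len(p) > 1 else p

TRANSFORMATIONS = [flatten, merge]

def hill_climb(program, max_steps=50):
    """Greedily apply whichever transformation lowers the weight the most,
    stopping when no transformation improves the current program."""
    current = program
    for _ in range(max_steps):
        best = min((t(current) for t in TRANSFORMATIONS), key=weight)
        if weight(best) >= weight(current):
            break
        current = best
    return current

start = [4, 3, 3, 2, 5]
print(weight(start), "->", weight(hill_climb(start)))
```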
APA, Harvard, Vancouver, ISO, and other styles
49

De, Klerk Johannes Christiaan. "The perceptions of the work environment of women in core mining activities / Johannes Christiaan (Ian) de Klerk." Thesis, North-West University, 2012. http://hdl.handle.net/10394/8670.

Full text
Abstract:
Until 1996, all women in South Africa were prohibited, by law, from working underground. With the introduction of the Mining Charter all this changed and companies started hiring women for different positions. The objectives of the study were: to determine the perceptions of the working environment of women in the mining activities, to establish what changes were made to accommodate women in this specific mine and to establish if women can advance in this company. A field study was done at a chrome mine and a random sample of 100 employees participated. The central research tool utilised was a questionnaire using a Likert-type 5 rating scale. The findings were that mining companies will have to work hard on the perception that women are not wanted in the industry, but that a lot has happened since 1996. As expected the study found that there are significant resistance towards women working in the core mining industry. Mines are making changes to accommodate women. Women are receiving a lot of support from management to become part of the mining environment. Different programs are being implemented to develop skills of women and ensure their progression within the mining companies. The study concluded with recommendations as to what can be done to improve the perception of the working environment of women.<br>Thesis (MBA)--North-West University, Potchefstroom Campus, 2013
APA, Harvard, Vancouver, ISO, and other styles
50

Namdar, Kamran. "In Quest of the Globally Good Teacher : Exploring the need, the possibilities, and the main elements of a globally relevant core curriculum for teacher education." Doctoral thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14137.

Full text
Abstract:
This primarily theoretical-philosophical study is aimed at identifying the main principles according to which a globally relevant core curriculum for teacher education could be devised at a critical juncture in human history. In order to do that, a Weberian ideal type of the globally good teacher is outlined. The notion of the globally good teacher refers to a teacher role, with the salient associated principles and action capabilities that, by rational criteria, would be relevant to the developmental challenges and possibilities of humanity as an entity, would be acceptable in any societal context across the globe, and would draw on wisdom and knowledge from a broad range of cultures. Teachers as world makers, implying a teacher role which is based on the most salient task of a teacher being the promotion of societal transformation towards a new cosmopolitan culture, is suggested as the essence of the globally good teacher. Such a role is enacted in three main aspects of an inspiring driving force, a responsive explorer, and a synergizing harmonizer, each manifested in a set of guiding principles and an action repertoire. Though a theoretical construct, the ideal type of the globally good teacher is shown to have been instantiated in the educational practices of teachers and teacher educators, as well as in national and international policy documents. Based on the characterization of the globally good teacher, the main elements for developing a globally relevant core curriculum for teacher education are concluded to be transformativity, normativity, and potentiality. The study closes with a discussion of the strategic possibilities for bringing the ideal type of the globally good teacher to bear upon the discourses and practices of teacher education.
APA, Harvard, Vancouver, ISO, and other styles