Dissertations / Theses on the topic 'Compiler Construction'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 25 dissertations / theses for your research on the topic 'Compiler Construction.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.
Parkes, Michael A. B. "An improved tool for automated compiler construction." Thesis, Aston University, 1989. http://publications.aston.ac.uk/10643/.
Moon, Hae-Kyung. "Compiler construction for a simple Pascal-like language." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897511.
Kilpatrick, P. L. "An application of parallelism to compilation." Thesis, Queen's University Belfast, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373538.
Tinnerholm, John. "An LLVM backend for the Open Modelica Compiler." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154291.
Coleman, Jesse J. "The design, construction, and implementation of an engineering software command processor and macro compiler." Online version of thesis, 1995. http://hdl.handle.net/1850/12219.
Klinghed, Joel, and Kim Jansson. "Incremental Compilation and Dynamic Loading of Functions in OpenModelica." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12329.
Advanced development environments are essential for efficient realization of complex industrial products. Powerful equation-based object-oriented (EOO) languages such as Modelica are successfully used for modeling and virtual prototyping complex physical systems and components. The Modelica language enables engineers to build large, sophisticated and complex models. Modelica environments should scale up and be able to handle these large models. This thesis addresses the scalability of Modelica tools by employing incremental compilation and dynamic loading. The design, implementation and evaluation of this approach is presented. OpenModelica is an open-source Modelica environment developed at PELAB in which we have implemented our strategy for incremental compilation and dynamic loading of functions. We have tested the performance of these strategies in a number of different scenarios in order to see how much of an impact they have on the compilation and execution time.
Our solution incurs an overhead of one or two hash lookups at runtime, since it uses dynamic hash tables instead of static arrays.
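To make the dispatch idea concrete, here is a minimal sketch, assuming a hypothetical shared library ./libfuncs.so that exports a function square: newly compiled functions are loaded at run time and registered in a hash table, so each call costs a hash lookup rather than a fixed array index. The library name, symbol, and signature are illustrative assumptions and are not taken from the OpenModelica implementation.

```python
# Minimal sketch: dynamic loading of separately compiled functions with
# hash-based dispatch (a dict) instead of a statically indexed array.
# Assumes a shared library "./libfuncs.so" exporting e.g. double square(double).
import ctypes

class FunctionRegistry:
    """Maps function names to callables loaded from shared libraries."""

    def __init__(self):
        self._table = {}   # name -> callable (the "dynamic hash")

    def load(self, lib_path, name, restype=ctypes.c_double,
             argtypes=(ctypes.c_double,)):
        # Re-loading after an incremental recompile simply overwrites the entry.
        lib = ctypes.CDLL(lib_path)
        fn = getattr(lib, name)
        fn.restype = restype
        fn.argtypes = list(argtypes)
        self._table[name] = fn

    def call(self, name, *args):
        # One hash lookup per call; raises KeyError if not yet compiled/loaded.
        return self._table[name](*args)

if __name__ == "__main__":
    registry = FunctionRegistry()
    # Hypothetical library and symbol produced by an incremental compile step.
    registry.load("./libfuncs.so", "square")
    print(registry.call("square", 3.0))
```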
Törnblom, John. "Improving Quality of Avionics Software Using Mutation Testing." Thesis, Linköpings universitet, Databas och informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-105456.
Burke, Patrick William. "A New Look at Retargetable Compilers." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699988/.
Zárybnický, Jakub. "Just-in-time kompilace závisle typovaného lambda kalkulu [Just-in-time compilation of a dependently typed lambda calculus]." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445576.
Langner, Daniel, and Christoff Bürger. "Die C# Schnittstelle der Referenzattributgrammatik-gesteuerten Graphersetzungsbibliothek RACR: Übersicht, Anwendung und Implementierung [The C# interface of the reference-attribute-grammar-controlled graph-rewriting library RACR: overview, application and implementation]." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-191908.
Langner, Daniel, and Christoff Bürger. "Die C# Schnittstelle der Referenzattributgrammatik-gesteuerten Graphersetzungsbibliothek RACR: Übersicht, Anwendung und Implementierung: Entwicklerhandbuch [The C# interface of the reference-attribute-grammar-controlled graph-rewriting library RACR: overview, application and implementation: developer manual]." Technische Universität Dresden, 2015. https://tud.qucosa.de/id/qucosa%3A29142.
Prinz, Andreas. "Formal Semantics for SDL." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2001. http://dx.doi.org/10.18452/13752.
In this habilitation thesis the formal semantics of the standardised specification language SDL (Specification and Description Language) is described. Because of the size of the language SDL a representative subset of the language called RSDL (Restricted SDL) was selected to present the concepts of the formal definition. In this thesis two major parts are covered: the definition of the formal semantics and its implementation. The RSDL formal semantics is intelligible, easily comparable with the informal description and represents the general understanding of RSDL. We distinguish between two phases of the definition, namely the static semantics and the dynamic semantics. The static semantics comprises the definition of a concrete grammar, a set of correctness constraints, a set of transformation rules and an abstract syntax as basis for the dynamic semantics. The result of the static semantics is a representation of the specification in abstract syntax. The dynamic semantics starts with the abstract syntax. From here a behaviour model is derived based on the theory of Abstract State Machines (ASM). In order to keep the presentation intelligible a special abstract machine is defined using ASM. This abstract machine in fact represents an abstract SDL-machine. The formal semantics describes the properties of SDL exactly. However, in order to check the correctness of the formalisation, it has to be compared with the informal language description and the intentions of the language designers. This is most easily done using a correct implementation of the semantics. The implementation of the semantics is based on a representation of the input as an abstract syntax tree. For implementing the semantics with minimal effort existing tools are used. The compiler is produced using the standard tools lex and yacc. After parsing the remaining processing is defined over abstract syntax trees, which is covered by a tool called kimwitu. The formal semantics of RSDL is implemented using these tools. The same approach is applicable for SDL.
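The following is a minimal sketch of the two-phase structure the abstract describes: an abstract syntax tree produced by a static-semantics phase, executed by an ASM-style interpreter whose update rules transform an explicit state. The toy language, node names, and state representation are assumptions for illustration and are unrelated to the actual RSDL definition.

```python
# Minimal sketch of the two-phase idea: an abstract syntax tree (the output of
# a "static semantics" phase) executed by an ASM-style interpreter whose step
# function applies state updates until no rule fires. Toy language only.
from dataclasses import dataclass

# --- abstract syntax (output of the static-semantics phase) ------------------
@dataclass
class Assign:          # x := e
    name: str
    expr: object

@dataclass
class Const:
    value: int

@dataclass
class Var:
    name: str

# --- ASM-style dynamic semantics ----------------------------------------------
def evaluate(expr, store):
    if isinstance(expr, Const):
        return expr.value
    if isinstance(expr, Var):
        return store.get(expr.name, 0)
    raise TypeError(f"unknown expression node: {expr!r}")

def run(program):
    # The "abstract machine state": a program counter plus a variable store.
    state = {"pc": 0, "store": {}}
    while state["pc"] < len(program):          # step until no rule applies
        stmt = program[state["pc"]]
        # Update rule for Assign: compute the value, update the store, advance.
        state["store"][stmt.name] = evaluate(stmt.expr, state["store"])
        state["pc"] += 1
    return state["store"]

if __name__ == "__main__":
    prog = [Assign("x", Const(2)), Assign("y", Var("x"))]
    print(run(prog))   # {'x': 2, 'y': 2}
```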
Farkas, Alex Miklós. "Program construction and evolution in a persistent integrated programming environment." Title page, contents and abstract only, 1995. http://web4.library.adelaide.edu.au/theses/09PH/09phf229.pdf.
Bürger, Christoff. "RACR: A Scheme Library for Reference Attribute Grammar Controlled Rewriting." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-104623.
Bürger, Christoff. "RACR: A Scheme Library for Reference Attribute Grammar Controlled Rewriting: Developer Manual." Technische Universität Dresden, 2012. https://tud.qucosa.de/id/qucosa%3A25494.
Royer, Véronique. "Compilation dirigée par la sémantique : une méthode constructive [Semantics-directed compilation: a constructive method]." Toulouse 3, 1986. http://www.theses.fr/1986TOU30170.
Yang, Zhenkun. "Scalable Equivalence Checking for Behavioral Synthesis." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2461.
Mutigwe, Charles. "Automatic synthesis of application-specific processors." Thesis, Bloemfontein : Central University of Technology, Free State, 2012. http://hdl.handle.net/11462/163.
Full textThis thesis describes a method for the automatic generation of appli- cation speci_c processors. The thesis was organized into three sepa- rate but interrelated studies, which together provide: a justi_cation for the method used, a theory that supports the method, and a soft- ware application that realizes the method. The _rst study looked at how modern day microprocessors utilize their hardware resources and it proposed a metric, called core density, for measuring the utilization rate. The core density is a function of the microprocessor's instruction set and the application scheduled to run on that microprocessor. This study concluded that modern day microprocessors use their resources very ine_ciently and proposed the use of subset processors to exe- cute the same applications more e_ciently. The second study sought to provide a theoretical framework for the use of subset processors by developing a generic formal model of computer architecture. To demonstrate the model's versatility, it was used to describe a number of computer architecture components and entire computing systems. The third study describes the development of a set of software tools that enable the automatic generation of application speci_c proces- sors. The FiT toolkit automatically generates a unique Hardware Description Language (HDL) description of a processor based on an application binary _le and a parameterizable template of a generic mi- croprocessor. Area-optimized and performance-optimized custom soft processors were generated using the FiT toolkit and the utilization of the hardware resources by the custom soft processors was character- ized. The FiT toolkit was combined with an ANSI C compiler and a third-party tool for programming _eld-programmable gate arrays (FPGAs) to create an unconstrained C-to-silicon compiler.
Wu, Pei-Chi (吳培基). "Integrating Generative and Compositional Techniques in Compiler Construction." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/97807113513094312522.
National Chiao Tung University, Institute of Computer Science and Information Engineering, ROC academic year 83 (1994-95).
Compiler front-ends today are getting bigger and more complex. Compiler designers need a software architecture that can survive or be reused during evolution. There are two kinds of techniques to approach software reuse: compositional and generative. Compositional techniques are rarely explored in compiler domains. Generative techniques work well for some compilation tasks; however, few production-quality compilers adopt generative techniques for semantic analysis. This thesis proposes an approach that integrates both techniques in compiler construction and presents some improvements in both techniques. First, we identify the issues in extending attribute grammars (AGs) into a practical specification method and sketch a framework of software components for compiler construction. Our approach adopts AGs as the backbone of compilers and employs reusable components in AG specifications. Second, we present a semantic specification language for compiler construction. The language extends AGs with object-oriented features, remote attribute access constructs, and interfaces with state-transitional components. Remote attribute access in the language is based on a theoretical model called remote dependency AGs. Third, we present the software components for symbol processing and data-flow analysis, and an object-oriented code generation interface. By employing reusable components to handle circular dependency and non-tree structures, we can remove the specification difficulties due to theoretical limitations of AGs. Compiler construction can be greatly simplified using our approach. Finally, we discuss the efficiency of AGs and object-oriented programming techniques, including the AG circularity test, fine tuning of attribute evaluators, and techniques for constructing software components.
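As a rough illustration of combining an attribute-grammar-style specification with reusable components, the sketch below computes a synthesized type attribute over a toy expression tree while delegating name lookup to a separate symbol-table component. It is an assumption-laden illustration, not the specification language developed in the thesis; all class and method names are invented.

```python
# Illustrative sketch (not the thesis's language): attribute-grammar-style
# semantic analysis where tree nodes compute a synthesized "type" attribute
# and delegate name lookup to a reusable symbol-table component, mirroring the
# idea of combining AG specifications with prebuilt components.
class SymbolTable:
    """Reusable component handling the non-tree-structured symbol information."""
    def __init__(self):
        self._scopes = [{}]
    def define(self, name, typ):
        self._scopes[-1][name] = typ
    def lookup(self, name):
        for scope in reversed(self._scopes):
            if name in scope:
                return scope[name]
        raise KeyError(f"undeclared identifier: {name}")

class IntLit:
    def __init__(self, value):
        self.value = value
    def type_of(self, symtab):               # synthesized attribute
        return "int"

class Ident:
    def __init__(self, name):
        self.name = name
    def type_of(self, symtab):               # attribute computed via the component
        return symtab.lookup(self.name)

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def type_of(self, symtab):
        lt, rt = self.left.type_of(symtab), self.right.type_of(symtab)
        if lt != rt:
            raise TypeError(f"type mismatch: {lt} + {rt}")
        return lt

if __name__ == "__main__":
    symtab = SymbolTable()
    symtab.define("x", "int")
    print(Add(Ident("x"), IntLit(1)).type_of(symtab))   # int
```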
Yang, Ji-Tzay (楊基載). "Design and Implementation of a Semantic Specification Language for Compiler Construction." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/05243466956935846689.
National Chiao Tung University, Institute of Computer Science and Information Engineering, ROC academic year 83 (1994-95).
Compiler front-ends for contemporary programming languages are getting bigger and more elaborate than ever. There are in general two approaches to software reuse within compiler construction: generative and compositional. This thesis presents a specification language called AG++ which is designed to integrate both approaches. AG++ adopts attribute grammars (AGs) and their extensions as its theoretical foundation. It introduces constructs for modularity, remote attribute access, collective computation, and object-oriented views of tree nodes, making compiler specifications concise and modular. In addition, it allows reusable components to be employed in a specification to compensate for the theoretical limitations of AGs when handling circular dependencies and non-tree structures.
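The sketch below only conveys the flavour of two of the listed constructs, object-oriented views of tree nodes and remote attribute access, by modelling attribute equations as ordinary methods and resolving a remote attribute through the parent chain. It is not AG++ syntax; the node classes and attribute names are invented for illustration.

```python
# Flavour-only sketch (not AG++): tree nodes as objects, attribute equations as
# methods, and "remote" attribute access walking the parent chain instead of
# copying values through every intermediate node.
class Node:
    def __init__(self, *children):
        self.children = children
        self.parent = None
        for child in children:
            child.parent = self

    def remote(self, attr):
        # Remote attribute access: find the nearest ancestor defining `attr`.
        node = self.parent
        while node is not None:
            if hasattr(node, attr):
                return getattr(node, attr)()
            node = node.parent
        raise AttributeError(attr)

class Block(Node):
    def declared_vars(self):                 # attribute defined on Block nodes
        return {"x", "y"}

class Use(Node):
    def __init__(self, name):
        super().__init__()
        self.name = name
    def is_declared(self):                   # consults a remote attribute
        return self.name in self.remote("declared_vars")

if __name__ == "__main__":
    use = Use("x")
    Block(use)                               # hooks up the parent pointer
    print(use.is_declared())                 # True
```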
Yang, Yue-Lin (楊岳霖). "The Design and Implementation of a Course Aid for Teaching Compiler Construction." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/8667vd.
Richards, Timothy David. "Generalized Instruction Selector Generation: The Automatic Construction of Instruction Selectors from Descriptions of Compiler Internal Forms and Target Machines." 2010. https://scholarworks.umass.edu/open_access_dissertations/178.
Richards, Timothy D. "Generalized instruction selector generation: The automatic construction of instruction selectors from descriptions of compiler internal forms and target machines." 2010. https://scholarworks.umass.edu/dissertations/AAI3397741.
Cavazos, John. "Automatically constructing compiler optimization heuristics using supervised learning." 2005. https://scholarworks.umass.edu/dissertations/AAI3163654.
Full textYeh, Yin-Lin, and 葉盈麟. "Object-Oriented Dependence-Analyzer Construction for Parallelizing Compilers." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/97798187458325268228.
National Cheng Kung University, Department of Computer Science and Information Engineering, ROC academic year 86 (1997-98).
The main difference between parallelizing compilers and conventional compilers is in the loop transformation of sequential languages. In general, the process of loop transformation includes dependence analysis, loop optimization, measurement of the effect of loop parallelization, and parallel code generation. Object-oriented software development technology, because it encapsulates data structures and operations into objects and uses facilities such as inheritance, polymorphism, and dynamic binding to increase software reusability, has raised software productivity for many applications with the support of object libraries. In this thesis we use object-oriented methods and techniques to study the construction of an object-oriented dependence analyzer for parallelizing compilers. First, we study the front-end subsystem, which generates the loop structure and extracts the features of the processed loop. The logic of this subsystem is to analyze the syntax of the input language in terms of objects and then identify the classes contained in that syntax. Next, we design an operation flow for the dependence-analysis subsystem, which includes two major components, a loop normalizer and a dependence analyzer. We investigate the objects and functions involved in the two subsystems, establish classes for these objects, and use them to form a class library. An experiment testing the feasibility of building a dependence analyzer from the existing objects shows that it is indeed feasible. This implies that, using the object library together with compiler-compilers such as Lex, Yacc, Bison, Flex, etc., the cost of developing parallelizing compilers can be reduced significantly.
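One classic ingredient of such a dependence analyzer is the GCD test for a pair of array subscripts; the sketch below wraps it in a small class to echo the object-oriented framing, but the class design and names are illustrative assumptions rather than the thesis's actual library.

```python
# Illustrative sketch of one classic dependence-analysis ingredient: the GCD
# test. For a write to A[a*i + b] and a read of A[c*i + d] inside a loop, a
# dependence requires integer solutions of a*i1 - c*i2 = d - b, which can exist
# only if gcd(a, c) divides (d - b).
from math import gcd

class GcdDependenceTest:
    def __init__(self, write_coeff, write_const, read_coeff, read_const):
        self.a, self.b = write_coeff, write_const
        self.c, self.d = read_coeff, read_const

    def may_depend(self):
        """True if a dependence between the two accesses is possible."""
        g = gcd(self.a, self.c)
        if g == 0:                 # both coefficients zero: constant subscripts
            return self.b == self.d
        return (self.d - self.b) % g == 0

if __name__ == "__main__":
    # A[2*i] written, A[2*i + 1] read: gcd(2, 2) = 2 does not divide 1.
    print(GcdDependenceTest(2, 0, 2, 1).may_depend())   # False: independent
    # A[2*i] written, A[2*i + 4] read: gcd divides 4, dependence possible.
    print(GcdDependenceTest(2, 0, 2, 4).may_depend())   # True
```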