Academic literature on the topic 'Language compilers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Language compilers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Language compilers"

1

Zimmermann, Wolf, and Thilo Gaul. "On the Construction of Correct Compiler Back-Ends: An ASM-Approach." JUCS - Journal of Universal Computer Science 3, no. 5 (1997): 504–67. https://doi.org/10.3217/jucs-003-05-0504.

Full text
Abstract:
Existing works on the construction of correct compilers have at least one of the following drawbacks: (i) correct compilers do not compile into machine code of existing processors; instead they compile into programs of an abstract machine which ignores limitations and properties of real-life processors. (ii) The code generated by correct compilers is orders of magnitude slower than the code generated by unverified compilers. (iii) The considered source language is much less complex than real-life programming languages. This paper focuses on the construction of correct compiler back-ends which generate machine code for real-life processors from realistic intermediate languages. Our main results are the following: (i) We present a proof approach based on abstract state machines for bottom-up rewriting system specifications (BURS) for back-end generators. A significant part of this proof can be parametrized with the intermediate and machine language. (ii) The performance of the code constructed by our approach is in the same order of magnitude as the code generated by non-optimizing unverified C compilers.
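As a quick illustration of the bottom-up rewriting idea behind BURS-style back-end generators (not the paper's verified ASM construction), the following Python sketch tiles a small expression tree with cost-annotated patterns and keeps the cheapest cover; the IR node names, patterns, and mnemonics are invented for the example.

```python
# Minimal bottom-up tree tiling in the spirit of BURS instruction selection.
# IR nodes, patterns, and mnemonics are invented for illustration only.

class Node:
    def __init__(self, op, *kids):
        self.op, self.kids = op, kids

# Each pattern: (root op, kid nonterminals, result nonterminal, cost, mnemonic)
PATTERNS = [
    ("CONST", (),             "reg", 1, "li   r, imm"),
    ("TEMP",  (),             "reg", 0, ""),            # value already in a register
    ("ADD",   ("reg", "reg"), "reg", 1, "add  r, r1, r2"),
    ("MUL",   ("reg", "reg"), "reg", 3, "mul  r, r1, r2"),
    ("LOAD",  ("reg",),       "reg", 2, "lw   r, 0(r1)"),
]

def label(node):
    """Bottom-up pass: compute the cheapest way to produce each nonterminal."""
    kid_costs = [label(k) for k in node.kids]
    best = {}   # nonterminal -> (cost, mnemonic of the root instruction)
    for op, kids_nt, result_nt, cost, emit in PATTERNS:
        if op != node.op or len(kids_nt) != len(node.kids):
            continue
        total, ok = cost, True
        for nt, kc in zip(kids_nt, kid_costs):
            if nt not in kc:
                ok = False
                break
            total += kc[nt][0]
        if ok and (result_nt not in best or total < best[result_nt][0]):
            best[result_nt] = (total, emit)
    return best

# (a + 4) * load(b): cheapest cover and its total cost
tree = Node("MUL", Node("ADD", Node("TEMP"), Node("CONST")), Node("LOAD", Node("TEMP")))
print(label(tree)["reg"])   # (7, 'mul  r, r1, r2')
```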
2

Paraskevopoulou, Zoe, John M. Li, and Andrew W. Appel. "Compositional optimizations for CertiCoq." Proceedings of the ACM on Programming Languages 5, ICFP (2021): 1–30. http://dx.doi.org/10.1145/3473591.

Full text
Abstract:
Compositional compiler verification is a difficult problem that focuses on separate compilation of program components with possibly different verified compilers. Logical relations are widely used in proving correctness of program transformations in higher-order languages; however, they do not scale to compositional verification of multi-pass compilers due to their lack of transitivity. The only known technique to apply to compositional verification of multi-pass compilers for higher-order languages is parametric inter-language simulations (PILS), which is however significantly more complicated than traditional proof techniques for compiler correctness. In this paper, we present a novel verification framework for lightweight compositional compiler correctness. We demonstrate that by imposing the additional restriction that program components are compiled by pipelines that go through the same sequence of intermediate representations, logical relation proofs can be transitively composed in order to derive an end-to-end compositional specification for multi-pass compiler pipelines. Unlike traditional logical-relation frameworks, our framework supports divergence preservation—even when transformations reduce the number of program steps. We achieve this by parameterizing our logical relations with a pair of relational invariants. We apply this technique to verify a multi-pass, optimizing middle-end pipeline for CertiCoq, a compiler from Gallina (Coq’s specification language) to C. The pipeline optimizes and closure-converts an untyped functional intermediate language (ANF or CPS) to a subset of that language without nested functions, which can be easily code-generated to low-level languages. Notably, our pipeline performs more complex closure-allocation optimizations than the state of the art in verified compilation. Using our novel verification framework, we prove an end-to-end theorem for our pipeline that covers both termination and divergence and applies to whole-program and separate compilation, even when different modules are compiled with different optimizations. Our results are mechanized in the Coq proof assistant.
3

Chitra, A., and G. Sudha Sadasivam. "Design and Implementation of a Componentised IDL Compiler." Journal of Integrated Design and Process Science: Transactions of the SDPS, Official Journal of the Society for Design and Process Science 6, no. 3 (2002): 75–91. http://dx.doi.org/10.3233/jid-2002-6305.

Full text
Abstract:
An interface definition language (IDL) is a language describing the interfaces between components. IDL compilers generate stubs that provide communicating processes with the abstraction of local object invocation or procedure call. Typical IDL compilers are limited to a single IDL and target language, but the proposed IDL compiler is based on the insight that IDLs are true languages amenable to modern compilation techniques. Through the support of an intermediate language representation called the Abstract Object Interface (AOI), our compiler can support multiple IDLs and target languages. Given an IDL (for example, CORBA), the IDL compiler can generate stubs and skeletons for different target languages such as Java and C++ and for different distributed object technologies such as Remote Method Invocation/Java Remote Method Protocol (RMI/JRMP), RMI/Internet Inter-ORB Protocol (RMI/IIOP) and Common Object Request Broker Architecture (CORBA/IIOP). Further, interoperability can be achieved between them using a single compiler.
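To make the stub-generation idea concrete, here is a hypothetical sketch, unrelated to the authors' AOI-based compiler, that turns a toy in-memory interface description into a Python client proxy whose methods forward calls to a pluggable transport; the interface format, transport protocol, and generated API are all invented.

```python
# Toy stub generator: turns an interface description into a client-side proxy class.
# The "IDL", transport protocol, and generated API are invented for illustration.

from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    params: list          # parameter names

@dataclass
class Interface:
    name: str
    operations: list      # list of Operation

def generate_stub(interface):
    """Emit Python source for a proxy whose methods marshal calls to a transport."""
    lines = [f"class {interface.name}Stub:",
             "    def __init__(self, transport):",
             "        self._transport = transport"]
    for op in interface.operations:
        args = ", ".join(op.params)
        lines += [
            f"    def {op.name}(self, {args}):",
            "        # marshal the request and let the transport deliver it",
            f"        return self._transport.invoke('{interface.name}', '{op.name}', [{args}])",
        ]
    return "\n".join(lines)

calc = Interface("Calculator", [Operation("add", ["x", "y"]),
                                Operation("sub", ["x", "y"])])
source = generate_stub(calc)
print(source)

# A local 'transport' is enough to exercise the generated stub.
namespace = {}
exec(source, namespace)

class LoopbackTransport:
    def invoke(self, iface, op, args):
        return (iface, op, args)   # a real transport would serialize and send this

stub = namespace["CalculatorStub"](LoopbackTransport())
print(stub.add(2, 3))   # ('Calculator', 'add', [2, 3])
```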
4

Tassarotti, Joseph, and Jean-Baptiste Tristan. "Verified Density Compilation for a Probabilistic Programming Language." Proceedings of the ACM on Programming Languages 7, PLDI (2023): 615–37. http://dx.doi.org/10.1145/3591245.

Full text
Abstract:
This paper presents ProbCompCert, a compiler for a subset of the Stan probabilistic programming language (PPL), in which several key compiler passes have been formally verified using the Coq proof assistant. Because of the probabilistic nature of PPLs, bugs in their compilers can be difficult to detect and fix, making verification an interesting possibility. However, proving correctness of PPL compilation requires new techniques because certain transformations performed by compilers for PPLs are quite different from other kinds of languages. This paper describes techniques for verifying such transformations and their application in ProbCompCert. In the course of verifying ProbCompCert, we found an error in the Stan language reference manual related to the semantics and implementation of a key language construct.
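For readers unfamiliar with what a PPL compiler's central transformation produces, the following sketch turns a tiny, Stan-inspired model description into an unnormalized log-density function in Python; the model format, distribution set, and function names are invented for illustration and are unrelated to ProbCompCert's verified passes.

```python
# Sketch: "compile" a tiny probabilistic model into an unnormalized log-density.
# The model format and supported distributions are invented for illustration.

import math

def log_normal_pdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# A model is a list of (variable, distribution, parameter) statements.
# Parameters may refer to previously declared variables by name.
MODEL = [
    ("mu", "normal", (0.0, 10.0)),
    ("y",  "normal", ("mu", 1.0)),
]

def compile_model(model):
    """Return a function mapping a dict of variable values to a total log-density."""
    def log_density(values):
        total = 0.0
        for var, dist, (loc, scale) in model:
            loc_v = values[loc] if isinstance(loc, str) else loc
            scale_v = values[scale] if isinstance(scale, str) else scale
            if dist == "normal":
                total += log_normal_pdf(values[var], loc_v, scale_v)
            else:
                raise ValueError(f"unknown distribution {dist}")
        return total
    return log_density

target = compile_model(MODEL)
print(target({"mu": 0.5, "y": 1.2}))   # unnormalized log posterior at (mu=0.5, y=1.2)
```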
5

Ebresafe, Oghenevwogaga, Ian Zhao, Ende Jin, Arthur Bright, Charles Jian, and Yizhou Zhang. "Certified Compilers à la Carte." Proceedings of the ACM on Programming Languages 9, PLDI (2025): 372–95. https://doi.org/10.1145/3729261.

Full text
Abstract:
Certified compilers are complex software systems. Like other large systems, they demand modular, extensible designs. While there has been progress in extensible metatheory mechanization, scaling extensibility and reuse to meet the demands of full compiler verification remains a major challenge. We respond to this challenge by introducing novel expressive power to a proof language. Our language design equips the Rocq prover with an extensibility mechanism inspired by the object-oriented ideas of late binding, mixin composition, and family polymorphism. We implement our design as a plugin for Rocq, called Rocqet. We identify strategies for using Rocqet’s new expressive power to modularize the monolithic design of large certified developments as complex as the CompCert compiler. The payoff is a high degree of modularity and reuse in the formalization of intermediate languages, ISAs, compiler transformations, and compiler extensions, with the ability to compose these reusable components—certified compilers à la carte. We report significantly improved proof-compilation performance compared to earlier work on extensible metatheory mechanization. We also report good performance of the extracted compiler.
6

Yang, Chenyuan, Yinlin Deng, Runyu Lu, et al. "WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 709–35. http://dx.doi.org/10.1145/3689736.

Full text
Abstract:
Compiler correctness is crucial, as miscompilation can falsify program behaviors, leading to serious consequences over the software supply chain. In the literature, fuzzing has been extensively studied to uncover compiler defects. However, compiler fuzzing remains challenging: existing approaches focus on black- and grey-box fuzzing, which generates test programs without sufficient understanding of internal compiler behaviors. As such, they often fail to construct test programs to exercise intricate optimizations. Meanwhile, traditional white-box techniques, such as symbolic execution, are computationally inapplicable to the giant codebase of compiler systems. Recent advances demonstrate that Large Language Models (LLMs) excel in code generation/understanding tasks and have even achieved state-of-the-art performance in black-box fuzzing. Nonetheless, guiding LLMs with compiler source-code information remains a missing piece of research in compiler testing. To this end, we propose WhiteFox, the first white-box compiler fuzzer using LLMs with source-code information to test compiler optimization, with a spotlight on detecting deep logic bugs in the emerging deep learning (DL) compilers. WhiteFox adopts a multi-agent framework: (i) an LLM-based analysis agent examines the low-level optimization source code and produces requirements on the high-level test programs that can trigger the optimization; (ii) an LLM-based generation agent produces test programs based on the summarized requirements. Additionally, optimization-triggering tests are also used as feedback to further enhance the test generation prompt on the fly. Our evaluation on the three most popular DL compilers (i.e., PyTorch Inductor, TensorFlow-XLA, and TensorFlow Lite) shows that WhiteFox can generate high-quality test programs to exercise deep optimizations requiring intricate conditions, exercising up to 8 times more optimizations than state-of-the-art fuzzers. To date, WhiteFox has found a total of 101 bugs for the compilers under test, with 92 confirmed as previously unknown and 70 already fixed. Notably, WhiteFox has recently been acknowledged by the PyTorch team and is in the process of being incorporated into its development workflow. Finally, beyond DL compilers, WhiteFox can also be adapted for compilers in different domains, such as LLVM, where WhiteFox has already found multiple bugs.
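The multi-agent loop can be pictured with the schematic Python sketch below; the LLM call, prompts, and compiler harness are stubbed stand-ins, so it shows only the control flow (analysis agent, generation agent, feedback), not WhiteFox's actual implementation.

```python
# Schematic of a white-box, LLM-guided fuzzing loop (control flow only).
# ask_llm(), the prompts, and the compiler harness are stand-ins, not a real API.

import random

def ask_llm(prompt):
    """Placeholder for a call to a large language model."""
    return f"<LLM response to: {prompt[:40]}...>"

def summarize_optimization(source_snippet):
    # Analysis agent: derive requirements that a test program must meet
    # to trigger the optimization implemented by `source_snippet`.
    return ask_llm("Describe inputs that trigger this optimization:\n" + source_snippet)

def generate_tests(requirements, feedback_examples, n=3):
    # Generation agent: produce candidate test programs from the requirements,
    # reusing previously triggering tests as few-shot examples in the prompt.
    prompt = requirements + "\nExamples:\n" + "\n".join(feedback_examples)
    return [ask_llm(prompt) for _ in range(n)]

def compile_and_check(test_program):
    # Harness: returns (triggered_optimization, found_bug); randomized stand-in here.
    return random.random() < 0.3, random.random() < 0.05

def fuzz(optimization_source, rounds=5):
    requirements = summarize_optimization(optimization_source)
    triggering, bugs = [], []
    for _ in range(rounds):
        for test in generate_tests(requirements, triggering[-3:]):
            triggered, buggy = compile_and_check(test)
            if triggered:
                triggering.append(test)   # feed back into the next prompt
            if buggy:
                bugs.append(test)
    return bugs

print(len(fuzz("if (is_constant(x)) fold(x);")), "suspicious tests collected")
```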
7

Mironov, Sergei Vladimirovich, Inna Aleksandrovna Batraeva, and Pavel Dmitrievich Dunaev. "Library for Development of Compilers." Proceedings of the Institute for System Programming of the RAS 34, no. 5 (2022): 77–88. http://dx.doi.org/10.15514/ispras-2022-34(5)-5.

Full text
Abstract:
This work is devoted to the development of a library designed for implementing compilers. The article contains a description of the library's features and the main points of its functioning. In the course of the work, the generation of parsers using LR(1) automata was studied and implemented, and two auxiliary languages were designed and implemented: a semantic network query language and a language designed to generate executable code. The result of the work is a library for the .NET platform (tested with the C# language), which contains classes that simplify the implementation of source code parsing, semantic analysis, and executable file generation. This library has no external dependencies other than the standard .NET library.
8

Livinskii, Vsevolod, Dmitry Babokin, and John Regehr. "Fuzzing Loop Optimizations in Compilers for C++ and Data-Parallel Languages." Proceedings of the ACM on Programming Languages 7, PLDI (2023): 1826–47. http://dx.doi.org/10.1145/3591295.

Full text
Abstract:
Compilers are part of the foundation upon which software systems are built; they need to be as correct as possible. This paper is about stress-testing loop optimizers; it presents a major reimplementation of Yet Another Random Program Generator (YARPGen), an open-source generative compiler fuzzer. This new version has found 122 bugs, both in compilers for data-parallel languages, such as the Intel® Implicit SPMD Program Compiler and the Intel® oneAPI DPC++ compiler, and in C++ compilers such as GCC and Clang/LLVM. The first main contribution of our work is a novel method for statically avoiding undefined behavior when generating loops; the resulting programs conform to the relevant language standard, enabling automated testing. The second main contribution is a collection of mechanisms for increasing the diversity of generated loop code; in our evaluation, we demonstrate that these make it possible to trigger loop optimizations significantly more often, providing opportunities to discover bugs in the optimizers.
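A recurring trick for generating loops that are free of undefined behavior is to choose iteration counts, steps, and initial values whose worst-case result provably fits the target integer type. The Python sketch below illustrates only that bound-tracking idea; it is not YARPGen's algorithm, and the emitted C-like snippet and its limits are invented for the example.

```python
# Generate a simple counted loop whose arithmetic provably stays within 32-bit range,
# so the emitted C-like program has no signed-overflow undefined behavior.
# This illustrates the bound-tracking idea only, not YARPGen's machinery.

import random

INT32_MAX = 2**31 - 1

def gen_safe_loop():
    trip_count = random.randint(1, 1_000)
    # Choose a step and an initial value so that init + trip_count * step <= INT32_MAX.
    step = random.randint(1, 1_000)
    max_init = INT32_MAX - trip_count * step
    init = random.randint(0, max_init)
    body = [
        f"int acc = {init};",
        f"for (int i = 0; i < {trip_count}; ++i) {{",
        f"    acc += {step};   /* bounded: {init} + {trip_count} * {step} <= INT32_MAX */",
        "}",
        "printf(\"%d\\n\", acc);",
    ]
    return "\n".join(body)

random.seed(0)
print(gen_safe_loop())
```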
9

Hartel, Pieter H., Marc Feeley, Martin Alt, et al. "Benchmarking implementations of functional languages with ‘Pseudoknot’, a float-intensive benchmark." Journal of Functional Programming 6, no. 4 (1996): 621–55. http://dx.doi.org/10.1017/s0956796800001891.

Full text
Abstract:
Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of ‘typical’ applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.
10

Serrano, Manuel. "Of JavaScript AOT compilation performance." Proceedings of the ACM on Programming Languages 5, ICFP (2021): 1–30. http://dx.doi.org/10.1145/3473575.

Full text
Abstract:
The fastest JavaScript production implementations use just-in-time (JIT) compilation and the vast majority of academic publications about implementations of dynamic languages published during the last two decades focus on JIT compilation. This does not imply that static compilers (AoT) cannot be competitive; as comparatively little effort has been spent creating fast AoT JavaScript compilers, a scientific comparison is lacking. This paper presents the design and implementation of an AoT JavaScript compiler, focusing on a performance analysis. The paper reports on two experiments, one based on standard JavaScript benchmark suites and one based on new benchmarks chosen for their diversity of styles, authors, sizes, provenance, and coverage of the language. The first experiment shows an advantage to JIT compilers, which is expected after the decades of effort that these compilers have paid to these very tests. The second shows more balanced results, as the AoT compiler generates programs that reach competitive speeds and that consume significantly less memory. The paper presents and evaluates techniques that we have either invented or adapted from other systems, to improve AoT JavaScript compilation.
More sources

Dissertations / Theses on the topic "Language compilers"

1

Seefried, Sean. "Language extension via dynamically extensible compilers." PhD thesis, Computer Science and Engineering, University of New South Wales, 2006. http://handle.unsw.edu.au/1959.4/29524.

Full text
Abstract:
This dissertation provides the motivation for and evidence in favour of an approach to language extension via dynamic loading of plug-ins. There is a growing realisation that language features are often a superior choice to software libraries for implementing applications. Among the benefits are increased usability, safety and efficiency. Unfortunately, designing and implementing new languages is difficult and time consuming. Thus, reuse of language infrastructure is an attractive implementation avenue. The central question then becomes, what is the best method to extend languages? Much research has focussed on methods of extension based on using features of the language itself such as macros or reflection. This dissertation focuses on a complementary solution: plug-in compilers. In this approach languages are extended at run-time via dynamic extensions to compilers, called plug-ins. Plug-ins can be used to extend the expressiveness, safety and efficiency of languages. However, a plug-in compiler provides other benefits. Plug-in compilers encourage modularity, lower the barrier of entry to development, and facilitate the distribution and use of experimental language extensions. This dissertation describes how plug-in support is added, to both the front and back-end of a compiler, and demonstrates their application through a pair of case studies.
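A minimal way to picture a plug-in compiler is a driver that discovers transformation passes at run time and threads the program representation through them. The Python sketch below uses importlib for the dynamic loading; the module name, the register hook, and the toy transformation are invented and stand in for the dissertation's front- and back-end plug-in interfaces.

```python
# Minimal plug-in pass driver: passes are discovered at run time and applied in order.
# The module name and the `register` hook are invented for this illustration.

import importlib
import sys
import types

def load_plugins(module_names):
    """Import each plug-in module and collect the passes it registers."""
    passes = []
    for name in module_names:
        module = importlib.import_module(name)
        module.register(passes)          # each plug-in appends callables
    return passes

def compile_with_plugins(program, passes):
    for transform in passes:
        program = transform(program)
    return program

# For a self-contained demo, fabricate one plug-in module in memory.
demo = types.ModuleType("demo_plugin")
demo.register = lambda passes: passes.append(lambda prog: prog.replace("let", "var"))
sys.modules["demo_plugin"] = demo

passes = load_plugins(["demo_plugin"])
print(compile_with_plugins("let x = 1", passes))   # var x = 1
```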
2

Reig Galilea, Fermín Javier. "Compiler architecture using a portable intermediate language." PhD thesis, University of Glasgow, 2002. http://theses.gla.ac.uk/686/.

Full text
Abstract:
Ph.D. thesis submitted to the Department of Computing Science, University of Glasgow, 2002. Includes bibliographical references. Print version also available.
3

Junaidu, Sahalu B. "A parallel functional language compiler for message-passing multicomputers." Thesis, University of St Andrews, 1998. http://hdl.handle.net/10023/13450.

Full text
Abstract:
The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised and it is the largest successfully parallelised Haskell program having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language which is envisioned to be run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs but the quantum-based scheduler is the best in programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
4

Cardone, Richard Joseph. "Language and compiler support for mixin programming." PhD dissertation, The University of Texas at Austin, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077428.

Full text
5

Fross, Bradley K. "Splash-2 shared-memory architecture for supporting high level language compilers." Thesis, Virginia Tech, 1995. http://hdl.handle.net/10919/42064.

Full text
Abstract:
Modern computer technology has been evolving for nearly fifty years, and has seen many architectural innovations along the way. One of the latest technologies to come about is the reconfigurable processor-based custom computing machine (CCM). CCMs use field programmable gate arrays (FPGAs) as their processing cores, giving them the flexibility of software systems with performance comparable to that of dedicated custom hardware. Hardware description languages are currently used to program CCMs. However, research is being performed to investigate the use of high-level languages (HLLs), such as the C programming language, to create CCM programs. Many aspects of CCM architectures, such as local memory systems, are not conducive to HLL compiler usage. This thesis proposes and evaluates the use of a shared-memory architecture on a Splash-2 CCM to promote the development and usage of HLL compilers for CCM systems.
Master of Science
6

Moon, Hae-Kyung. "Compiler construction for a simple Pascal-like language." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897511.

Full text
Abstract:
In this thesis a compiler called SPASCAL is implemented which translates source programs in a simple Pascal-like language called SPASCAL into target programs in the VAX assembly language. This thesis clearly describes the main aspects of a compiler: lexical analysis and syntactic analysis, including the symbol-table routines and the error-handling routines. This thesis uses regular expressions to define the lexical structure and a context-free grammar to define the syntactic structure of SPASCAL. The compiler is constructed using syntax-directed translation, context-free grammars and a set of semantic rules. The SPASCAL compiler is written in standard C under UNIX.
Department of Computer Science
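As a companion to the lexical-analysis discussion, here is a small sketch of a regular-expression-driven scanner for a Pascal-like fragment; the token set and keywords are invented and are not SPASCAL's.

```python
# Tiny regular-expression lexer for a Pascal-like fragment (illustrative token set).
import re

TOKEN_SPEC = [
    ("NUMBER",   r"\d+"),
    ("ASSIGN",   r":="),
    ("IDENT",    r"[A-Za-z_]\w*"),
    ("OP",       r"[+\-*/]"),
    ("SEMI",     r";"),
    ("SKIP",     r"[ \t\n]+"),
    ("MISMATCH", r"."),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

KEYWORDS = {"begin", "end", "if", "then", "while", "do"}

def tokenize(source):
    for match in MASTER.finditer(source):
        kind, text = match.lastgroup, match.group()
        if kind == "SKIP":
            continue                                  # drop whitespace
        if kind == "MISMATCH":
            raise SyntaxError(f"unexpected character {text!r}")
        if kind == "IDENT" and text.lower() in KEYWORDS:
            kind = "KEYWORD"                          # promote reserved words
        yield kind, text

print(list(tokenize("begin x := x + 42; end")))
```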
7

Seaton, Christopher Graham. "Specialising dynamic techniques for implementing the Ruby programming language." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/specialising-dynamic-techniques-for-implementing-the-ruby-programming-language(0899248b-bbec-4d4c-9507-f775f023407c).html.

Full text
Abstract:
The Ruby programming language is dynamically typed, uses dynamic and late bound dispatch for all operators, method calls and many control structures, and provides extensive metaprogramming and introspective tooling functionality. Unlike other languages where these features are available, in Ruby their use is not avoided and key parts of the Ruby ecosystem use them extensively, even for inner-loop operations. This makes a high-performance implementation of Ruby problematic. Existing implementations either do not attempt to dynamically optimise Ruby programs, or achieve relatively limited success in optimising Ruby programs containing these features. One way that the community has worked around the limitations of existing Ruby implementations is to write extension modules in the C programming language. These are statically compiled and then dynamically linked into the Ruby implementation. Compared to equivalent Ruby, this C code is often more efficient for computationally intensive code. However the interface that these C extensions provides is defined by the non-optimising reference implementation of Ruby. Implementations which want to optimise by using different internal representations must do extensive copying to provide the same interface. This then limits the performance of the C extensions in those implementations. This leaves Ruby in the difficult position where it is not only difficult to implement the language efficiently, but the previous workaround for that problem, C extensions, also limits efforts to improve performance. This thesis describes an implementation of the Ruby programming language which embraces the Ruby language and optimises specifically for Ruby as it is used in practice. It provides a high performance implementation of Ruby's dynamic features, at the same time as providing a high performance implementation of C extensions. The implementation provides a high level of compatibility with existing Ruby implementations and does not limit the available features in order to achieve high performance. Common to all the techniques that are described in this thesis is the concept of specialisation. The conventional approach taken to optimise a dynamic language such as Ruby is to profile the program as it runs. Feedback from the profiling can then be used to specialise the program for the data and control flow it is actually experiencing. This thesis extends and advances that idea by specialising for conditions beyond normal data and control flow. Programs that call a method, or lookup a variable or constant by dynamic name rather than literal syntax can be specialised for the dynamic name by generalising inline caches. Debugging and introspective tooling is implemented by specialising the code for debug conditions such as the presence of a breakpoint or an attached tracing tool. C extensions are interpreted and dynamically optimised rather than being statically compiled, and the interface which the C code is programmed against is provided as an abstraction over the underlying implementation which can then independently specialise. The techniques developed in this thesis have a significant impact on performance of both synthetic benchmarks and kernels from real-world Ruby programs. The implementation of Ruby which has been developed achieves an order of magnitude or better increase in performance compared to the next-best implementation. 
In many cases the techniques are 'zero-overhead', in that the generated machine code is exactly the same for when the most dynamic features of Ruby are used, as when only static features are used.
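The generalised inline cache mentioned above, specialising a dynamic lookup on the method name as well as the receiver's class, can be approximated in a few lines. The Python sketch below is only a conceptual illustration of the caching idea, not the implementation described in the thesis.

```python
# Illustration of an inline cache for dynamic method lookup, keyed on both the
# receiver's class and the (possibly dynamically computed) method name.
# Conceptual sketch only, not the thesis's implementation.

class InlineCache:
    def __init__(self):
        self.key = None        # (receiver class, method name) seen last time
        self.target = None     # resolved method for that key
        self.hits = self.misses = 0

    def call(self, receiver, name, *args):
        key = (type(receiver), name)
        if key != self.key:                          # slow path: look the method up
            self.misses += 1
            self.key, self.target = key, getattr(type(receiver), name)
        else:
            self.hits += 1                           # fast path: reuse cached target
        return self.target(receiver, *args)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

cache = InlineCache()
for _ in range(1000):
    cache.call(Point(3, 4), "magnitude")             # monomorphic call site
print(cache.hits, cache.misses)                      # 999 1
```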
8

Hessaraki, Alireza. "CCC86, a generic 8086 C-language cross compiler plus communication package." Virtual Press, 1987. http://liblink.bsu.edu/uhtbin/catkey/544004.

Full text
Abstract:
The Cross Compiler is an excellent and valuable program development tool. It provides the user with a low-level compiled language that allows character (byte), integer (8086 word) and pointer (8086 one-word address) manipulation. It also allows recursion, has modern control flow and a rich set of operators. The Communication Program, which includes a file transfer utility, allows the student to download or upload their C programs to a PC. It allows use of a modem. File transfer can be done using XON/XOFF or XMODEM. It also supports the INS 8250 UART chip, plus the 16450 high-speed device found in hardware such as the IBM AT Serial/Parallel Adapter.
Department of Computer Science
9

Calnan, Paul W., III. "EXTRACT: Extensible Transformation and Compiler Technology." Master's thesis, Worcester Polytechnic Institute, 2003. https://digitalcommons.wpi.edu/etd-theses/484.

Full text
Abstract:
Code transformation is widely used in programming. Most developers are familiar with using a preprocessor to perform syntactic transformations (symbol substitution and macro expansion). However, it is often necessary to perform more complex transformations using semantic information contained in the source code. In this thesis, we developed EXTRACT, a general-purpose code transformation language. Using EXTRACT, it is possible to specify, in a modular and extensible manner, a variety of transformations on Java code such as insertion, removal, and restructuring. In support of this, we also developed JPath, a path language for identifying portions of Java source code. Combined, these two technologies make it possible to identify source code that is to be transformed and then specify how that code is to be transformed. We evaluate our technology using three case studies: a type name qualifier which transforms Java class names into fully-qualified class names; a contract checker which enforces pre- and post-conditions across behavioral subtypes; and a code obfuscator which mangles the names of a class's methods and fields such that they cannot be understood by a human, without breaking the semantic content of the class.
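Although EXTRACT and JPath target Java, the flavour of AST-level (rather than textual) transformation can be shown with Python's standard ast module; the call-renaming rule below is an invented example, not one of the thesis's case studies.

```python
# AST-level transformation using Python's standard `ast` module:
# rename every call to `old_name` into a call to `new_name`.
# The rule is an invented example of semantic (not textual) rewriting.

import ast

class RenameCalls(ast.NodeTransformer):
    def __init__(self, old_name, new_name):
        self.old_name, self.new_name = old_name, new_name

    def visit_Call(self, node):
        self.generic_visit(node)                       # transform nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == self.old_name:
            node.func = ast.copy_location(ast.Name(self.new_name, ast.Load()), node.func)
        return node

source = "total = legacy_sum(legacy_sum(1, 2), 3)"
tree = ast.parse(source)
tree = RenameCalls("legacy_sum", "fast_sum").visit(tree)
print(ast.unparse(ast.fix_missing_locations(tree)))    # total = fast_sum(fast_sum(1, 2), 3)
```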
10

Cook, Philip John. "Incremental compilation in language-based environments." Thesis, University of Queensland, St. Lucia, Qld., 2006. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19173.pdf.

Full text
More sources

Books on the topic "Language compilers"

1

Kaplan, Randy M. Constructing language processors for little languages. Wiley, 1994.

Find full text
2

Kiong, Derek Beng Kee. Compiler technology: Tools, translators, and language implementation. Kluwer Academic Publishers, 1997.

Find full text
3

Lemone, Karen A. Fundamentals of compilers: An introduction to computer language translation. CRC Press, 1991.

Find full text
4

Research Institute for Advanced Computer Science (U.S.), ed. The paradigm compiler: Mapping a functional language for the Connection Machine. Research Institute for Advanced Computer Science, NASA Ames Research Center, 1989.

Find full text
5

Brinch Hansen, Per. Brinch Hansen on Pascal compilers. Prentice-Hall International, 1985.

Find full text
6

SpringerLink (Online service), ed. Programming Language Concepts. Springer London, 2012.

Find full text
7

Dos Reis, Anthony J. Compiler construction using Java, JavaCC, and Yacc. Wiley-IEEE Computer Society, 2011.

Find full text
8

Kiong, Derek Beng Kee. Compiler Technology: Tools, Translators and Language Implementation. Springer US, 1997.

Find full text
9

SunPro (Firm), ed. SPARCompiler C++ 3.0.1 language system: Release notes. SunPro, 1992.

Find full text
10

Appel, Andrew W. Modern compiler implementation in ML. Cambridge University Press, 1998.

Find full text
More sources

Book chapters on the topic "Language compilers"

1

Koskimies, Kai. "Software engineering aspects in language implementation." In Compiler Compilers and High Speed Compilation. Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51364-7_3.

Full text
2

Schumi, Richard, and Jun Sun. "SpecTest: Specification-Based Compiler Testing." In Fundamental Approaches to Software Engineering. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71500-7_14.

Full text
Abstract:
Compilers are error-prone due to their high complexity. They are relevant not only for general purpose programming languages, but also for many domain specific languages. Bugs in compilers can potentially render all programs at risk. It is thus crucial that compilers are systematically tested, if not verified. Recently, a number of efforts have been made to formalise and standardise programming language semantics, which can be applied to verify the correctness of the respective compilers. In this work, we present a novel specification-based testing method named SpecTest to better utilise these semantics for testing. By applying an executable semantics as test oracle, SpecTest can discover deep semantic errors in compilers. Compared to existing approaches, SpecTest is built upon a novel test coverage criterion called semantic coverage which brings together mutation testing and fuzzing to specifically target less tested language features. We apply SpecTest to systematically test two compilers, i.e., the Java compiler and the Solidity compiler. SpecTest improves the semantic coverage of both compilers considerably and reveals multiple previously unknown bugs.
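The oracle idea, running the compiled artifact and an executable reference semantics on the same inputs and comparing results, can be sketched as follows; the toy expression language, the stand-in 'compiler', and the mutation step are all invented for the illustration and bear no relation to SpecTest's implementation.

```python
# Sketch of specification-based compiler testing: an executable reference semantics
# serves as the oracle against which compiled behavior is compared.
# The toy language, "compiler", and mutation step are invented for illustration.

import random

def reference_eval(expr):
    """Executable semantics: the ground truth for our toy expression language."""
    return eval(expr, {"__builtins__": {}}, {})

def compile_expr(expr):
    """Stand-in 'compiler': here it just produces a Python lambda for the expression."""
    return eval(f"lambda: {expr}", {"__builtins__": {}}, {})

def mutate(expr):
    """Mutation/fuzzing step: randomly nudge a literal to explore nearby programs."""
    digits = [i for i, c in enumerate(expr) if c.isdigit()]
    i = random.choice(digits)
    return expr[:i] + str(random.randint(0, 9)) + expr[i + 1:]

def spec_test(seed_expr, iterations=100):
    mismatches, expr = [], seed_expr
    for _ in range(iterations):
        expr = mutate(expr)
        if compile_expr(expr)() != reference_eval(expr):     # oracle comparison
            mismatches.append(expr)
    return mismatches

random.seed(1)
print(spec_test("(1 + 2) * 3 - 4"))    # [] -- no mismatches for this trivial "compiler"
```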
3

Mughal, Khalid Azim. "Generation of incremental indirect threaded code for language-based programming environments." In Compiler Compilers and High Speed Compilation. Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51364-7_18.

Full text
4

Lipps, Peter, Ulrich Möncke, and Reinhard Wilhelm. "OPTRAN - A language/system for the specification of program transformations: System overview and experiences." In Compiler Compilers and High Speed Compilation. Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51364-7_4.

Full text
5

Strout, Michelle Mills, Saumya Debray, Kate Isaacs, et al. "Language-Agnostic Optimization and Parallelization for Interpreted Languages." In Languages and Compilers for Parallel Computing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-35225-7_4.

Full text
6

Kuncak, Viktor, Patrick Lam, and Martin Rinard. "A Language for Role Specifications." In Languages and Compilers for Parallel Computing. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-35767-x_24.

Full text
7

Hukerikar, Saurabh, and Christian Engelmann. "Language Support for Reliable Memory Regions." In Languages and Compilers for Parallel Computing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52709-3_6.

Full text
8

Chamberlain, Bradford L., E. Christopher Lewis, and Lawrence Snyder. "Language Support for Pipelining Wavefront Computations." In Languages and Compilers for Parallel Computing. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44905-1_20.

Full text
9

Stichnoth, James M., and Thomas Gross. "A communication backend for parallel language compilers." In Languages and Compilers for Parallel Computing. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0014202.

Full text
10

Sura, Zehra, Chi-Leung Wong, Xing Fang, Jaejin Lee, Samuel P. Midkiff, and David Padua. "Automatic Implementation of Programming Language Consistency Models." In Languages and Compilers for Parallel Computing. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11596110_12.

Full text

Conference papers on the topic "Language compilers"

1

Yu, Xiangzhong, Wai Kin Wong, and Shuai Wang. "EMI Testing of Large Language Model (LLM) Compilers." In 2024 IEEE 35th International Symposium on Software Reliability Engineering Workshops (ISSREW). IEEE, 2024. https://doi.org/10.1109/issrew63542.2024.00076.

Full text
2

Chae, Hyungjoo, Yeonghyeon Kim, Seungone Kim, et al. "Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.1253.

Full text
3

Amruth, S. Jaya, T. M. Nigelesh, V. Sai Shruthik, Valluru Sateesh Reddy, and Meena Belwal. "A Quantum Computation Language-based Quantum Compiler." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10724823.

Full text
4

Dimitrov, Dimitar, and Ivaylo Penev. "Design of a Training Compiler for Increasing the Efficiency of Language Processors Learning." In eLSE 2021. ADL Romania, 2021. http://dx.doi.org/10.12753/2066-026x-21-077.

Full text
Abstract:
The paper presents the design of a training compiler developed for the purposes of education in compilers and language processors in computer science courses. The presented compiler has two main advantages compared to known training compilers used in various universities: a simplified modular structure and the construction of an explicit abstract syntax tree of the input program. The modules in the compiler structure are a lexical analyzer, a syntactic analyzer, a semantic analyzer and a code generator. This separation allows students to effectively study the main stages of compilation: lexical analysis, parsing, semantic analysis and code generation. Building and visualizing an explicit abstract syntax tree helps students understand the translation of the program in the compiler's front end and make the transition to the compiler's back end. The compiler translates a program written in a high-level language into virtual machine code. An interpreter that executes the generated virtual machine code is also presented. The presented design is compared to other known training compilers used in various university courses. The input language is procedurally oriented and is a subset of the C and Java languages, which makes it easy for students to use. The language has enough resources to solve many practical problems. The input program for the compiler is a sequence of definitions of variables and functions. The language of the training compiler is strongly typed: variables, constants and expressions have a specific type, input-output operations require arguments of certain types, and arithmetic-logical operations are defined for specific argument and result types. The paper concludes with the results of the training compiler translating a sample input program into code for a virtual machine. The results demonstrate the output of each compiler module: a token stream, an abstract syntax tree, and a set of virtual machine instructions. The structure of the presented training compiler can be used for different input languages in courses on compilers and language processors.
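To give a flavour of the module boundaries described above (token stream, abstract syntax tree, virtual-machine instructions), here is a deliberately tiny Python sketch of a final code-generation stage that walks an AST and emits stack-machine instructions; the node shapes and instruction set are invented and are unrelated to the training compiler itself.

```python
# Last stage of a toy training compiler: walk an abstract syntax tree and emit
# instructions for a small stack-based virtual machine. All names are invented.

def gen(node, code):
    kind = node[0]
    if kind == "num":
        code.append(("PUSH", node[1]))
    elif kind == "var":
        code.append(("LOAD", node[1]))
    elif kind in ("+", "-", "*"):
        gen(node[1], code)                 # left operand
        gen(node[2], code)                 # right operand
        code.append({"+": ("ADD",), "-": ("SUB",), "*": ("MUL",)}[kind])
    elif kind == "assign":
        gen(node[2], code)
        code.append(("STORE", node[1]))
    return code

def run(code):
    """Tiny interpreter for the generated instructions (stack machine)."""
    stack, env = [], {}
    for instr in code:
        op = instr[0]
        if op == "PUSH":
            stack.append(instr[1])
        elif op == "LOAD":
            stack.append(env[instr[1]])
        elif op == "STORE":
            env[instr[1]] = stack.pop()
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a - b if op == "SUB" else a * b)
    return env

# x = (2 + 3) * 4  -->  {'x': 20}
syntax_tree = ("assign", "x", ("*", ("+", ("num", 2), ("num", 3)), ("num", 4)))
print(run(gen(syntax_tree, [])))
```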
5

Zaafrani, Abderrazek, and Xinmin Tian. "Performance Portability of XL HPF Compiler on IBM SP2 and SMP Multiprocessors." In International Symposium on Computer Architecture and High Performance Computing. Sociedade Brasileira de Computação, 1999. http://dx.doi.org/10.5753/sbac-pad.1999.19767.

Full text
Abstract:
High Performance Fortran (HPF) is a data-parallel programming language that allows the programmer to specify the data decomposition onto the processors while the compiler takes care of the tedious tasks of communication generation and computation partitioning. Shifting some of the complex tasks from the user to the compiler should encourage programmers to write and port code to parallel machines, especially if the compiler implements these tasks efficiently. In this paper, performance results and analysis of a subset of SPEC92 are presented for the XL HPF compiler on IBM SP2 machines. In addition to obtaining good performance from the compiler, one of the main concerns of HPF users is portability. Experimental results and analysis are presented in this paper to investigate performance portability (consistency), first across multiprocessor architectures and then across compilers. For performance portability across multiprocessor machines, the same XL HPF compiler used for the IBM SP2 distributed-memory machine experiment is also used to compile and execute the same applications on IBM SMP machines. The comparable speedup and behaviour obtained for both machines indicate that HPF compilers can be portable across different architectures. For performance portability across compilers, various HPF programming techniques and recommendations are introduced to increase the chances of obtaining performance consistency with different HPF compilers.
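The abstract's central idea, that the programmer states a data decomposition and the compiler derives who computes what, can be illustrated with the owner-computes rule over a BLOCK-distributed array. The Python sketch below is a generic illustration under invented names; it is not HPF syntax or the XL HPF compiler's strategy.

```python
# Owner-computes sketch for a BLOCK-distributed 1-D array: each "processor"
# updates only the indices it owns. Generic illustration, not HPF or XL HPF.

def block_bounds(n, nprocs, p):
    """Indices [lo, hi) of array cells owned by processor p under BLOCK distribution."""
    base, extra = divmod(n, nprocs)
    lo = p * base + min(p, extra)
    hi = lo + base + (1 if p < extra else 0)
    return lo, hi

def parallel_scale(a, factor, nprocs):
    # In a real system each processor would run this loop on its own local memory;
    # here we simply iterate over the per-processor index ranges.
    for p in range(nprocs):
        lo, hi = block_bounds(len(a), nprocs, p)
        for i in range(lo, hi):
            a[i] *= factor
    return a

print(block_bounds(10, 4, 0), block_bounds(10, 4, 3))   # (0, 3) (8, 10)
print(parallel_scale(list(range(10)), 2, 4))            # [0, 2, 4, ..., 18]
```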
6

Davidson, J. "Session details: Compilers." In PLDI06: ACM SIGPLAN Conference on Programming Language Design and Implementation 2006. ACM, 2006. http://dx.doi.org/10.1145/3245510.

Full text
7

Pearce, David J. "Language Design Meets Verifying Compilers (Keynote)." In GPCE '22: 21st ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences. ACM, 2022. http://dx.doi.org/10.1145/3564719.3570917.

Full text
8

Rompf, Tiark, Arvind K. Sujeeth, Kevin J. Brown, HyoukJoong Lee, Hassan Chafi, and Kunle Olukotun. "Surgical precision JIT compilers." In PLDI '14: ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM, 2014. http://dx.doi.org/10.1145/2594291.2594316.

Full text
9

Padhye, Rohan, Koushik Sen, and Paul N. Hilfinger. "ChocoPy: a programming language for compilers courses." In the 2019 ACM SIGPLAN Symposium. ACM Press, 2019. http://dx.doi.org/10.1145/3358711.3361627.

Full text
10

Biggar, Paul, Edsko de Vries, and David Gregg. "A practical solution for scripting language compilers." In the 2009 ACM symposium. ACM Press, 2009. http://dx.doi.org/10.1145/1529282.1529709.

Full text

Reports on the topic "Language compilers"

1

Willcock, Jeremiah J. A Language for Specifying Compiler Optimizations for Generic Software. Office of Scientific and Technical Information (OSTI), 2007. http://dx.doi.org/10.2172/926400.

Full text
2

Amarasinghe, Saman. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing. Office of Scientific and Technical Information (OSTI), 2015. http://dx.doi.org/10.2172/1176882.

Full text
3

Gulden, Samuel L. The Development of a Compiler Design Course With Ada as The Implementation Language. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada265126.

Full text
4

Furey, John, Austin Davis, and Jennifer Seiter-Moser. Natural language indexing for pedoinformatics. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41960.

Full text
Abstract:
The multiple schema for the classification of soils rely on differing criteria but the major soil science systems, including the United States Department of Agriculture (USDA) and the international harmonized World Reference Base for Soil Resources soil classification systems, are primarily based on inferred pedogenesis. Largely these classifications are compiled from individual observations of soil characteristics within soil profiles, and the vast majority of this pedologic information is contained in nonquantitative text descriptions. We present initial text mining analyses of parsed text in the digitally available USDA soil taxonomy documentation and the Soil Survey Geographic database. Previous research has shown that latent information structure can be extracted from scientific literature using Natural Language Processing techniques, and we show that this latent information can be used to expedite query performance by using syntactic elements and part-of-speech tags as indices. Technical vocabulary often poses a text mining challenge due to the rarity of its diction in the broader context. We introduce an extension to the common English vocabulary that allows for nearly-complete indexing of USDA Soil Series Descriptions.
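The indexing idea described above, keying an inverted index on a token together with its part-of-speech tag so that queries can filter by syntactic role, can be sketched in a few lines; the naive suffix-based tagger below is an invented stand-in for a real NLP pipeline.

```python
# Sketch of an inverted index keyed on (token, part-of-speech) pairs, so that a
# query can ask for a word only in a given syntactic role. The suffix-based
# tagger is a deliberately naive stand-in for a real NLP toolkit.

from collections import defaultdict

def naive_pos(token):
    if token.endswith("ly"):
        return "ADV"
    if token.endswith(("ed", "ing")):
        return "VERB"
    return "NOUN"

def build_index(documents):
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[(token, naive_pos(token))].add(doc_id)
    return index

docs = {
    1: "poorly drained clay soils",
    2: "the profile was drained rapidly",
}
index = build_index(docs)
print(sorted(index[("drained", "VERB")]))   # [1, 2]
print(sorted(index[("poorly", "ADV")]))     # [1]
```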
5

Yang, Lian. The object-oriented design of a hardware description language analyser for the DIADES silicon compiler system. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.6144.

Full text
6

Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [N.p.], 2021. http://dx.doi.org/10.31812/123456789/4431.

Full text
Abstract:
The article is devoted to a comparative analysis of popular online dictionaries and an overview of the main tools these resources provide for studying a language. The use of dictionaries in learning a foreign language is an important step towards understanding the language. The effectiveness of this process increases with the use of online dictionaries, which offer many tools for improving the educational process. Based on the Alexa Internet resource, the most popular online dictionaries were identified: Cambridge Dictionary, Wordreference, Merriam-Webster, Wiktionary, TheFreeDictionary, Dictionary.com, Glosbe, Collins Dictionary, Longman Dictionary, Oxford Dictionary. As a result of a deep analysis of these online dictionaries, we found that they share standard functions such as word explanations, transcription, audio pronunciation, semantic connections, and examples of use. In the examined dictionaries, we also identified additional tools for learning foreign languages (mostly English) that can be effective. In total, we describe sixteen functions of the online learning platforms that can be useful in learning a foreign language. We compiled a comparison table based on the following functions: machine translation, multilingualism, video of pronunciation, image of a word, discussion, collaborative editing, word rank, hints, learning tools, thesaurus, paid services, content sharing, hyperlinks in a definition, registration, word lists, mobile version, etc. Based on the additional tools of the online dictionaries, we created a diagram that shows the functionality of the analyzed platforms.
7

Jones, Larry, and Frank Glandorf. Integrated Information Support System (IISS). Volume 8. User Interface Subsystem. Part 14. Forms Language Compiler Development Specification. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada182582.

Full text
8

Morenc, Carol, Sandy Barker, and Penny Robie. Integrated Information Support System (IISS). Volume 8. User Interface Subsystem. Part 15. Forms Language Compiler Product Specification. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada182583.

Full text
9

Glandorf, Frank. Integrated Information Support System (IISS). Volume 8. User Interface Subsystem. Part 16. Forms Language Compiler Unit Test Plan. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada182584.

Full text
10

Lavadenz, Magaly, Linda R. G. Kaminski, and Elvira G. Armas. California’s Treasures: Supporting Superdiverse Youth through Research, Policy and Practice. Center for Equity for English Learners, 2025. https://doi.org/10.15365/ceel.policy.16.

Full text
Abstract:
This research and policy brief provides an overview of the Center for Equity for English Learner’s Superdiverse Adolescent Multilingual Learners Resource Guide. The publication follows the Global California 2030 call to recognize and promote the home languages and cultures of English Learners as valuable resources to increase multilingualism within the state. The term “Superdiverse” is used to acknowledge the many facets of diversity that make up the identities of English/Multilingual Learners in addition to the breadth of linguistic diversity encompassed within their language journeys. Twenty-six English/Multilingual Learners between grades seven and twelve were interviewed about their school experiences as culturally and linguistically diverse adolescents. These students represent a vast array of diverse identities and language typologies from across California, and Superdiverse Adolescent profiles were created for each participant in addition to analysis of the interviews. Six of these profiles are presented in this brief to highlight the key aspects of Superdiverse student experiences, including advocacy, the importance of language support, the value of welcoming environments, multilingual pride, and cultural identity. Additionally, student insights from the interviews were compiled into three thematic modules of support for Superdiverse youth in education: (1) School Culture and Climate, (2) Culturally and Linguistically Sustaining Education, and (3) Systems of Excellence. These modules, their corresponding elements, and relevant research are presented along with educational policy recommendations at the state, district, and school level, as well as for educator preparation programs.