Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Compilers (Computer programs).'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
The tag cloud gives you access to even more related research topics, and the buttons after every section of the page let you consult extended lists of books, articles, etc. on the chosen topic.
While compilers generally support parallel programming languages and APIs, their internal program representations are mostly designed from the standpoint of sequential programs (exceptions include source-to-source parallel compilers, for instance). This makes the integration of compilation techniques dedicated to parallel programs more challenging. In addition, parallelism has various levels and different targets, each of them with specific characteristics and constraints. With the advent of multi-core processors and general-purpose accelerators, parallel computing is now a common and pervasive consideration. Thus, software support for parallel programming activities is essential to make this technical transition more realistic and beneficial. The case of compilers is fundamental, as they deal with (parallel) programs at a structural level, hence the need for intermediate representations. This article surveys and discusses attempts to provide intermediate representations for the proper support of explicitly parallel programs. We highlight the gap between available contributions and their concrete implementation in compilers, and then exhibit possible future research directions.
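To make the idea of a parallel intermediate representation concrete, here is a minimal Python sketch, invented for illustration and not taken from the surveyed work: a parallel loop is a first-class IR node, so analyses can query its parallel semantics directly instead of pattern-matching on opaque runtime calls. All class and field names are hypothetical.

```python
# Illustrative sketch only: a toy IR in which a parallel region is a first-class node.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Stmt:
    """Base class for IR statements."""


@dataclass
class Assign(Stmt):
    target: str
    expr: str


@dataclass
class ParallelFor(Stmt):
    """Explicitly parallel loop: iterations may run concurrently."""
    index: str
    trip_count: str
    body: List[Stmt] = field(default_factory=list)
    shared: List[str] = field(default_factory=list)
    private: List[str] = field(default_factory=list)


def privatized_vars(node: Stmt) -> List[str]:
    """An analysis can inspect parallel structure directly, which is the
    point of representing parallelism explicitly in the IR."""
    return node.private if isinstance(node, ParallelFor) else []


loop = ParallelFor(index="i", trip_count="n",
                   body=[Assign("a[i]", "b[i] + c[i]")],
                   shared=["a", "b", "c"], private=["i"])
print(privatized_vars(loop))  # ['i']
```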
Dold, Axel, Friedrich von Henke, and Wolfgang Goerigk. "A Completely Verified Realistic Bootstrap Compiler." International Journal of Foundations of Computer Science 14, no. 04 (August 2003): 659–80. http://dx.doi.org/10.1142/s0129054103001947.
This paper reports on a large verification effort in constructing an initial fully trusted bootstrap compiler executable for a realistic system programming language and a real target processor. The construction and verification process comprises three tasks. First, the verification of the compiling specification (a relation between abstract source and target programs) with respect to the language semantics and a realistic correctness criterion; this proof has been completely mechanized using the PVS verification system and is one of the largest case studies in formal verification we are aware of. Second, the implementation of the specification in the high-level source language following a transformational approach. Finally, the implementation and verification of a binary executable written in the compiler's target language. For the latter task, a realistic technique has been developed, based on rigorous a-posteriori syntactic code inspection, which guarantees, for the first time, trusted execution of generated machine programs. The context of this work is the joint German research effort Verifix, which aims at developing methods for the construction of correct compilers for realistic source languages and real target processors.
Ciric, Miroslav, and Svetozar Rancic. "Parsing in different languages." Facta universitatis - series: Electronics and Energetics 18, no. 2 (2005): 299–307. http://dx.doi.org/10.2298/fuee0502299c.
A compiler is a translator that accepts as input a formatted source file or files and produces as output a file that may be run directly on a computer. Given the same ANSI C++ compliant input file, two different ANSI C++ compliant compilers running on the same operating system produce two different executable programs that should execute in exactly the same way. To some degree, this is achieved by the standardization of the C++ language, but it is also possible because computer programming languages like C++ can be compiled using reliable technologies with long traditions and well-understood characteristics. LALR(k), as a practical version of LR, is such a reliable parsing technology. The traditional LALR(1) tool YACC has proved its value over years of successful applications. Nowadays there are a few commercial and non-commercial alternatives that are very interesting and promising. This paper examines some of them and their ability to generate parsers in different programming languages.
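As a rough illustration of the yacc-style LALR(1) tooling discussed above, the sketch below uses PLY, a yacc-like parser generator for Python; the toy expression grammar is invented for this example and is not taken from the paper.

```python
# Minimal PLY (Python Lex-Yacc) example: an LALR(1) grammar for arithmetic expressions.
import ply.lex as lex
import ply.yacc as yacc

tokens = ('NUMBER', 'PLUS', 'TIMES')
t_PLUS = r'\+'
t_TIMES = r'\*'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    t.lexer.skip(1)  # skip illegal characters

def p_expr_plus(p):
    'expr : expr PLUS term'
    p[0] = p[1] + p[3]

def p_expr_term(p):
    'expr : term'
    p[0] = p[1]

def p_term_times(p):
    'term : term TIMES factor'
    p[0] = p[1] * p[3]

def p_term_factor(p):
    'term : factor'
    p[0] = p[1]

def p_factor_number(p):
    'factor : NUMBER'
    p[0] = p[1]

def p_error(p):
    pass  # ignore syntax errors in this toy example

lexer = lex.lex()
parser = yacc.yacc()       # builds the LALR(1) tables, as yacc would
print(parser.parse('2+3*4'))  # 14
```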
Steele, James K., and Ronald R. Biederman. "Powder Diffraction Pattern Simulation and Analysis." Advances in X-ray Analysis 37 (1993): 101–7. http://dx.doi.org/10.1154/s0376030800015561.
The graphics capability and speed available in modern personal computers have encouraged an increase in the use of a direct pattern comparison approach to the analysis of x-ray and electron diffraction patterns. Several researchers over the past 30 years have presented programs and algorithms which calculate and display powder patterns for x-ray diffraction. These programs originally required a mainframe computer, which was expensive and generally not available to all researchers. With the recent advances in the speed of personal computers, language compilers, and high-resolution graphics, especially within the past 5 years, real-time calculation and display of calculated patterns is becoming widely available. The power of this approach will be demonstrated through the use of an IBM-compatible personal computer code developed by the authors.
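For readers unfamiliar with what such a program computes, the short Python sketch below derives 2θ peak positions for a cubic lattice from Bragg's law (nλ = 2d sin θ); the wavelength and lattice parameter are example values, not data from the paper.

```python
# Illustrative only: 2-theta peak positions for a simple cubic lattice from Bragg's law,
# the core calculation behind a simulated powder diffraction pattern.
import math

wavelength = 1.5406   # Cu K-alpha wavelength in angstroms (example value)
a = 4.05              # cubic lattice parameter in angstroms (example value)

for h, k, l in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0)]:
    d = a / math.sqrt(h**2 + k**2 + l**2)   # interplanar spacing for (hkl)
    s = wavelength / (2.0 * d)              # sin(theta) from n*lambda = 2*d*sin(theta), n = 1
    if s <= 1.0:                            # reflection is geometrically observable
        two_theta = 2.0 * math.degrees(math.asin(s))
        print(f"({h}{k}{l}): d = {d:.3f} A, 2theta = {two_theta:.2f} deg")
```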
Burgin, Mark. "Triadic Automata and Machines as Information Transformers." Information 11, no. 2 (February 13, 2020): 102. http://dx.doi.org/10.3390/info11020102.
Algorithms and abstract automata (abstract machines) are used to describe, model, explore and improve computers, cell phones, computer networks such as the Internet, and processes in them. Traditional models of information processing systems—abstract automata—are aimed at performing transformations of data. These transformations are performed by their hardware (abstract devices) and controlled by their software (programs)—both of which stay unchanged during the whole computational process. However, in physical computers, the software is also changed by special tools such as interpreters, compilers, optimizers and translators. In addition, people change the hardware of their computers by extending the external memory. Moreover, the hardware of computer networks is incessantly altering—new computers and other devices are added while other computers and other devices are disconnected. To better represent these peculiarities of computers and computer networks, we introduce and study a more complete model of computations, which is called a triadic automaton or machine. In contrast to traditional models of computations, triadic automata (machines) perform computational processes transforming not only data but also the hardware and the programs that control data transformation. In addition, we further develop a taxonomy of classes of automata and machines, as well as of individual automata and machines, according to the information they produce.
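The distinction drawn above can be pictured with a tiny Python sketch, entirely hypothetical and not the paper's formalism: a step of a triadic machine may rewrite its data, its program, and its "hardware" (here just a memory bound), whereas a classical automaton would only touch the data.

```python
# Hypothetical illustration of a "triadic" step: data, program, and hardware
# are all subject to transformation during the computation.
def triadic_step(data, program, hardware):
    instr = program[0]
    if instr == "inc":
        data = data + 1                                             # transform the data
    elif instr == "grow":
        hardware = {**hardware, "memory": hardware["memory"] * 2}   # transform the hardware
    elif instr == "unroll":
        program = ["inc", "inc"] + program[1:]                      # transform the program itself
        return data, program, hardware
    return data, program[1:], hardware


state = (0, ["unroll", "grow"], {"memory": 4})
while state[1]:                 # run until the program is exhausted
    state = triadic_step(*state)
print(state)                    # (2, [], {'memory': 8})
```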
Rushinek, Avi, and Sara F. Rushinek. "Operating systems, compilers, assemblers and application programs: audit trails of user satisfaction." Microprocessors and Microsystems 9, no. 5 (June 1985): 241–49. http://dx.doi.org/10.1016/0141-9331(85)90272-8.
Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms - Forest (pyQuil), Qiskit, ProjectQ, and the Quantum Developer Kit (Q#) - that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.
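To give a flavour of the "language syntax through example programs" dimension of this comparison, here is a minimal Bell-state circuit in Qiskit, one of the four platforms surveyed; only circuit construction is shown, since simulator and hardware back-end APIs differ across Qiskit versions.

```python
# Minimal Qiskit example: build (but do not execute) a 2-qubit Bell-state circuit.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
qc.h(0)                     # Hadamard puts qubit 0 into superposition
qc.cx(0, 1)                 # CNOT entangles qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # measure both qubits into the classical bits
print(qc.draw())            # text drawing of the circuit
```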
Phillips, C., and R. Perrott. "Problems with Data Parallelism." Parallel Processing Letters 11, no. 01 (March 2001): 77–94. http://dx.doi.org/10.1142/s0129626401000440.
The gradual evolution of language features and approaches used for the programming of distributed memory machines underwent substantial advances in the 1990s. One of the most promising and widely praised approaches was based on data parallelism and resulted in High Performance Fortran. This paper reports on an experiment using that approach on a commercial distributed memory machine, available compilers and simple test programs. The results are disappointing and not encouraging. The variety of components involved and the lack of detailed knowledge available for the compilers compound the difficulties of obtaining results and making comparisons. The results show great variation and question the premise that communication is the decisive factor in performance determination. The results are also a contribution towards the difficult task of predicting performance on a distributed memory computer.
Jerbi, Khaled, Mickaël Raulet, Olivier Déforges, and Mohamed Abid. "Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs." VLSI Design 2012 (August 16, 2012): 1–14. http://dx.doi.org/10.1155/2012/298396.
In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called CAL Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still-image decoders are summarized. We show that the obtained results can largely satisfy the real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder using a video of CIF image size. This work resolves the main limitation of hardware generation from CAL designs.
Wu, Jiang, Jianjun Xu, Xiankai Meng, Haoyu Zhang, and Zhuo Zhang. "Enabling Reliability-Driven Optimization Selection with Gate Graph Attention Neural Network." International Journal of Software Engineering and Knowledge Engineering 30, no. 11n12 (November 2020): 1641–65. http://dx.doi.org/10.1142/s0218194020400240.
Modern compilers provide a huge number of optional compilation optimization options, so it is necessary to select the appropriate options for different programs or applications. To mitigate this problem, machine learning is widely used as an efficient technique, and ensuring the integrity and effectiveness of program information is the key to mitigating it. In addition, when selecting the best compilation optimization options, the optimization goals are usually execution speed, code size, and CPU consumption; there is not much research on program reliability. This paper proposes a Gate Graph Attention Neural Network (GGANN)-based model for compilation optimization option selection. Data-flow and function-call information are integrated into the abstract syntax tree as graph-based program features. We extend the deep neural network based on GGANN and build a learning model that learns heuristics for program reliability. The experiment is performed under the Clang compiler framework. Compared with traditional machine learning methods, our model improves the average accuracy by 5–11% in optimization option selection for program reliability. At the same time, experiments show that our model has strong scalability.
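The overall pipeline described above can be pictured with the hedged Python sketch below; the feature extraction and the trained model are placeholders (the paper's GGANN and program-graph construction are not reproduced here), and only the final compiler invocation uses real clang flags.

```python
# Hypothetical sketch of ML-driven optimization option selection:
# a trained model (placeholder here) maps program features to a flag set.
import subprocess

CANDIDATE_FLAG_SETS = [["-O1"], ["-O2"], ["-O3"], ["-O2", "-fno-strict-aliasing"]]


def extract_graph_features(source_file: str) -> list:
    """Placeholder for the paper's AST + data-flow + call-graph features."""
    text = open(source_file).read()
    return [len(text), text.count("for"), text.count("if")]


def predict_best_flags(features: list) -> list:
    """Placeholder for the trained GGANN; here just a fixed heuristic."""
    return CANDIDATE_FLAG_SETS[1] if features[1] > 10 else CANDIDATE_FLAG_SETS[0]


def compile_with_selected_flags(source_file: str, output: str) -> None:
    flags = predict_best_flags(extract_graph_features(source_file))
    subprocess.run(["clang", *flags, source_file, "-o", output], check=True)


if __name__ == "__main__":
    compile_with_selected_flags("example.c", "example")  # hypothetical input file
```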
Dissertations / Theses on the topic "Compilers (Computer programs)":
Biglari-Abhari, Morteza. "Performance improvement through predicated execution in VLIW machines." Title page, contents and abstract only, 2000. http://web4.library.adelaide.edu.au/theses/09PH/09phb593.pdf.
Park, Eun Jung. "Methodology of dynamic compiler option selection based on static program analysis implementation and evaluation /." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 74 p, 2007. http://proquest.umi.com/pqdweb?did=1407501141&sid=12&Fmt=2&clientId=8331&RQT=309&VName=PQD.
There are many optimizations that can be applied while translating Icon programs. These optimizations and the analyses needed to apply them are of interest for two reasons. First, Icon's unique combination of characteristics requires developing new techniques for implementing them. Second, these optimizations are used in a variety of languages, and Icon can be used as a medium for extending the state of the art. Many of these optimizations require detailed control of the generated code. Previous production implementations of the Icon programming language have been interpreters. The virtual machine code of an interpreter is seldom flexible enough to accommodate these optimizations, and modifying the virtual machine to add the flexibility destroys the simplicity that justified using an interpreter in the first place. These optimizations can only reasonably be implemented in a compiler. In order to explore these optimizations for Icon programs, a compiler was developed. This dissertation describes the compiler and the optimizations it employs. It also describes a run-time system designed to support the analyses and optimizations. Icon variables are untyped. The compiler contains a type inferencing system that determines what values variables and expressions may take on during program execution. This system is effective in the presence of values with pointer semantics and of assignments to components of data structures. The compiler stores intermediate results in temporary variables rather than on a stack. A simple and efficient algorithm was developed for determining the lifetimes of intermediate results in the presence of goal-directed evaluation. This allows an efficient allocation of temporary variables to intermediate results. The compiler uses information from type inferencing and liveness analysis to simplify generated code. Performance measurements on a variety of Icon programs show these optimizations to be effective.
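One of the analyses mentioned, determining the lifetimes of intermediate results, can be sketched in a few lines of Python; the backward scan below runs over a made-up straight-line block and only shows the general idea, not the dissertation's algorithm for goal-directed evaluation.

```python
# Illustrative backward liveness scan over a straight-line block:
# a temporary's lifetime ends at its last use, letting the allocator reuse slots.
def last_uses(block):
    """block: list of (defined_temp, used_temps) tuples in execution order."""
    seen = set()
    last = {}
    for i in reversed(range(len(block))):
        _, uses = block[i]
        for t in uses:
            if t not in seen:    # first occurrence going backwards...
                last[t] = i      # ...is the last use going forwards
                seen.add(t)
    return last


block = [("t1", []), ("t2", ["t1"]), ("t3", ["t1", "t2"]), ("t4", ["t3"])]
print(last_uses(block))  # {'t3': 3, 't1': 2, 't2': 2}
```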
Calnan, Paul W. "EXTRACT, Extensible Transformation and Compiler Technology." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0429103-152947.
Cardone, Richard Joseph. "Language and compiler support for mixin programming." Access restricted to users with UT Austin EID Full text (PDF) from UMI/Dissertation Abstracts International, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077428.
Lapinskii, Viktor. "Algorithms for compiler-assisted design space exploration of clustered VLIW ASIP datapaths /." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008376.
The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised; it is the largest successfully parallelised Haskell program, having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language which is envisioned to be run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program, and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs but the quantum-based scheduler is the best in programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
In order to make parallel computers convenient general tools for computational scientists, a high-level and easy-to-use portable parallel programming paradigm is mandatory. XcalableMP, which is proposed by the XcalableMP Specification Working Group, is a directive-based language extension for Fortran and C to easily describe parallelization in programs for distributed-memory parallel computers. The Omni XcalableMP compiler, which is provided as a reference XcalableMP compiler, is currently implemented as a source-to-source translator. It converts XcalableMP programs to standard MPI programs, which can be easily compiled by the native Fortran compiler and executed on most parallel computers. A three-dimensional Eulerian fluid code written in Fortran is parallelized by XcalableMP using two different programming models with the ordinary domain decomposition method, and its performance is measured on the K computer. Programs converted by the Omni XcalableMP compiler prevent native Fortran compiler optimizations and show lower performance than that of hand-coded MPI programs. Finally, almost the same performance is obtained by using specific compiler options of the native Fortran compiler in the case of the global-view programming model, but performance degradation is not improved by specifying any native compiler options when the code is parallelized by the local-view programming model.
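To give a feel for the hand-coded MPI style against which the generated code is compared, here is a small domain-decomposition sketch using mpi4py (Python bindings for MPI); it is an illustration under assumed values, not the Fortran fluid code or the Omni-generated output.

```python
# Illustrative 1-D domain decomposition with MPI: each rank sums its own slice
# of the index space and the partial sums are combined with allreduce.
# Run with, e.g.:  mpiexec -n 4 python decomp.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000                          # global problem size (example value)
chunk = (n + size - 1) // size         # ceiling division: local block size
lo, hi = rank * chunk, min(n, (rank + 1) * chunk)

local_sum = sum(float(i) for i in range(lo, hi))
total = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print("global sum =", total)
```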
Zangerl, Peter, Peter Thoman, and Thomas Fahringer. "Compiler Generated Progress Estimation for OpenMP Programs." In Lecture Notes in Computer Science, 107–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25636-4_9.
Craig, Stephen-John, and Michael Leuschel. "A Compiler Generator for Constraint Logic Programs." In Lecture Notes in Computer Science, 148–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-39866-0_17.
Kiefer, Moritz, Vladimir Klebanov, and Mattias Ulbrich. "Relational Program Reasoning Using Compiler IR." In Lecture Notes in Computer Science, 149–65. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48869-1_12.
Xia, Songtao, and James Hook. "Certifying Temporal Properties for Compiled C Programs." In Lecture Notes in Computer Science, 161–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24622-0_15.
Vitek, Jan, R. Nigel Horspool, and James S. Uhl. "Compile-time analysis of object-oriented programs." In Lecture Notes in Computer Science, 236–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55984-1_22.
Subramanian, Ram, and Santosh Pande. "Efficient program partitioning based on compiler controlled communication." In Lecture Notes in Computer Science, 4–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/bfb0097884.
Scherer, Alex, Thomas Gross, and Willy Zwaenepoel. "Adaptive Parallelism for OpenMP Task Parallel Programs." In Languages, Compilers, and Run-Time Systems for Scalable Computers, 113–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-40889-4_9.
Belyaev, Nikolay, and Sergey Kireev. "LuNA-ICLU Compiler for Automated Generation of Iterative Fragmented Programs." In Lecture Notes in Computer Science, 10–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25636-4_2.
Heberle, Andreas, Thilo Gaul, Wolfgang Goerigk, Gerhard Goos, and Wolf Zimmermann. "Construction of Verified Compiler Front-Ends with Program-Checking." In Lecture Notes in Computer Science, 481–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-46562-6_43.
Conference papers on the topic "Compilers (Computer programs)":
Sun, Yu, and Wei Zhang. "On-Line Trace Based Automatic Parallelization of Java Programs on Multicore Platforms." In 2011 INTERACT-15: 15th Workshop on Interaction between Compilers and Computer Architectures. IEEE, 2011. http://dx.doi.org/10.1109/interact.2011.11.
Ward, A. C., and W. P. Seering. "Quantitative Inference in a Mechanical Design “Compiler”." In ASME 1989 Design Technical Conferences. American Society of Mechanical Engineers, 1989. http://dx.doi.org/10.1115/detc1989-0011.
This paper introduces the theory underlying a computer program that takes as input a schematic of a mechanical or hydraulic power transmission system, plus specifications and a utility function, and returns catalog numbers from predefined catalogs for the optimal selection of components implementing the design. Unlike programs for designing single components or systems, this program provides the designer with a high-level "language" in which to compose new designs. It then performs much of the detailed design process. The process of "compilation", or transformation from a high-level to a low-level description, is based on a formalization of quantitative inferences about hierarchically organized sets of artifacts and operating conditions. This allows design compilation without the exhaustive enumeration of alternatives. The paper introduces the formalism, illustrating its use with examples. It then outlines some differences from previous work, and summarizes early tests and conclusions.
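The selection step described above can be caricatured in a few lines of Python; the catalogs, specification, and utility function are invented for illustration, and this toy simply enumerates combinations, whereas the paper's contribution is precisely to avoid such exhaustive enumeration through hierarchical quantitative inference.

```python
# Toy "design compilation": pick one part per catalog that satisfies the spec
# and maximizes a utility function (all data invented for illustration).
from itertools import product

motors = [{"id": "M-10", "torque": 12.0, "cost": 90.0},
          {"id": "M-20", "torque": 20.0, "cost": 140.0}]
gearboxes = [{"id": "G-3", "ratio": 3.0, "cost": 60.0},
             {"id": "G-5", "ratio": 5.0, "cost": 75.0}]

required_output_torque = 50.0                    # invented specification


def utility(motor, gearbox):
    return -(motor["cost"] + gearbox["cost"])    # cheaper designs score higher


feasible = [(m, g) for m, g in product(motors, gearboxes)
            if m["torque"] * g["ratio"] >= required_output_torque]
best = max(feasible, key=lambda mg: utility(*mg))
print([part["id"] for part in best])             # ['M-10', 'G-5']
```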
"Program Committee." In 9th Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT'05). IEEE, 2005. http://dx.doi.org/10.1109/interact.2005.12.
Hsu, Wei C. "Message From The Program Chair." In Proceedings. Eighth Workshop on Interaction Between Compilers and Computer Architectures - INTERACT-8 2004. IEEE, 2004. http://dx.doi.org/10.1109/intera.2004.1299503.
Ravindar, Archana, and Y. N. Srikant. "Implications of Program Phase Behavior on Timing Analysis." In 2011 INTERACT-15: 15th Workshop on Interaction between Compilers and Computer Architectures. IEEE, 2011. http://dx.doi.org/10.1109/interact.2011.12.
Bluemke, Ilona, and Konrad Billewicz. "Aspects in the Maintenance of Compiled Programs." In 2008 Third International Conference on Dependability of Computer Systems DepCoS-RELCOMEX. IEEE, 2008. http://dx.doi.org/10.1109/depcos-relcomex.2008.15.
Schardl, Tao B., Tyler Denniston, Damon Doucet, Bradley C. Kuszmaul, I.-Ting Angelina Lee, and Charles E. Leiserson. "The CSI Framework for Compiler-Inserted Program Instrumentation." In SIGMETRICS '18: ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3219617.3219657.