Academic literature on the topic 'Runtime Optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Runtime Optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Runtime Optimization"

1

Merten, M. C., A. R. Trick, R. D. Barnes, et al. "An architectural framework for runtime optimization." IEEE Transactions on Computers 50, no. 6 (2001): 567–89. http://dx.doi.org/10.1109/12.931894.

2

Lee, D., D. Blaauw, and D. Sylvester. "Runtime Leakage Minimization Through Probability-Aware Optimization." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 14, no. 10 (2006): 1075–88. http://dx.doi.org/10.1109/tvlsi.2006.884149.

3

Wang, Dekui, Zhenhua Duan, Cong Tian, Bohu Huang, and Nan Zhang. "A Runtime Optimization Approach for FPGA Routing." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 37, no. 8 (2018): 1706–10. http://dx.doi.org/10.1109/tcad.2017.2768416.

4

Gallardo, Esthela, Jérôme Vienne, Leonardo Fialho, Patricia Teller, and James Browne. "Employing MPI_T in MPI Advisor to optimize application performance." International Journal of High Performance Computing Applications 32, no. 6 (2017): 882–96. http://dx.doi.org/10.1177/1094342016684005.

Abstract:
MPI_T, the MPI Tool Information Interface, was introduced in the MPI 3.0 standard with the aim of enabling the development of more effective tools to support the Message Passing Interface (MPI), a standardized and portable message-passing system that is widely used in parallel programs. Most MPI optimization tools do not yet employ MPI_T and only describe the interactions between an application and an MPI library, thus requiring that users have expert knowledge to translate this information into optimizations. In contrast, MPI Advisor, a recently developed, easy-to-use methodology and tool for…
5

Huang, Zhengxin, and Yuren Zhou. "Runtime Analysis of Somatic Contiguous Hypermutation Operators in MOEA/D Framework." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (2020): 2359–66. http://dx.doi.org/10.1609/aaai.v34i03.5615.

Abstract:
Somatic contiguous hypermutation (CHM) operators are important variation operators in artificial immune systems. The few existing theoretical studies are only concerned with understanding the optimization behavior of CHM operators on solving single-objective optimization problems. The MOEA/D framework is one of the most popular strategies for solving multi-objective optimization problems (MOPs). In this paper, we present a runtime analysis of using two CHM operators in MOEA/D framework for solving five benchmark MOPs, including four bi-objective and one many-objective problems. Our analyses show…
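
For readers unfamiliar with this operator class, the sketch below shows one common textbook formulation of somatic contiguous hypermutation on a bit string: a segment with random start and length is chosen and every bit inside it is flipped with a per-bit probability r. The function name contiguous_hypermutation, the wrap-around segment selection, and the parameter r are illustrative assumptions; the exact operator variants analyzed by Huang and Zhou may differ.

```python
import random

def contiguous_hypermutation(bits, r=1.0):
    """Flip a randomly chosen contiguous segment of a bit string.

    bits -- list of 0/1 values (the parent solution)
    r    -- probability of flipping each position inside the chosen segment
    """
    n = len(bits)
    start = random.randrange(n)           # segment start, uniform over positions
    length = random.randint(1, n)         # segment length, uniform over 1..n
    child = bits[:]                       # mutate a copy, keep the parent intact
    for offset in range(length):
        pos = (start + offset) % n        # treat the string as a ring (wrap around)
        if random.random() < r:
            child[pos] = 1 - child[pos]   # flip the bit
    return child

# Example: one mutation of an all-zeros string of length 16
print(contiguous_hypermutation([0] * 16))
```
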
6

Singh Anjana, Parwat, N. Naga Maruthi, Sagar Gujjunoori, and Madhu Oruganti. "Runtime Parallelization of Static and Dynamic Irregular Array of Array References." International Journal of Engineering & Technology 7, no. 4.6 (2018): 150. http://dx.doi.org/10.14419/ijet.v7i4.6.20452.

Abstract:
The advancement of computer systems such as multi-core and multiprocessor systems resulted in much faster computing than earlier. However, the efficient utilization of these rich computing resources is still an emerging area. For efficient utilization of computing resources, many optimization techniques have been developed, some techniques at compile time and some at runtime. When all the information required for parallel execution is known at compile time, then optimization compilers can reasonably parallelize a sequential program. However, optimization compiler fails when it encounters compi…
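
As background on what runtime parallelization of irregular array references typically involves, the sketch below illustrates the classic inspector/executor idea: an inspector scans the index array at runtime and partitions loop iterations into dependence-free wavefronts that could each run in parallel. This generic textbook scheme is offered only as an assumption about the problem setting, not as the authors' technique.

```python
def build_wavefronts(index, n_iters):
    """Inspector for a loop of the form  a[index[i]] += f(i).

    Groups iterations into wavefronts so that no two iterations in the same
    wavefront touch the same element of a; iterations within a wavefront are
    therefore independent and could be executed in parallel."""
    last_wave = {}          # array element -> last wavefront that touched it
    wavefronts = []
    for i in range(n_iters):
        wave = last_wave.get(index[i], -1) + 1   # must follow the prior writer
        if wave == len(wavefronts):
            wavefronts.append([])
        wavefronts[wave].append(i)
        last_wave[index[i]] = wave
    return wavefronts

# Executor (sequential stand-in): a real executor would hand each wavefront
# to a thread pool, since its iterations are mutually independent.
index = [0, 3, 0, 2, 3, 1]
for wave in build_wavefronts(index, len(index)):
    print("parallel wavefront:", wave)
```
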
7

Karapetyan, Daniel, and Gregory Gutin. "A New Approach to Population Sizing for Memetic Algorithms: A Case Study for the Multidimensional Assignment Problem." Evolutionary Computation 19, no. 3 (2011): 345–71. http://dx.doi.org/10.1162/evco_a_00026.

Abstract:
Memetic algorithms are known to be a powerful technique in solving hard optimization problems. To design a memetic algorithm, one needs to make a host of decisions. Selecting the population size is one of the most important among them. Most of the algorithms in the literature fix the population size to a certain constant value. This reduces the algorithm's quality since the optimal population size varies for different instances, local search procedures, and runtimes. In this paper we propose an adjustable population size. It is calculated as a function of the runtime of the whole algorithm and…
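
Because the abstract above is truncated, the sketch below only illustrates the general idea of tying population size to runtime rather than fixing it: measure how long one local-search application takes, then size the population so a given number of generations fits a time budget. The function population_size_for_budget and its sizing rule are illustrative assumptions, not the formula proposed by Karapetyan and Gutin.

```python
import time

def seconds_per_individual(local_search, sample_solutions):
    """Measure the average wall-clock cost of one local-search application."""
    start = time.perf_counter()
    for solution in sample_solutions:
        local_search(solution)
    return (time.perf_counter() - start) / len(sample_solutions)

def population_size_for_budget(budget_s, generations, sec_per_ind,
                               min_size=2, max_size=10_000):
    """Pick a population size so that `generations` generations, each applying
    local search once per individual, roughly fit the runtime budget."""
    size = int(budget_s / (generations * sec_per_ind))
    return max(min_size, min(size, max_size))

# Hypothetical usage with a dummy stand-in for a local search procedure
dummy_local_search = lambda s: sorted(s)
cost = seconds_per_individual(dummy_local_search, [list(range(1000))] * 5)
print(population_size_for_budget(budget_s=60, generations=100, sec_per_ind=cost))
```
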
8

Tao, Jie, Martin Schulz, and Wolfgang Karl. "ARS: an adaptive runtime system for locality optimization." Future Generation Computer Systems 19, no. 5 (2003): 761–76. http://dx.doi.org/10.1016/s0167-739x(02)00183-8.

9

Bononi, L., M. Conti, and E. Gregori. "Runtime optimization of IEEE 802.11 wireless lans performance." IEEE Transactions on Parallel and Distributed Systems 15, no. 1 (2004): 66–80. http://dx.doi.org/10.1109/tpds.2004.1264787.

10

Covantes Osuna, Edgar, and Dirk Sudholt. "Runtime Analysis of Crowding Mechanisms for Multimodal Optimization." IEEE Transactions on Evolutionary Computation 24, no. 3 (2020): 581–92. http://dx.doi.org/10.1109/tevc.2019.2914606.


Dissertations / Theses on the topic "Runtime Optimization"

1

Jacobs, Joshua 1979. "Improving memory performance through runtime optimization." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87215.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (leaves 45-47). By Joshua Jacobs. M.Eng.
2

Hallou, Nabil. "Runtime optimization of binary through vectorization transformations." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S120/document.

Abstract:
Applications are not always optimized for the hardware on which they run, for example software distributed in binary form or programs deployed on compute farms. We focus on maximizing processor efficiency for SIMD extensions. We show that many loops compiled for x86 SSE can be converted dynamically into newer, more powerful AVX versions. We obtain speedups in line with those of a native compiler targeting AVX. In addition, we vectorize scalar loops at runtime. We…
3

Ozen, Guray. "Compiler and runtime based parallelization & optimization for GPUs." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/664285.

Abstract:
Graphics Processing Units (GPU) have been widely adopted to accelerate the execution of HPC workloads due to their vast computational throughput, ability to execute a large number of threads inside SIMD groups in parallel and their use of hardware multithreading to hide long pipelining and memory access latencies. There are two APIs commonly used for native GPU programming: CUDA, which only targets NVIDIA GPUs and OpenCL, which targets all types of GPUs as well as other accelerators. However these APIs only expose low-level hardware characteristics to the programmer. So developing application…
4

Shen, Yilian. "Optimization of the runtime database of a BPEL engine." [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-28373.

5

Whaley, John (John Craig) 1975. "Dynamic optimization through the use of automatic runtime specialization." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80139.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (leaves 99-115). By John Whaley. S.B. and M.Eng.
6

Lizarraga, Adrian. "Modeling and Optimization Frameworks for Runtime Adaptable Embedded Systems." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/620835.

Abstract:
The widespread adoption of embedded computing systems has resulted in the realization of numerous sensing, decision, and control applications with diverse application-specific requirements. However, such embedded systems applications are becoming increasingly difficult to design, simulate, and optimize due to the multitude of interdependent parameters that must be considered to achieve optimal, or near-optimal, performance that meets design constraints. This situation is further exacerbated for data-adaptable embedded systems (DAES) applications due to the dynamic characteristics of the deploy…
7

Västlund, Filip. "Video Flow Classification : A Runtime Performance Study." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-56621.

Abstract:
Because it is increasingly common for users' data to be encrypted, Internet service providers today find it difficult to adapt their service to users' needs. Previously popular methods of classifying user data do not work as well today, and new alternatives are therefore desired to give users an optimal experience. This study focuses specifically on classifying data flows into video and non-video flows with the use of machine learning algorithms and with a focus on runtime performance. In this study the tested algorithms are created in Python and then exported into a C code implementation…
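
As a rough illustration of the kind of pipeline the thesis describes (training a flow classifier in Python with an eye on prediction runtime), the sketch below trains a random forest on synthetic flow features and times its per-flow prediction cost. The synthetic features and labels are assumptions for the example only; the thesis's actual features, algorithms, and C export are not reproduced here.

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Made-up per-flow features (e.g. mean packet size, byte rate, inter-arrival time)
# and labels: 1 = video flow, 0 = non-video flow.
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Runtime focus: how long does one classification decision take on average?
start = time.perf_counter()
clf.predict(X_test)
per_flow_us = (time.perf_counter() - start) / len(X_test) * 1e6
print(f"accuracy={clf.score(X_test, y_test):.3f}, ~{per_flow_us:.1f} us per flow")
```
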
8

Xu, Guoqing. "Analyzing Large-Scale Object-Oriented Software to Find and Remove Runtime Bloat." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1313168204.

9

Gotin, Manuel [author], and R. H. [academic supervisor] Reussner. "QoS-Based Optimization of Runtime Management of Sensing Cloud Applications / Manuel Gotin ; Supervisor: R. H. Reussner." Karlsruhe: KIT-Bibliothek, 2021. http://d-nb.info/1235072312/34.

10

Arora, Nitin. "High performance algorithms to improve the runtime computation of spacecraft trajectories." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49076.

Abstract:
Challenging science requirements and complex space missions are driving the need for fast and robust space trajectory design and simulation tools. The main aim of this thesis is to develop new and improved high performance algorithms and solution techniques for commonly encountered problems in astrodynamics. Five major problems are considered and their state-of-the-art algorithms are systematically improved. Theoretical and methodological improvements are combined with modern computational techniques, resulting in increased algorithm robustness and faster runtime performance. The five selected…

Books on the topic "Runtime Optimization"

1

Input/Output Intensive Massively Parallel Computing: Language Support, Automatic Parallelization, Advanced Optimization, and Runtime Systems (Lecture Notes in Computer Science). Springer, 1997.


Book chapters on the topic "Runtime Optimization"

1

Resende, Mauricio G. C., and Celso C. Ribeiro. "Runtime distributions." In Optimization by GRASP. Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4939-6530-4_6.

2

Kistler, Thomas. "Dynamic runtime optimization." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-62599-2_30.

3

Köhler, Christian. "Optimization – Runtime Analysis." In Enhancing Embedded Systems Simulation. Vieweg+Teubner, 2011. http://dx.doi.org/10.1007/978-3-8348-9916-3_7.

4

Sion, Radu, and Junichi Tatemura. "Runtime Web-Service Workflow Optimization." In Lecture Notes in Business Information Processing. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19294-4_5.

5

Doerr, Benjamin. "Better Runtime Guarantees via Stochastic Domination." In Evolutionary Computation in Combinatorial Optimization. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77449-7_1.

6

Doodi, Taru, Jonathan Peyton, Jim Cownie, et al. "OpenMP® Runtime Instrumentation for Optimization." In Scaling OpenMP for Exascale Performance and Portability. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-65578-9_19.

7

Burcea, Mihai, and Michael J. Voss. "A Runtime Optimization System for OpenMP." In OpenMP Shared Memory Parallel Programming. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45009-2_4.

8

Bagnères, Lénaïc, and Cédric Bastoul. "Switchable Scheduling for Runtime Adaptation of Optimization." In Lecture Notes in Computer Science. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09873-9_19.

9

Patel, Vivek, Piyush Mishra, J. C. Patni, and Parul Mittal. "Comparison of Runtime Performance Optimization Using Template-Metaprogramming." In Communications in Computer and Information Science. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8657-1_11.

10

Gerostathopoulos, Ilias, and Alexander auf der Straße. "Online Experiment-Driven Learning and Adaptation." In Model-Based Engineering of Collaborative Embedded Systems. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62136-0_15.

Abstract:
This chapter presents an approach for the online optimization of collaborative embedded systems (CESs) and collaborative system groups (CSGs). Such systems have to adapt and optimize their behavior at runtime to increase their utilities and respond to runtime situations. We propose to model such systems as black boxes of their essential input parameters and outputs, and search efficiently in the space of input parameters for values that optimize (maximize or minimize) the system’s outputs. Our optimization approach consists of three phases and combines online (Bayesian) optimization with statistical guarantees stemming from the use of statistical methods such as factorial ANOVA, binomial testing, and t-tests in different phases. We have applied our approach in a smart cars testbed with the goal of optimizing the routing of cars by tuning the configuration of their parametric router at runtime.
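
To make the chapter's measure-compare-adapt loop concrete, the sketch below repeatedly measures a black-box system under its incumbent configuration and a candidate configuration, and switches only when a t-test indicates a significant improvement in utility. Plain random sampling stands in for the Bayesian optimizer the authors use, and the parameter names, ranges, and significance threshold are illustrative assumptions.

```python
import random
from scipy.stats import ttest_ind

# Hypothetical tunable parameters of a parametric router and their ranges.
RANGES = {"reroute_period": (1.0, 30.0), "congestion_weight": (0.0, 1.0)}

def measure(system, config, n=10):
    """Run n online experiments with a configuration and return observed utilities."""
    return [system(config) for _ in range(n)]

def adapt_online(system, config, rounds=20, alpha=0.05):
    best, best_obs = config, measure(system, config)
    for _ in range(rounds):
        candidate = {k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
        cand_obs = measure(system, candidate)
        _, p = ttest_ind(cand_obs, best_obs)
        better = sum(cand_obs) / len(cand_obs) > sum(best_obs) / len(best_obs)
        if p < alpha and better:            # adopt only significant improvements
            best, best_obs = candidate, cand_obs
    return best

# Noisy stand-in for the real system's utility (higher is better).
def noisy_utility(cfg):
    return -abs(cfg["reroute_period"] - 10) - cfg["congestion_weight"] + random.gauss(0, 0.5)

print(adapt_online(noisy_utility, {"reroute_period": 5.0, "congestion_weight": 0.5}))
```
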

Conference papers on the topic "Runtime Optimization"

1

Stolze, Thomas, and Klaus-Dietrich Kramer. "Runtime Optimization of Generated Code." In InSITE 2014: Informing Science + IT Education Conference. Informing Science Institute, 2014. http://dx.doi.org/10.28945/2019.

2

Gorti, Naga Pavan Kumar, and Arun K. Somani. "Runtime optimization utilizing program structure." In 2012 18th Annual International Conference on Advanced Computing and Communications (ADCOM). IEEE, 2012. http://dx.doi.org/10.1109/adcom.2012.6563583.

3

Hertzberg, Ben, and Kunle Olukotun. "Runtime automatic speculative parallelization." In 2011 9th Annual IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2011. http://dx.doi.org/10.1109/cgo.2011.5764675.

4

Wang, Weihu, and Gang Huang. "Pattern-driven performance optimization at runtime." In the 9th International Workshop. ACM Press, 2010. http://dx.doi.org/10.1145/1891701.1891707.

5

Gabriel, Edgar, and Shuo Huang. "Runtime Optimization of Application Level Communication Patterns." In 2007 IEEE International Parallel and Distributed Processing Symposium. IEEE, 2007. http://dx.doi.org/10.1109/ipdps.2007.370406.

6

Teller, Justin, Fusun Ozguner, and Robert Ewing. "Optimization at runtime on a nanoprocessor architecture." In 2008 51st IEEE International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE, 2008. http://dx.doi.org/10.1109/mwscas.2008.4616941.

7

Tang, Xun, Xin Jin, and Tao Yang. "Cache-conscious runtime optimization for ranking ensembles." In SIGIR '14: The 37th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2014. http://dx.doi.org/10.1145/2600428.2609525.

8

Li, Guangli, Lei Liu, and Xiaobing Feng. "Accelerating GPU Computing at Runtime with Binary Optimization." In 2019 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2019. http://dx.doi.org/10.1109/cgo.2019.8661168.

9

Porterfield, Allan, Rob Fowler, Sridutt Bhalachandra, Barry Rountree, Diptorup Deb, and Rob Lewis. "Application Runtime Variability and Power Optimization for Exascale Computers." In ROSS '15: International Workshop on Runtime and Operating Systems for Supercomputers. ACM, 2015. http://dx.doi.org/10.1145/2768405.2768408.

10

Theocharides, Theocharis, Maria K. Michael, Marios Polycarpou, and Ajit Dingankar. "Towards embedded runtime system level optimization for MPSoCs." In the 19th ACM Great Lakes symposium. ACM Press, 2009. http://dx.doi.org/10.1145/1531542.1531573.


Reports on the topic "Runtime Optimization"

1

Badia, R., J. Ejarque, S. Böhm, C. Soriano, and R. Rossi. D4.4 API and runtime (complete with documentation and basic unit testing) for IO employing fast local storage. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.9.001.

Abstract:
This deliverable presents the activities performed on the ExaQUte project task 4.5 Development of interface to fast local storage. The activities have been focused in two aspects: reduction of the storage space used by applications and design and implementation of an interface that optimizes the use of fast local storage by MPI simulations involved in the project applications. In the first case, for one of the environments involved in the project (PyCOMPSs) the default behavior is to keep all intermediate files until the end of the execution, in case these files are reused later by any additional task…