Academic literature on the topic 'Performance tuning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Performance tuning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Performance tuning"

1

Kavitha, S. N. "Tuning SQL Queries for Better Performance." International Journal of Psychosocial Rehabilitation 24, no. 5 (2020): 7002–5. http://dx.doi.org/10.37200/ijpr/v24i5/pr2020703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huynh, Andy, Harshal A. Chaudhari, Evimaria Terzi, and Manos Athanassoulis. "Endure." Proceedings of the VLDB Endowment 15, no. 8 (2022): 1605–18. http://dx.doi.org/10.14778/3529337.3529345.

Full text
Abstract:
Log-Structured Merge trees (LSM trees) are increasingly used as the storage engines behind several data systems, frequently deployed in the cloud. Similar to other database architectures, LSM trees consider information about the expected workload (e.g., reads vs. writes, point vs. range queries) to optimize their performance via tuning. However, operating in a shared infrastructure like the cloud comes with workload uncertainty due to the fast-evolving nature of modern applications. Systems with static tuning discount the variability of such hybrid workloads and hence provide an inconsistent and overall suboptimal performance. To address this problem, we introduce Endure - a new paradigm for tuning LSM trees in the presence of workload uncertainty. Specifically, we focus on the impact of the choice of compaction policies, size ratio, and memory allocation on the overall performance. Endure considers a robust formulation of the throughput maximization problem and recommends a tuning that maximizes the worst-case throughput over the neighborhood of each expected workload. Additionally, an uncertainty tuning parameter controls the size of this neighborhood, thereby allowing the output tunings to be conservative or optimistic. Through both model-based and extensive experimental evaluations of Endure in the state-of-the-art LSM-based storage engine, RocksDB, we show that the robust tuning methodology consistently outperforms classical tuning strategies. The robust tunings output by Endure lead up to a 5X improvement in throughput in the presence of uncertainty. On the flip side, Endure tunings have negligible performance loss when the observed workload exactly matches the expected one.
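The robust formulation in the abstract above can be illustrated with a small sketch (a toy cost model in Python, not Endure's actual LSM cost model; the tuning parameters and the one-number workload encoding are simplified assumptions): classical tuning maximizes throughput at the expected workload, while robust tuning maximizes the worst-case throughput over a neighborhood whose size is controlled by an uncertainty parameter rho.

```python
import itertools

def throughput(tuning, read_frac):
    """Toy cost model: tuning = (size_ratio, memory_fraction); the
    workload is reduced to a single read fraction. Illustrative only."""
    size_ratio, mem_frac = tuning
    read_cost = 0.1 + (1.0 - mem_frac) * size_ratio / 10.0
    write_cost = (1.0 + size_ratio / 20.0) * (1.0 - 0.5 * mem_frac)
    return 1.0 / (read_frac * read_cost + (1.0 - read_frac) * write_cost)

def worst_case(tuning, expected, rho, step=0.05):
    """Minimum throughput over the rho-neighborhood of the expected workload."""
    grid = [i * step for i in range(int(1 / step) + 1)]
    return min(throughput(tuning, w) for w in grid if abs(w - expected) <= rho)

tunings = list(itertools.product([2, 4, 8, 16], [0.1, 0.3, 0.5]))
nominal = max(tunings, key=lambda t: throughput(t, 0.7))      # classical tuning
robust = max(tunings, key=lambda t: worst_case(t, 0.7, 0.3))  # robust, Endure-style
```

By construction, the robust choice can never have a lower worst-case throughput than the classical one over the same neighborhood; the paper's contribution is obtaining such tunings analytically from real LSM cost models rather than by grid search.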
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Chun, Jacqueline Chame, Yoonju Lee Nelson, Pedro Diniz, Mary Hall, and Robert Lucas. "Compiler-assisted performance tuning." Journal of Physics: Conference Series 78 (July 1, 2007): 012024. http://dx.doi.org/10.1088/1742-6596/78/1/012024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Atkins, M., and R. Subramaniam. "PC software performance tuning." Computer 29, no. 8 (1996): 47–54. http://dx.doi.org/10.1109/2.532045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yarbrough, Cornelia, Brant Karrick, and Steven J. Morrison. "Effect of Knowledge of Directional Mistunings on the Tuning Accuracy of Beginning and Intermediate Wind Players." Journal of Research in Music Education 43, no. 3 (1995): 232–41. http://dx.doi.org/10.2307/3345638.

Full text
Abstract:
The purpose of this research was to study the effect of knowledge of directional mistunings on the tuning accuracy of beginning and intermediate wind players. Subjects (N = 197) were instrumental wind players who tuned to either an F or a B-flat with both their own instrument—a performance task—and the tuning knob of a variable-pitch keyboard—a perception task. The subjects were randomly assigned to one of three treatment groups: Group 1 knew that their instruments and the tuning knob were mistuned in the sharp direction; Group 2 knew that their instruments and the tuning knob were mistuned in the flat direction; and Group 3 had no information regarding the direction of mistunings. Data demonstrated that only years of instruction significantly affected subjects' tuning accuracy. There were no significant differences due to treatment, instrument type, or tuning pitch. There were only 6 in-tune performance responses and 12 in-tune perception responses. Approaching the target pitch from above resulted in more sharp responses; approaching it from below resulted in more flat responses; and having no knowledge of the direction of mistuning resulted in an equal number of sharp and flat responses. There were a greater number of flat responses in the first year of instruction and a greater number of sharp responses in the fourth year. Finally, there was consistent improvement from the first to the fourth year in both perception and performance tuning tasks.
APA, Harvard, Vancouver, ISO, and other styles
6

Tippabhotla, Srikanth Kumar. "Performance Tuning of Data Warehouse." International Journal of Computer Applications Technology and Research 6, no. 1 (2017): 38–41. http://dx.doi.org/10.7753/ijcatr0601.1007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shasha, Dennis. "Tuning databases for high performance." ACM Computing Surveys 28, no. 1 (1996): 113–15. http://dx.doi.org/10.1145/234313.234363.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Singhal, A., and A. J. Goldberg. "Architectural support for performance tuning." ACM SIGARCH Computer Architecture News 22, no. 2 (1994): 48–59. http://dx.doi.org/10.1145/192007.192016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Qing-Guo, Tong-Heng Lee, Ho-Wang Fung, Qiang Bi, and Yu Zhang. "PID tuning for improved performance." IEEE Transactions on Control Systems Technology 7, no. 4 (1999): 457–65. http://dx.doi.org/10.1109/87.772161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Piovoso, M. J., and J. Alpigini. "Controller tuning via performance maps." ISA Transactions 46, no. 4 (2007): 541–53. http://dx.doi.org/10.1016/j.isatra.2004.09.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Performance tuning"

1

Chung, I.-Hsin. "Towards automatic performance tuning." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/2037.

Full text
Abstract:
Thesis (Ph.D.) -- University of Maryland, College Park, 2004. Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
2

Mohror, Kathryn Marie. "Infrastructure For Performance Tuning MPI Applications." PDXScholar, 2004. https://pdxscholar.library.pdx.edu/open_access_etds/2660.

Full text
Abstract:
Clusters of workstations are becoming increasingly popular as a low-budget alternative for supercomputing power. In these systems, message-passing is often used to allow the separate nodes to act as a single computing machine. Programmers of such systems face a daunting challenge in understanding the performance bottlenecks of their applications. This is largely due to the vast amount of performance data that is collected, and the time and expertise necessary to use traditional parallel performance tools to analyze that data. The goal of this project is to increase the level of performance tool support for message-passing application programmers on clusters of workstations. We added support for LAM/MPI into the existing parallel performance tool, Paradyn. LAM/MPI is a commonly used, freely-available implementation of the Message Passing Interface (MPI), and also includes several newer MPI features, such as dynamic process creation. In addition, we added support for non-shared filesystems into Paradyn and enhanced the existing support for the MPICH implementation of MPI. We verified that Paradyn correctly measures the performance of the majority of LAM/MPI programs on Linux clusters and show the results of those tests. In addition, we discuss MPI-2 features that are of interest to parallel performance tool developers and design support for these features for Paradyn.
APA, Harvard, Vancouver, ISO, and other styles
3

Sikaundi, Jaston. "The internal performance of iterative feedback tuning." Master's thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/14700.

Full text
Abstract:
Includes bibliographical references (p. 113-115). Under certain conditions Iterative Feedback Tuning (IFT) may produce a controller that cancels the poles of the process and, as a result, can give a closed loop that has poor internal performance. The disadvantage of this is that the closed loop will have poor input disturbance rejection. A solution for ensuring that IFT does not have poor internal performance is to make sure that the input disturbance rejection is adequate. However, an adequate input disturbance response may lead to other undesirable dynamics in the closed-loop performance, such as overshoot in the responses for setpoint tracking and output disturbance rejection. On the other hand, the advantage of pole shifting is that for a one-degree-of-freedom control structure all the characteristic equations of the loop transfer functions will be the same. Four methods are proposed for avoiding pole-zero cancellation by concentrating on the input disturbance: using a model for input disturbance rejection, time-weighted IFT for disturbance rejection, a setpoint-tracking model with overshoot, and approximate pole placement IFT. Approximate pole placement IFT was chosen as the best method, because the dynamics of the closed loop can be specified through the choice of characteristic equation. This method was then investigated further to establish its feasibility on a physical system. After this evaluation, it was applied to a DC motor for speed control to show that it is viable in practice. Multiple experiments showed that this method does not produce a controller that cancels the process poles, confirming it as a good solution for preventing poor internal performance.
APA, Harvard, Vancouver, ISO, and other styles
4

César Galobardes, Eduardo. "Definition of Framework-based Performance Models for Dynamic Performance Tuning." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5760.

Full text
Abstract:
Parallel and distributed programming constitutes a highly promising approach to improving the performance of many applications. However, in comparison to sequential programming, many new problems arise in all phases of the development cycle of this kind of application. For example, in the analysis phase of parallel/distributed programs, the programmer has to decompose the problem (data and/or code) to find the concurrency of the algorithm. In the design phase, the programmer has to be aware of the communication and synchronization conditions between tasks. In the implementation phase, the programmer has to learn how to use specific communication libraries and runtime environments, but also to find a way of debugging programs. Finally, to obtain the best performance, the programmer has to tune the application by using monitoring tools, which collect information about the application's behavior. Tuning can be a very difficult task because it can be difficult to relate the information gathered by the monitor to the application's source code. Moreover, tuning can be even more difficult for those applications that change their behavior dynamically because, in this case, a problem might happen or not depending on the execution conditions. It can be seen that these issues require a high degree of expertise, which prevents the more widespread use of this kind of solution. One of the best ways to solve these problems would be to develop, as has been done in sequential programming, tools to support the analysis, design, coding, and tuning of parallel/distributed applications. In the particular case of performance analysis and/or tuning, it is important to note that the best way of analyzing and tuning parallel/distributed applications depends on some of their behavioral characteristics. If the application to be tuned behaves in a regular way, then a static analysis (predictive or trace-based) would be enough to find the application's performance bottlenecks and to indicate what should be done to overcome them. However, if the application changes its behavior from execution to execution, or even dynamically changes its behavior in a single execution, then static analysis cannot offer efficient solutions for avoiding performance bottlenecks. In this case, dynamic monitoring and tuning techniques should be used instead. However, in dynamic monitoring and tuning, decisions must be taken efficiently, which means that the application's performance analysis outcome must be accurate and punctual in order to effectively tackle problems; at the same time, intrusion on the application must be minimized, because the instrumentation inserted into the application in order to monitor and tune it alters its behavior and could introduce performance problems that were not there before the instrumentation. This is more difficult to achieve if there is no information about the structure and behavior of the application; therefore, blind automatic dynamic tuning approaches have limited success, whereas cooperative dynamic tuning approaches can cope with more complex problems at the cost of asking for user collaboration. We have proposed a third approach. If a programming tool, based on the use of skeletons or frameworks, has been used in the development of the application, then much information about the structure and behavior of the application is available, and a performance model associated with the structure of the application can be defined for use by the dynamic tuning tool. The resulting tuning tool should produce the outcome of a collaborative one while behaving like an automatic one from the point of view of the application developer.
APA, Harvard, Vancouver, ISO, and other styles
5

Collins, Alexander James. "Cooperative auto-tuning of parallel skeletons." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/15791.

Full text
Abstract:
Improving program performance through the use of multiple homogeneous processing elements, or cores, is common-place. However, these architectures increase the complexity required at the software level. Existing work is focused on optimising programs that run in isolation on these systems, but ignores the fact that, in reality, these systems run multiple parallel programs concurrently with programs competing for system resources. In order to improve performance in this shared environment, cooperative tuning of multiple, concurrently running parallel programs is required. Moreover, the set of programs running on the system – the system workload – is dynamic and rapidly changing. This makes cooperative tuning a challenge, as it must react rapidly to changes in the system workload. This thesis explores the scope for performance improvement from cooperatively tuning skeleton parallel programs, and techniques that can be used to cooperatively auto-tune parallel programs. Parallel skeletons provide a clear separation between algorithm description and implementation, and provide tuning knobs that the system can use to make high-level changes to a program's implementation. This work is in three parts: (i) how many threads should be allocated to each program running on the system, (ii) on which cores should a program's threads be executed and (iii) what values should be chosen for high-level parameters of the parallel skeletons. We demonstrate that significant performance improvements are available in each of these areas, compared to the current state-of-the-art.
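Part (i) of the abstract above, deciding how many threads each concurrently running program gets, can be sketched with a deliberately simple proportional policy (a hypothetical illustration in Python; the thesis evaluates richer, adaptive strategies):

```python
def allocate_threads(total_cores, demands):
    """Split cores among concurrently running programs in proportion to
    their stated demands, giving every program at least one thread.
    A toy cooperative policy, not the thesis's actual tuner."""
    total_demand = sum(demands)
    alloc = [max(1, total_cores * d // total_demand) for d in demands]
    # Integer rounding may over-allocate; trim the largest shares first.
    while sum(alloc) > total_cores:
        alloc[alloc.index(max(alloc))] -= 1
    # Rounding may also leave cores unused; give them to the largest demand.
    while sum(alloc) < total_cores:
        alloc[demands.index(max(demands))] += 1
    return alloc
```

The point of a cooperative policy is exactly this global view: each program's allocation is computed from the whole workload, and must be recomputed whenever the workload changes.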
APA, Harvard, Vancouver, ISO, and other styles
6

Han, Xue. "CONFPROFITT: A CONFIGURATION-AWARE PERFORMANCE PROFILING, TESTING, AND TUNING FRAMEWORK." UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/84.

Full text
Abstract:
Modern computer software systems are complicated. Developers can change the behavior of a software system through software configurations. The large number of configuration options and their interactions make the tasks of software tuning, testing, and debugging very challenging. Performance is one of the key non-functional qualities, and performance bugs can cause significant performance degradation and lead to poor user experience. However, performance bugs are difficult to expose, primarily because detecting them requires specific inputs as well as specific configurations. While researchers have developed techniques to analyze, quantify, detect, and fix performance bugs, many of these techniques are not effective in highly-configurable systems. To improve the non-functional qualities of configurable software systems, testing engineers need to be able to understand the performance influence of configuration options, adjust the performance of a system under different configurations, and detect configuration-related performance bugs. This research provides an automated framework that allows engineers to effectively analyze performance-influencing configuration options, detect performance bugs in highly-configurable software systems, and adjust configuration options to achieve higher long-term performance gains. To understand real-world performance bugs in highly-configurable software systems, we first perform a study of performance bug characteristics in three large-scale open-source projects. Many researchers have studied the characteristics of performance bugs from bug reports, but few have reported on the experience of trying to replicate confirmed performance bugs from the perspective of non-domain experts such as researchers. This study reports the challenges of replicating confirmed performance bugs and potential workarounds. We also share a performance benchmark that provides real-world performance bugs for evaluating future performance testing techniques. Inspired by our performance bug study, we propose a performance profiling approach that can help developers understand how configuration options and their interactions influence the performance of a system. The approach uses a combination of dynamic analysis and machine learning techniques, together with configuration sampling techniques, to profile program execution and analyze configuration options relevant to performance. Next, the framework leverages natural language processing and information retrieval techniques to automatically generate test inputs and configurations that expose performance bugs. Finally, the framework combines reinforcement learning and dynamic state reduction techniques to guide the subject application towards higher long-term performance gains.
APA, Harvard, Vancouver, ISO, and other styles
7

Desai, Harit S. "Evaluation and Tuning of Gigabit Ethernet performance on Clusters." Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1185819165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ozarde, Sarang Anil. "Performance understanding and tuning of iterative computation using profiling techniques." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34757.

Full text
Abstract:
Most applications spend a significant amount of time in the iterative parts of a computation. They typically iterate over the same set of operations with different values. These values either depend on inputs or on values calculated in previous iterations. While loops capture some iterative behavior, in many cases such behavior is spread over the whole program, sometimes through recursion. Understanding the iterative behavior of a computation can be very useful for fine-tuning it. In this thesis, we present a profiling-based framework to understand and improve the performance of iterative computation. We capture the state of iterations in two aspects: (1) algorithmic state and (2) program state. We demonstrate the applicability of our framework for capturing algorithmic state by applying it to SAT solvers, and program state by applying it to a variety of benchmarks exhibiting completely parallelizable loops. Further, we show that such a performance characterization can be successfully used to improve the performance of the underlying application. Many high-performance combinatorial optimization applications involve SAT solving. A variety of SAT solvers have been developed that employ different data structures and different propagation methods for converging on a fixed point for generating a satisfiable solution. The performance debugging and tuning of SAT solvers for a given domain is an important problem encountered in practice. Unfortunately, not much work has been done to quantify the iterative efficiency of SAT solvers. In this work, we develop quantifiable measures for calculating the convergence efficiency of SAT solvers. Here, we capture the algorithmic state of the application by tracking the assignment of variables in each iteration. A compact representation of profile data is developed to track the rate of progress and convergence. The novelty of this approach is that it is independent of the specific strategies used in individual solvers, yet it gives key insights into the "progress" and "convergence behavior" of the solver in terms of a specific implementation at hand. An analysis tool interprets the profile data and extracts metrics such as average convergence rate, iteration efficiency, and variable stabilization. Finally, using this system, we produce a study of four well-known SAT solvers, comparing their iterative efficiency on random as well as industrial benchmarks. Using the framework, iterative inefficiencies that lead to slow convergence are identified. We also show how to fine-tune the solvers by adapting key steps. We also show that a similar profile data representation can easily be applied to loops in general to capture their program state. One of the key attributes of the program state inside loops is their branch behavior. We demonstrate the applicability of the framework by profiling completely parallelizable loops (no cross-iteration dependence) and by storing the branching behavior of each iteration. The branch behavior across a group of iterations is important in devising thread warps from parallel loops for efficient execution on GPUs. We show how some loops can be effectively parallelized on GPUs using this information.
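The idea of tracking per-iteration variable assignments to measure convergence, as described in the abstract above, can be sketched as follows (hypothetical metric definitions in Python illustrating the idea; the thesis defines its own exact measures):

```python
def stabilization_points(history):
    """For each variable, the last iteration at which its assigned value
    changed -- after this point the variable has 'stabilized'."""
    n_vars = len(history[0])
    points = []
    for v in range(n_vars):
        values = [assignment[v] for assignment in history]
        last_change = 0
        for i in range(1, len(values)):
            if values[i] != values[i - 1]:
                last_change = i
        points.append(last_change)
    return points

def average_convergence_rate(history):
    """Mean fraction of variables left unchanged between consecutive
    iterations -- a simple proxy for the solver's rate of progress."""
    n_vars = len(history[0])
    fractions = [sum(a == b for a, b in zip(prev, cur)) / n_vars
                 for prev, cur in zip(history, history[1:])]
    return sum(fractions) / len(fractions)

# One 0/1 assignment per iteration for three variables.
history = [(0, 0, 1), (0, 1, 1), (0, 1, 0), (0, 1, 0)]
```

Metrics of this shape are solver-independent: they only consume the per-iteration assignment trace, not the solver's internal heuristics, which is what makes cross-solver comparison possible.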
APA, Harvard, Vancouver, ISO, and other styles
9

Argollo de Oliveira Dias Júnior, Eduardo. "Performance prediction and tuning in a multi-cluster environment." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5761.

Full text
Abstract:
Clusters of computers represent an alternative for speeding up scientific applications. Nevertheless, applications grow in complexity and need more resources. Joining distributed clusters over the Internet into a multi-cluster can provide those resources. A problem in reaching effective collaboration among multiple clusters is the increase in computation and communication heterogeneity. This factor increases the complexity of such a system and makes it harder to use. The goal of this thesis is to reduce the execution time of applications, originally written for a single cluster, by efficiently using a multi-cluster. In order to reach this goal, we propose a system architecture, an analytical model, and a performance prediction and tuning methodology. The proposed system architecture aims to obtain a multi-cluster virtual machine that is transparent to the application and that provides scalability and robustness, tolerating possible faults in the Internet communication between clusters. This architecture is organized around a hierarchical master-worker scheme with communication managers. Communication managers are a key element, responsible for the robustness, security, and transparency of the communication between clusters over the Internet. The analytical performance model was developed to estimate the execution time and efficiency of an application executing in a multi-cluster. The precision of the estimations is over 90%. The proposed performance prediction and application tuning methodology is a procedure that defines the steps to predict the execution time and efficiency, to guarantee an efficiency threshold by selecting the adequate resources, and to guide the application tuning by evaluating the execution bottlenecks.
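The kind of analytical estimate described above can be sketched with an intentionally simple model (an assumed functional form in Python, not the thesis's actual equations): total time is the compute time across the heterogeneous clusters plus the Internet communication time, and efficiency is the ratio of ideal compute time to the predicted total.

```python
def predicted_time(total_work, cluster_powers, inter_bandwidth, data_volume):
    """Predicted execution time: work spread over heterogeneous clusters
    (powers in work-units/s) plus inter-cluster transfer over the Internet."""
    compute_time = total_work / sum(cluster_powers)
    comm_time = data_volume / inter_bandwidth
    return compute_time + comm_time

def efficiency(total_work, cluster_powers, inter_bandwidth, data_volume):
    """Ratio of ideal (communication-free) time to the predicted time."""
    ideal = total_work / sum(cluster_powers)
    return ideal / predicted_time(total_work, cluster_powers,
                                  inter_bandwidth, data_volume)
```

For example, 1000 units of work on clusters of power 100, 150, and 250 units/s with 100 MB to ship at 50 MB/s predicts 4.0 s at 50% efficiency; a methodology like the one described would reject resource selections whose predicted efficiency falls below a chosen threshold.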
APA, Harvard, Vancouver, ISO, and other styles
10

Grythe, Knut Auvor. "Automated tuning of MapReduce performance in Vespa Document Store." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9584.

Full text
Abstract:
MapReduce is a programming model for distributed processing, originally designed by Google Inc. It is designed to simplify the implementation and deployment of distributed programs. Vespa Document Store (VDS) is a distributed document storage solution developed by Yahoo! Technologies Norway. VDS does not currently have any feature allowing distributed aggregation of data. Therefore, a prototype of the MapReduce distributed programming model was previously developed. However, the implementation requires manual tuning of several parameters before each deployment. The goal of this thesis is to allow as many of these parameters as possible to be either automatically configured or set to universally suitable defaults. We have created a working MapReduce implementation based on previous work, and a framework for monitoring VDS nodes. Various VDS features have been documented in detail, and this documentation has been used to analyse how the performance of these features may be improved. We have also performed various experiments to validate the analysis and gain additional insight. Numerous configuration options for either VDS in general or the MapReduce implementation have been considered, and recommended settings have been proposed. The propositions are either in the form of default values or algorithms for computing the most suitable setting. Finally, we provide a list of suggested further work, with suggestions for both general VDS improvements and MapReduce-specific research.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Performance tuning"

1

Harrison, Guy, and Michael Harrison. MongoDB Performance Tuning. Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6879-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Loukides, Michael Kosta. System performance tuning. O'Reilly, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pelzer, Trudy, ed. SQL performance tuning. Addison-Wesley, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Roy, Shaibal. Sybase performance tuning. Prentice Hall, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Waters, Frank. AIX performance tuning. Prentice Hall PTR, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Loukides, Michael Kosta. System performance tuning. O'Reilly & Associates, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gurry, Mark. Oracle Performance Tuning. 2nd ed. O'Reilly & Associates, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Corrigan, Peter. Oracle Performance Tuning. O'Reilly & Associates, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Loukides, Mike. System performance tuning. O'Reilly, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Loukides, Michael Kosta, ed. System performance tuning. 2nd ed. O'Reilly, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Performance tuning"

1

Antonio, Cássio de Sousa. "Performance Tuning." In Pro React. Apress, 2015. http://dx.doi.org/10.1007/978-1-4842-1260-8_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schwichtenberg, Holger. "Performance Tuning." In Modern Data Access with Entity Framework Core. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3552-2_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Natarajan, Jay, Rudi Bruchez, Scott Shaw, and Michael Coles. "Performance Tuning." In Pro T-SQL 2012 Programmer’s Guide. Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-4597-1_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cebollero, Miguel, Jay Natarajan, and Michael Coles. "Performance Tuning." In Pro T-SQL Programmer's Guide. Apress, 2015. http://dx.doi.org/10.1007/978-1-4842-0145-9_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Klein, Scott, and Herve Roggero. "Performance Tuning." In Pro Sql Azure. Apress, 2010. http://dx.doi.org/10.1007/978-1-4302-2962-9_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chin, Stephen, Dean Iverson, Oswald Campesato, and Paul Trani. "Performance Tuning." In Pro Android Flash. Apress, 2011. http://dx.doi.org/10.1007/978-1-4302-3232-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pilgrim, Mark. "Performance Tuning." In Dive Into Python. Apress, 2004. http://dx.doi.org/10.1007/978-1-4302-0700-9_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Klein, Scott, and Herve Roggero. "Performance Tuning." In Pro SQL Database for Windows Azure. Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-4396-0_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9. Fröhlich, Lutz. "Performance Tuning." In PostgreSQL. Carl Hanser Verlag GmbH & Co. KG, 2022. http://dx.doi.org/10.3139/9783446473157.013.

10. Fröhlich, Lutz. "Performance Tuning." In PostgreSQL 10. Carl Hanser Verlag GmbH & Co. KG, 2018. http://dx.doi.org/10.3139/9783446456419.011.

Conference papers on the topic "Performance tuning"

1. Evang, Jan Marius, Alojz Gomola, and Tarik Čičić. "Anycast Metrics and Performance Tuning." In 2024 International Conference on Software, Telecommunications and Computer Networks (SoftCOM). IEEE, 2024. http://dx.doi.org/10.23919/softcom62040.2024.10721670.

2. Suetterlein, Joshua, Stephen J. Young, Jesun Firoz, et al. "HPC Network Simulation Tuning via Automatic Extraction of Hardware Parameters." In 2024 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2024. https://doi.org/10.1109/hpec62836.2024.10938439.

3. Johnson, Jeremy R. "Automated performance tuning." In the 4th International Workshop. ACM Press, 2010. http://dx.doi.org/10.1145/1837210.1837215.

4. Guillen, Carla, Carmen Navarrete, David Brayford, Wolfram Hesse, and Matthias Brehm. "DVFS automatic tuning plugin for energy related tuning objectives." In 2016 2nd International Conference on Green High Performance Computing (ICGHPC). IEEE, 2016. http://dx.doi.org/10.1109/icghpc.2016.7508061.

5. Masterson, Rebecca, and David Miller. "Hardware Tuning for Dynamic Performance Through Isoperformance Tuning." In 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference; 14th AIAA/ASME/AHS Adaptive Structures Conference; 7th. American Institute of Aeronautics and Astronautics, 2006. http://dx.doi.org/10.2514/6.2006-2277.

6. Hoefler, Torsten, William Gropp, William Kramer, and Marc Snir. "Performance modeling for systematic performance tuning." In State of the Practice Reports. ACM Press, 2011. http://dx.doi.org/10.1145/2063348.2063356.

7. Paul, Sujni. "Tuning the Library Performance." In 2015 International Conference on Developments of E-Systems Engineering (DeSE). IEEE, 2015. http://dx.doi.org/10.1109/dese.2015.63.

8. Pautre, F., A. Hincelin, and N. Moller. "Energy Tuning: Methodology and Exploration." In Eighth EAGE High Performance Computing Workshop. European Association of Geoscientists & Engineers, 2024. https://doi.org/10.3997/2214-4609.2024636022.

9. Campbell, Ian, Merfyn Owen, Clay Oliver, and Giorgio Provinciali. "Tuning of Appendages for an Imoca60 Yacht." In High Performance Yacht Design. RINA, 2012. http://dx.doi.org/10.3940/rina.hpyd.2012.16.

10. Mijakovic, Robert, Michael Firbach, and Michael Gerndt. "An architecture for flexible auto-tuning: The Periscope Tuning Framework 2.0." In 2016 2nd International Conference on Green High Performance Computing (ICGHPC). IEEE, 2016. http://dx.doi.org/10.1109/icghpc.2016.7508066.

Reports on the topic "Performance tuning"

1. Jones, Philip. Accelerator Performance Tuning for E3SM. Office of Scientific and Technical Information (OSTI), 2023. http://dx.doi.org/10.2172/1963614.

2. Hall, Mary. TUNE: Compiler-Directed Automatic Performance Tuning. Office of Scientific and Technical Information (OSTI), 2014. http://dx.doi.org/10.2172/1156961.

3. Mohror, Kathryn. Infrastructure For Performance Tuning MPI Applications. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.2661.

4. Chame, Jacqueline. Compiler-Directed Automatic Performance Tuning (TUNE) Final Report. Office of Scientific and Technical Information (OSTI), 2013. http://dx.doi.org/10.2172/1082750.

5. Kolev, T. Performance tuning of CEED software and first wave apps. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1845653.

6. Tierney, Brian, and Dan Gunter. NetLogger: A toolkit for distributed system performance tuning and debugging. Office of Scientific and Technical Information (OSTI), 2002. http://dx.doi.org/10.2172/924785.

7. Zieb, Kristofer James. Simulation-Informed Performance Tuning for Monte Carlo Proton Transport on GPUs. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1329825.

8. Kolev, T. Performance tuning of CEED software and 1st and 2nd wave apps. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1845636.

9. Laros, James H., III. Measuring and tuning energy efficiency on large scale high performance computing platforms. Office of Scientific and Technical Information (OSTI), 2011. http://dx.doi.org/10.2172/1035312.

10. Montoya, Miguel A., Daniela Betancourt-Jiminez, Mohammad Notani, et al. Environmentally Tuning Asphalt Pavements Using Phase Change Materials. Purdue University, 2022. http://dx.doi.org/10.5703/1288284317369.

Abstract: Environmental conditions are an important factor influencing asphalt pavement performance. The addition of modifiers, both to the asphalt binder and the asphalt mixture, has attracted considerable attention as a way to alleviate environmentally induced pavement performance issues. Although many solutions have been developed, and some deployed, many asphalt pavements continue to fail prematurely due to environmental loading. The research reported herein investigates the synthesis and characterization of biobased Phase Change Materials (PCMs) and the inclusion of microencapsulated PCM (μPCM) in asphalt binders and mixtures to help reduce environmental damage to asphalt pavements. In general, PCM substances are formulated to absorb and release thermal energy as the material liquefies and solidifies, depending on pavement temperature. As a result, PCMs can provide asphalt pavements with thermal energy storage capacity that reduces the impact of drastic ambient temperature swings and minimizes the occurrence of critical temperatures within the pavement structure. By modifying asphalt pavement materials with PCMs, it may be possible to "tune" the pavement to the environment.
