Journal articles on the topic 'Benchmark functions'

Consult the top 50 journal articles for your research on the topic 'Benchmark functions.'


1

De Jaegher, Kris. "Benchmark Two-Good Utility Functions." Manchester School 76, no. 1 (2007): 44–65. http://dx.doi.org/10.1111/j.1467-9957.2007.01049.x.

2

Grambow, Martin, Christoph Laaber, Philipp Leitner, and David Bermbach. "Using application benchmark call graphs to quantify and improve the practical relevance of microbenchmark suites." PeerJ Computer Science 7 (May 28, 2021): e548. http://dx.doi.org/10.7717/peerj-cs.548.

Abstract:
Performance problems in applications should ideally be detected as soon as they occur, i.e., directly when the causing code modification is added to the code repository. To this end, complex and cost-intensive application benchmarks or lightweight but less relevant microbenchmarks can be added to existing build pipelines to ensure performance goals. In this paper, we show how the practical relevance of microbenchmark suites can be improved and verified based on the application flow during an application benchmark run. We propose an approach to determine the overlap of common function calls between application and microbenchmarks, describe a method which identifies redundant microbenchmarks, and present a recommendation algorithm which reveals relevant functions that are not covered by microbenchmarks yet. A microbenchmark suite optimized in this way can easily test all functions determined to be relevant by application benchmarks after every code change, thus significantly reducing the risk of undetected performance problems. Our evaluation using two time series databases shows that, depending on the specific application scenario, application benchmarks cover different functions of the system under test. Their respective microbenchmark suites cover between 35.62% and 66.29% of the functions called during the application benchmark, offering substantial room for improvement. Through two use cases—removing redundancies in the microbenchmark suite and recommendation of yet uncovered functions—we decrease the total number of microbenchmarks and increase the practical relevance of both suites. Removing redundancies can significantly reduce the number of microbenchmarks (and thus the execution time as well) to ~10% and ~23% of the original microbenchmark suites, whereas recommendation identifies up to 26 and 14 not-yet-covered functions to benchmark to improve the relevance. By utilizing the differences and synergies of application benchmarks and microbenchmarks, our approach potentially enables effective software performance assurance with performance tests of multiple granularities.
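The overlap, redundancy, and recommendation steps described in this abstract can be sketched as plain set operations over per-benchmark call sets. The function and benchmark names below are invented for illustration; this is not the paper's implementation:

```python
def coverage(app_calls, micro_suites):
    """Fraction of functions called during the application benchmark
    that at least one microbenchmark also exercises."""
    micro_calls = set().union(*micro_suites.values())
    return len(app_calls & micro_calls) / len(app_calls)

def redundant(micro_suites):
    """Microbenchmarks whose call set is a strict subset of another's."""
    return {a for a in micro_suites for b in micro_suites
            if a != b and micro_suites[a] < micro_suites[b]}

def recommend(app_calls, micro_suites, k=3):
    """Up to k functions hit by the application benchmark but by no
    microbenchmark (a real recommender would rank by relevance)."""
    uncovered = app_calls - set().union(*micro_suites.values())
    return sorted(uncovered)[:k]
```

For example, with `app_calls = {"read", "write", "flush", "compact"}` and suites `{"bench_read": {"read"}, "bench_rw": {"read", "write"}}`, coverage is 0.5, `bench_read` is flagged as redundant, and `flush` and `compact` are recommended.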
3

Abdulla, Hemin Sardar, Azad A. Ameen, Sarwar Ibrahim Saeed, Ismail Asaad Mohammed, and Tarik A. Rashid. "MRSO: Balancing Exploration and Exploitation through Modified Rat Swarm Optimization for Global Optimization." Algorithms 17, no. 9 (2024): 423. http://dx.doi.org/10.3390/a17090423.

Abstract:
The rapid advancement of intelligent technology has led to the development of optimization algorithms that leverage natural behaviors to address complex issues. Among these, the Rat Swarm Optimizer (RSO), inspired by rats’ social and behavioral characteristics, has demonstrated potential in various domains, although its convergence precision and exploration capabilities are limited. To address these shortcomings, this study introduces the Modified Rat Swarm Optimizer (MRSO), designed to enhance the balance between exploration and exploitation. The MRSO incorporates unique modifications to improve search efficiency and robustness, making it suitable for challenging engineering problems such as Welded Beam, Pressure Vessel, and Gear Train Design. Extensive testing with classical benchmark functions shows that the MRSO significantly improves performance, avoiding local optima and achieving higher accuracy in six out of nine multimodal functions and in all seven fixed-dimension multimodal functions. In the CEC 2019 benchmarks, the MRSO outperforms the standard RSO in six out of ten functions, demonstrating superior global search capabilities. When applied to engineering design problems, the MRSO consistently delivers better average results than the RSO, proving its effectiveness. Additionally, we compared our approach with eight recent and well-known algorithms using both classical and CEC-2019 benchmarks. The MRSO outperformed each of these algorithms, achieving superior results in six out of 23 classical benchmark functions and in four out of ten CEC-2019 benchmark functions. These results further demonstrate the MRSO’s significant contributions as a reliable and efficient tool for optimization tasks in engineering applications.
4

Peterson, Kirk A., Angela K. Wilson, David E. Woon, and Thom H. Dunning Jr. "Benchmark calculations with correlated molecular wave functions." Theoretical Chemistry Accounts: Theory, Computation, and Modeling (Theoretica Chimica Acta) 97, no. 1-4 (1997): 251–59. http://dx.doi.org/10.1007/s002140050259.

5

Hussain, Kashif, Mohd Najib Mohd Salleh, Shi Cheng, and Rashid Naseem. "Common Benchmark Functions for Metaheuristic Evaluation: A Review." JOIV : International Journal on Informatics Visualization 1, no. 4-2 (2017): 218. http://dx.doi.org/10.30630/joiv.1.4-2.65.

Abstract:
In the literature, benchmark test functions have been used to evaluate the performance of metaheuristic algorithms. Algorithms that perform well on a set of numerical optimization problems are considered effective methods for solving real-world problems. Different researchers choose different sets of functions with varying configurations, as there exists no standard or universally agreed test-bed. This makes it hard for researchers to select functions that can truly gauge the robustness of a proposed metaheuristic algorithm. This review paper attempts to provide researchers with commonly used experimental settings, including the selection of test functions with different modalities and dimensions, the number of experimental runs, and evaluation criteria. Hence, the proposed list of functions, based on the existing literature, can be handily employed as an effective test-bed for evaluating either a new or a modified variant of any existing metaheuristic algorithm. To embed more complexity in the problems, these functions can be shifted or rotated for enhanced robustness.
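As a concrete illustration (not taken from the paper), two of the most commonly used test functions, plus the shift/rotate hardening the abstract mentions, can be written as:

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark; global minimum f(0) = 0."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Highly multimodal benchmark; global minimum f(0) = 0."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def shift_rotate(f, shift, rotation):
    """Move the optimum to `shift` and rotate the landscape, so a solver
    cannot exploit symmetry or a zero-centred optimum."""
    return lambda x: f(rotation @ (x - shift))
```

A rotation matrix can be obtained, e.g., from a QR decomposition of a random matrix; with the identity as rotation, the shifted function's minimum simply moves to the shift vector.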
6

Arıcı, Ferda Nur, and Ersin Kaya. "Comparison of Meta-heuristic Algorithms on Benchmark Functions." Academic Perspective Procedia 2, no. 3 (2019): 508–17. http://dx.doi.org/10.33793/acperpro.02.03.41.

Abstract:
Optimization is the process of searching for the most suitable solution to a problem within an acceptable time interval. The algorithms that solve optimization problems are called optimization algorithms. In the literature, there are many optimization algorithms with different characteristics, and they can exhibit different behaviors depending on the size, characteristics, and complexity of the optimization problem. In this study, six well-known population-based optimization algorithms (artificial algae algorithm - AAA, artificial bee colony algorithm - ABC, differential evolution algorithm - DE, genetic algorithm - GA, gravitational search algorithm - GSA, and particle swarm optimization - PSO) were used. These six algorithms were run on the CEC'17 test functions. Based on the experimental results, the algorithms were compared and their performances evaluated.
7

Abbas, Basim K., Qabas Abdal Zahraa Jabbar, and Rasha Talal Hameed. "Optimizing Benchmark Functions using Particle Swarm Optimization (PSO)." Al-Salam Journal for Engineering and Technology 4, no. 1 (2025): 192–98. https://doi.org/10.55145/ajest.2025.04.01.019.

Abstract:
Optimization is a very important step in many automated systems across different sectors because it reduces the search space needed to find the best solution and hence the time required by the system. This paper implements and evaluates Particle Swarm Optimization (PSO) on four benchmark optimization functions: Rastrigin, Sphere, Rosenbrock, and Ackley, selecting and tuning parameters to enhance algorithm performance under several optimization circumstances. The PSO algorithm's performance is assessed in terms of the best solution found, computational efficiency, and runtime, improving theoretical knowledge of how the algorithm interacts with different mathematical landscapes. The analysis uncovered the strengths and weaknesses of the PSO algorithm across diverse applications. By comparing a specific swarm method against a collection of functions, the study advances the mathematical understanding of the algorithm. The outcomes demonstrate that PSO can effectively navigate complex search spaces and find optimal solutions for various optimization problems, with fairly good results and runtimes as fast as 0.123 seconds.
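A minimal global-best PSO of the kind evaluated in such studies can be sketched as follows. The inertia weight `w`, acceleration coefficients `c1`/`c2`, swarm size, and search bounds are illustrative defaults, not the paper's settings:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0, seed=0):
    """Minimal global-best particle swarm optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n, dim))   # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # personal best positions
    pval = np.array([f(p) for p in x])         # personal best values
    for _ in range(iters):
        g = pbest[pval.argmin()]               # current global best
        r1, r2 = rng.random((2, n, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
    return pbest[pval.argmin()], float(pval.min())
```

On a unimodal function such as Sphere this converges quickly; multimodal landscapes like Rastrigin or Ackley typically need larger swarms or more iterations to escape local minima.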
8

Naka, Edjola, Eris Zeqo, and Alsa Kaziu. "Performance Analysis of Metaheuristic Algorithms on Benchmark Functions." Interdisciplinary Journal of Research and Development 11, no. 2 (2024): 10. http://dx.doi.org/10.56345/ijrdv11n202.

Abstract:
The discipline of optimization can be used to maximize or minimize several problems. The use of metaheuristic algorithms is a strategy that often works well for global optimization. They are a type of stochastic algorithm that, via trial and error, finds workable solutions to difficult optimization problems in a reasonable amount of time, but they do not guarantee that the answers are optimal. This paper offers a comparative analysis of several metaheuristics in searching for the optimal solution. The selected metaheuristics are Artificial Bee Colony, Ant Lion Optimizer, Bat, Black Hole, Cuckoo Search, Cat Swarm Optimization, Dragonfly, Differential Evolution, Firefly, Genetic, Gravitational-Based Search, Grasshopper Optimization, Grey Wolf Optimizer, Harmony Search, Krill-Herd, Moth-Flame Optimizer, Particle Swarm Optimization, Sine Cosine, Shuffled Frog-Leaping, and Whale Optimization algorithms. For this evaluation, 18 benchmark test functions, categorized as unimodal, multimodal, and fixed-dimension multimodal, are used to examine various properties, such as accuracy, escape from local optima, and convergence. As indicators of how effectively these metaheuristics work, metrics such as the minimum, maximum, average, and standard deviation of fitness are provided. No optimization algorithm is adequate for all problems, as the No Free Lunch theorem suggests, but the metaheuristics that are more effective than the others are identified. This study could help young researchers identify the most prominent metaheuristics for achieving a better global optimum. Received: 8 June 2024 / Accepted: 25 July 2024 / Published: 29 July 2024. Keywords: Optimization, Metaheuristic algorithm, Benchmark function, Performance metric.
9

Bassel, Atheer, Hussein M. Haglan, and Akeel Sh. Mahmoud. "Local search algorithms based on benchmark test functions problem." IAES International Journal of Artificial Intelligence (IJ-AI) 9, no. 3 (2020): 529. http://dx.doi.org/10.11591/ijai.v9.i3.pp529-534.

Abstract:
Optimization is normally carried out to solve objectives in single- or multi-objective modes. Some traditional optimization techniques are computationally burdensome and require exhaustive computation times. Thus, many studies have invented new optimization techniques to address these issues. To demonstrate the effectiveness of the proposed techniques, implementation on several benchmark functions is crucial. In solving benchmark test functions, local search algorithms have been rigorously examined and applied to diverse tasks. This paper highlights different algorithms implemented to solve several problems. The capacity of local search algorithms in the resolution of engineering optimization problems, including benchmark test functions, is reviewed. The use of local search algorithms, mainly Simulated Annealing (SA) and Great Deluge (GD), to solve different problems is presented. Improvements and hybridization of local search and global search algorithms are also reviewed in this paper. Consequently, benchmark test functions are proposed for those working on local search algorithms.
11

Lang, Ryan Dieter, and Andries Petrus Engelbrecht. "An Exploratory Landscape Analysis-Based Benchmark Suite." Algorithms 14, no. 3 (2021): 78. http://dx.doi.org/10.3390/a14030078.

Abstract:
The choice of which objective functions, or benchmark problems, should be used to test an optimization algorithm is a crucial part of the algorithm selection framework. Benchmark suites that are often used in the literature have been shown to exhibit poor coverage of the problem space. Exploratory landscape analysis can be used to quantify characteristics of objective functions. However, exploratory landscape analysis measures are based on samples of the objective function, and there is a lack of work on the appropriate choice of sample size needed to produce reliable measures. This study presents an approach to determine the minimum sample size needed to obtain robust exploratory landscape analysis measures. Based on reliable exploratory landscape analysis measures, a self-organizing feature map is used to cluster a comprehensive set of benchmark functions. From this, a benchmark suite that has better coverage of the single-objective, boundary-constrained problem space is proposed.
12

Sharma, Avinash, Rajesh Kumar, Akash Saxena, and B. K. Panigrahi. "Structured Clanning-Based Ensemble Optimization Algorithm: A Novel Approach for Solving Complex Numerical Problems." Modelling and Simulation in Engineering 2018 (December 9, 2018): 1–19. http://dx.doi.org/10.1155/2018/1851275.

Abstract:
In this paper, a novel swarm intelligence-based ensemble metaheuristic optimization algorithm, called Structured Clanning-based Ensemble Optimization, is proposed for solving complex numerical optimization problems. The proposed algorithm is inspired by the complex and diversified behaviour within the fission-fusion social structure of elephant society. An elephant population can consist of various groups, with relationships between individuals ranging from mother-child bonds and bond groups to independent males and strangers. The algorithm models this individualistic behaviour to formulate an ensemble-based optimization algorithm. To test the efficiency and utility of the proposed algorithm, various benchmark functions with different geometric properties are used. The algorithm's performance on these benchmarks is compared to various state-of-the-art optimization algorithms. Experiments clearly showcase the success of the proposed algorithm in optimizing the benchmark functions to better values.
13

Wen, P. H., and M. H. Aliabadi. "Mixed-Mode Stress Intensity Factors by Mesh Free Galerkin Method." Key Engineering Materials 417-418 (October 2009): 957–60. http://dx.doi.org/10.4028/www.scientific.net/kem.417-418.957.

Abstract:
An element-free Galerkin method is developed using radial basis interpolation functions to evaluate static and dynamic mixed-mode stress intensity factors. For dynamic problems, the Laplace transform technique is used to transform the time domain problem to frequency domain. The so-called enriched radial basis functions are introduced to accurately capture the singularity of stress at crack tip. The accuracy and convergence of mesh free Galerkin method with enriched radial basis functions for the two-dimensional static and dynamic fracture mechanics are demonstrated through several benchmark examples. Comparisons have been made with benchmarks and solutions obtained by the boundary element method.
14

Song, Qi, Yourui Huang, Wenhao Lai, Tao Han, Shanyong Xu, and Xue Rong. "Multi-membrane search algorithm." PLOS ONE 16, no. 12 (2021): e0260512. http://dx.doi.org/10.1371/journal.pone.0260512.

Abstract:
This research proposes a new multi-membrane search algorithm (MSA) based on cell biological behavior. Cell protein-secretion behavior and cell division and fusion strategies are the main inspirations for the algorithm. To verify its performance, we used 19 benchmark functions to compare MSA test results with MVO, GWO, MFO, and ALO. Each algorithm was run for 100 iterations per benchmark function with a population of 10, each run was repeated 50 times, and the average and standard deviation of the results were recorded. Tests show that the MSA is competitive on unimodal and multimodal benchmark functions, and its results on composite benchmark functions are all superior to the MVO, MFO, ALO, and GWO algorithms. This paper also uses MSA to solve two classic engineering problems: welded beam design and pressure vessel design. The result for the welded beam design is 1.7252 and for the pressure vessel design 5887.7052, which is better than the other comparison algorithms. Statistical experiments show that MSA is a high-performance algorithm that is competitive on unimodal and multimodal functions, with performance on compound functions significantly better than the MVO, MFO, ALO, and GWO algorithms.
15

Barraza, Juan, Luis Rodríguez, Oscar Castillo, Patricia Melin, and Fevrier Valdez. "An Enhanced Fuzzy Hybrid of Fireworks and Grey Wolf Metaheuristic Algorithms." Axioms 13, no. 7 (2024): 424. http://dx.doi.org/10.3390/axioms13070424.

Abstract:
This research work addresses the fuzzy adjustment of parameters in a hybrid optimization algorithm for solving mathematical benchmark function problems, which consist of finding minimal values. We present an enhanced fuzzy hybrid algorithm, called the Enhanced Fuzzy Hybrid Fireworks and Grey Wolf Metaheuristic Algorithm and denoted EF-FWA-GWO. The fuzzy adjustment of parameters is achieved using Fuzzy Inference Systems. For this work, we implemented two variants of the fuzzy systems: the first utilizes triangular membership functions and the second employs Gaussian membership functions; both are of the Mamdani inference type. The proposed method was applied to 22 mathematical benchmark functions, divided into two parts: 13 functions that can be classified as unimodal or multimodal, and 9 fixed-dimension multimodal benchmark functions. The proposed method presents better performance with 60 and 90 dimensions, averaging 51% and 58% improvement on the benchmark functions, respectively. A statistical comparison between the conventional hybrid algorithm and the fuzzy enhanced hybrid algorithm is then presented to complement the conclusions of this research. Finally, we also applied the fuzzy hybrid algorithm to a control problem, testing its performance in designing a fuzzy controller for a mobile robot.
16

Hajizadeh, Ouraman, Tamer Boz, Axel Maas, and Jon-Ivar Skullerud. "Gluon and ghost correlation functions of 2-color QCD at finite density." EPJ Web of Conferences 175 (2018): 07012. http://dx.doi.org/10.1051/epjconf/201817507012.

Abstract:
2-color QCD, i.e., QCD with the gauge group SU(2), is the simplest non-Abelian gauge theory without a sign problem at finite quark density. Therefore its study on the lattice is a benchmark for other non-perturbative approaches at finite density. To provide such benchmarks, we determine the minimal-Landau-gauge 2-point and 3-gluon correlation functions of the gauge sector and the running gauge coupling at finite density. We observe no significant effects, except for some low-momentum screening of the gluons at and above the supposed high-density phase transition.
17

Diniz, Leonardo G., Alexander Alijah, and José R. Mohallem. "Benchmark Linelists and Radiative Cooling Functions for LiH Isotopologues." Astrophysical Journal Supplement Series 235, no. 2 (2018): 35. http://dx.doi.org/10.3847/1538-4365/aab431.

18

Singh, Harpreet. "A COMPARISON OF OPTIMIZATION ALGORITHMS FOR STANDARD BENCHMARK FUNCTIONS." International Journal of Advanced Research in Computer Science 8, no. 7 (2017): 1249–54. http://dx.doi.org/10.26483/ijarcs.v8i7.4581.

19

Dutta, Narendra Nath, and Konrad Patkowski. "Improving “Silver-Standard” Benchmark Interaction Energies with Bond Functions." Journal of Chemical Theory and Computation 14, no. 6 (2018): 3053–70. http://dx.doi.org/10.1021/acs.jctc.8b00204.

20

Siva Sathya, S., and M. V. Radhika. "Convergence of nomadic genetic algorithm on benchmark mathematical functions." Applied Soft Computing 13, no. 5 (2013): 2759–66. http://dx.doi.org/10.1016/j.asoc.2012.11.011.

21

Özyön, Serdar, Celal Yaşar, and Hasan Temurtaş. "Incremental gravitational search algorithm for high-dimensional benchmark functions." Neural Computing and Applications 31, no. 8 (2018): 3779–803. http://dx.doi.org/10.1007/s00521-017-3334-8.

22

Naim, Amany A., and Neveen I. Ghali. "SELECTION OF THE CONSTRICTION FACTOR FOR VENUS FLYTRAP OPTIMIZATION." Journal of Southwest Jiaotong University 56, no. 5 (2021): 595–600. http://dx.doi.org/10.35741/issn.0258-2724.56.5.54.

Abstract:
This paper proposes Venus flytrap optimization (VFO) with a constriction factor (VFO-CF) to improve the convergence of the algorithm. The constriction factor has a significant impact on the performance of VFO-CF; its impact was inspected on benchmark functions. Herein, the properties of the constriction factor and guidelines for determining optimal parameter values are defined. The proposed method is tested on benchmark functions, and the obtained results are compared with existing VFO results. The water supply rate is tested in the range [4.1, 4.2], which is generally reasonable for the benchmark functions.
23

G. Lakshmi, Kameswari. "Optimality Test Cases of Particle Swarm Optimization of Single Objective Functions." Journal of Applied Mathematics and Statistical Analysis 1, no. 1 (2020): 1–9. https://doi.org/10.5281/zenodo.4010485.

Abstract:
Optimization problems are classified as continuous, discrete, constrained, unconstrained, deterministic, stochastic, single-objective, and multi-objective problems. Deterministic, heuristic, and metaheuristic techniques mostly dominate the solution of small- and medium-scale problems, whereas for large-scale optimization problems, evolutionary techniques (mostly derivative-free) are used to find near-optimal solutions of P, NP, and NP-hard problem classes. In the present paper, an evolutionary algorithmic approach without evolutionary operators, mimicking the biology of swarms with artificial intelligence, is introduced. This search technique, called Particle Swarm Optimization (PSO), is used to solve many NP-hard problems through a position- and velocity-vector random search to obtain near-optimal solutions. The method is discussed in detail with a flow chart, and various single-objective benchmark optimization problems are solved using the developed PSO code and compared with the global minima of the benchmark problems.
24

Kaur, Gaganpreet, and Sankalap Arora. "Chaotic whale optimization algorithm." Journal of Computational Design and Engineering 5, no. 3 (2018): 275–84. http://dx.doi.org/10.1016/j.jcde.2017.12.006.

Abstract:
The Whale Optimization Algorithm (WOA) is a recently developed meta-heuristic optimization algorithm based on the hunting mechanism of humpback whales. Like other meta-heuristic algorithms, the main problem faced by WOA is slow convergence speed. To enhance the global convergence speed and obtain better performance, this paper introduces chaos theory into the WOA optimization process. Various chaotic maps are considered in the proposed chaotic WOA (CWOA) methods for tuning the main parameter of WOA, which helps in controlling exploration and exploitation. The proposed CWOA methods are benchmarked on twenty well-known test functions. The results prove that the chaotic maps (especially the tent map) are able to improve the performance of WOA. Highlights: Chaos has been introduced into WOA to improve its performance. Ten chaotic maps have been investigated to tune the key parameter 'p' of WOA. The proposed CWOA is validated on a set of twenty benchmark functions. Statistical results suggest that CWOA has better reliability of global optimality.
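The tent map singled out in this abstract is a one-line recurrence. A hedged sketch of generating a chaotic sequence that could stand in for uniform random draws of a control parameter (such as WOA's 'p') follows; the starting value and usage are illustrative assumptions:

```python
def tent_map(x, mu=2.0):
    """One step of the tent map on (0, 1); fully chaotic at mu = 2."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def chaotic_parameters(x0, n):
    """Generate n tent-map values, usable in place of uniform draws
    when tuning a parameter of a metaheuristic."""
    values, x = [], x0
    for _ in range(n):
        x = tent_map(x)
        values.append(x)
    return values
```

Unlike independent uniform draws, consecutive tent-map values are deterministic yet non-repeating (for irrational seeds), which is the property chaotic variants of metaheuristics exploit.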
25

Hirayama, Takashi, and Yasuaki Nishitani. "Exact Minimization of AND–EXOR Expressions of Practical Benchmark Functions." Journal of Circuits, Systems and Computers 18, no. 03 (2009): 465–86. http://dx.doi.org/10.1142/s0218126609005356.

Abstract:
We propose faster-computing methods for the minimization algorithm of AND–EXOR expressions, or exclusive-or sum-of-products expressions (ESOPs), and obtain the exact minimum ESOPs of benchmark functions. These methods improve the search procedure for ESOPs, which is the most time-consuming part of the original algorithm. For faster computation, the search space for ESOPs is reduced by checking the upper and lower bounds on the size of ESOPs. Experimental results to demonstrate the effectiveness of these methods are presented. The exact minimum ESOPs of many practical benchmark functions have been revealed by this improved algorithm.
26

Agushaka, Jeffrey O., and Absalom E. Ezugwu. "Advanced arithmetic optimization algorithm for solving mechanical engineering design problems." PLOS ONE 16, no. 8 (2021): e0255703. http://dx.doi.org/10.1371/journal.pone.0255703.

Abstract:
The distributive power of the arithmetic operators (multiplication, division, addition, and subtraction) gives the arithmetic optimization algorithm (AOA) its unique ability to find the global optimum for the optimization problems used to test its performance. Several other mathematical operators exist with the same or better distributive properties, which can be exploited to enhance the performance of the newly proposed AOA. In this paper, we propose an improved version of the AOA, called the nAOA algorithm, which uses the high-density values that the natural logarithm and exponential operators can generate to enhance the exploratory ability of the AOA. The addition and subtraction operators carry out the exploitation. The candidate solutions are initialized using the beta distribution, and the random variables and adaptations used in the algorithm follow a beta distribution. We test the performance of the proposed nAOA on 30 benchmark functions (20 classical and 10 composite test functions) and three engineering design benchmarks. The performance of nAOA is compared with the original AOA and nine other state-of-the-art algorithms. The nAOA shows efficient performance on the benchmark functions and was second only to GWO for the welded beam design (WBD), compression spring design (CSD), and pressure vessel design (PVD).
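The beta-distributed initialization described here can be sketched as follows; the shape parameters a = b = 2 and the bounds are illustrative assumptions, not the paper's values:

```python
import numpy as np

def init_population(n, dim, lb, ub, a=2.0, b=2.0, seed=0):
    """Candidate solutions drawn from a Beta(a, b) distribution and
    mapped into the search bounds [lb, ub]."""
    rng = np.random.default_rng(seed)
    return lb + (ub - lb) * rng.beta(a, b, size=(n, dim))
```

Beta(2, 2) concentrates samples toward the centre of the range; other shape choices bias them toward the bounds, which changes how the initial population covers the search space.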
27

Fan, Jiahao, Ying Li, and Tan Wang. "An improved African vultures optimization algorithm based on tent chaotic mapping and time-varying mechanism." PLOS ONE 16, no. 11 (2021): e0260725. http://dx.doi.org/10.1371/journal.pone.0260725.

Abstract:
Metaheuristic optimization algorithms are among the most effective methods for solving complex engineering problems. However, the performance of a metaheuristic algorithm depends on its exploration and exploitation abilities. Therefore, to further improve the African vultures optimization algorithm (AVOA), a new metaheuristic algorithm, an improved African vultures optimization algorithm based on tent chaotic mapping and a time-varying mechanism (TAVOA) is proposed. First, a tent chaotic map is introduced for population initialization. Second, each individual's historical optimal position is recorded and applied to individual location updating. Third, a time-varying mechanism is designed to balance exploration and exploitation. To verify the effectiveness and efficiency of TAVOA, it is tested on 23 basic benchmark functions, 28 CEC 2013 benchmark functions, and 3 common real-world engineering design problems, and compared with AVOA and 5 other state-of-the-art metaheuristic optimization algorithms. According to the results of the Wilcoxon rank-sum test at the 5% significance level, the performance of TAVOA is significantly better than that of AVOA on 13 of the 23 basic benchmark functions. Among the 28 CEC 2013 benchmark functions, the performance of TAVOA is significantly better than AVOA on 9 functions and similar to AVOA on 17 functions. Besides, compared with the six metaheuristic optimization algorithms, TAVOA also shows good performance on real-world engineering design problems.
28

Hezam, Ibrahim M., Osama Abdel Raouf, and Mohey M. Hadhoud. "A New Compound Swarm Intelligence Algorithms for Solving Global Optimization Problems." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 10, no. 9 (2013): 2010–20. http://dx.doi.org/10.24297/ijct.v10i9.1389.

Abstract:
This paper proposes a new hybrid swarm intelligence algorithm that encompasses the features of three major swarm algorithms. It combines the fast convergence of Cuckoo Search (CS), the dynamic root change of the Firefly Algorithm (FA), and the continuous position update of Particle Swarm Optimization (PSO). The Compound Swarm Intelligence Algorithm (CSIA) is used to solve a set of standard benchmark functions. The study compares the performance of CSIA with that of CS, FA, and PSO on the same set of benchmark functions. The comparison aims to test whether the performance of CSIA is competitive with that of the CS, FA, and PSO algorithms, based on the solution results for the benchmark functions.
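Since comparisons like this one rest on standard benchmark functions, two of the most widely used can be sketched; these are the textbook definitions, not functions specific to this paper.

```python
import numpy as np

def sphere(x):
    """Sphere function: unimodal, separable; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Rastrigin function: highly multimodal; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

print(sphere([0.0, 0.0, 0.0]))     # → 0.0
print(rastrigin([0.0, 0.0, 0.0]))  # → 0.0
```

A unimodal function like the sphere tests convergence speed, while Rastrigin's many local minima test an algorithm's ability to escape them.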
29

Kumar, Dharmender. "Optimization of Benchmark Functions Using Artificial Bee Colony (ABC) Algorithm." IOSR Journal of Engineering 3, no. 10 (2013): 09–14. http://dx.doi.org/10.9790/3021-031040914.

30

Asyikin Zainal, Nurul, Fakhrud Din, and Kamal Z. Zamli. "Assessing the Symbiotic Organism Search Variants using Standard Benchmark Functions." Journal of Physics: Conference Series 1830, no. 1 (2021): 012014. http://dx.doi.org/10.1088/1742-6596/1830/1/012014.

31

Dubey, Shiv Ram, Satish Kumar Singh, and Bidyut Baran Chaudhuri. "Activation functions in deep learning: A comprehensive survey and benchmark." Neurocomputing 503 (September 2022): 92–108. http://dx.doi.org/10.1016/j.neucom.2022.06.111.

32

Jamil, Momin, and Xin She Yang. "A literature survey of benchmark functions for global optimisation problems." International Journal of Mathematical Modelling and Numerical Optimisation 4, no. 2 (2013): 150. http://dx.doi.org/10.1504/ijmmno.2013.055204.

33

Dunstan, R., and R. L. Chambers. "ESTIMATING DISTRIBUTION FUNCTIONS FROM SURVEY DATA WITH LIMITED BENCHMARK INFORMATION." Australian Journal of Statistics 31, no. 1 (1989): 1–11. http://dx.doi.org/10.1111/j.1467-842x.1989.tb00493.x.

34

Qu, B. Y., J. J. Liang, Z. Y. Wang, Q. Chen, and P. N. Suganthan. "Novel benchmark functions for continuous multimodal optimization with comparative results." Swarm and Evolutionary Computation 26 (February 2016): 23–34. http://dx.doi.org/10.1016/j.swevo.2015.07.003.

35

Düvelmeyer, Dana, Bernd Hofmann, and Masahiro Yamamoto. "Range Inclusions and Approximate Source Conditions with General Benchmark Functions." Numerical Functional Analysis and Optimization 28, no. 11-12 (2007): 1245–61. http://dx.doi.org/10.1080/01630560701749649.

36

Jack, I., D. R. T. Jones, and A. F. Kord. "Supersymmetric β-functions, Benchmark Points and Soft R-parity Violation." Nuclear Physics B - Proceedings Supplements 135 (October 2004): 300–304. http://dx.doi.org/10.1016/j.nuclphysbps.2004.09.033.

37

van Rijn, Sander, and Sebastian Schmitt. "MF2: A Collection of Multi-Fidelity Benchmark Functions in Python." Journal of Open Source Software 5, no. 52 (2020): 2049. http://dx.doi.org/10.21105/joss.02049.

38

Dieterich, Johannes M., and Bernd Hartke. "Empirical Review of Standard Benchmark Functions Using Evolutionary Global Optimization." Applied Mathematics 03, no. 10 (2012): 1552–64. http://dx.doi.org/10.4236/am.2012.330215.

39

UZER, Mustafa Serter, and Onur İNAN. "COMBINING GREY WOLF OPTIMIZATION AND WHALE OPTIMIZATION ALGORITHM FOR BENCHMARK TEST FUNCTIONS." Kahramanmaraş Sütçü İmam Üniversitesi Mühendislik Bilimleri Dergisi 26, no. 2 (2023): 462–75. http://dx.doi.org/10.17780/ksujes.1213693.

Abstract:
Many optimization problems have been successfully addressed using metaheuristic approaches. These approaches are frequently able to find the best answer quickly and effectively. Recently, the use of swarm-based optimization algorithms, a kind of metaheuristic approach, has become more common. In this study, a hybrid swarm-based optimization method called WOAGWO is proposed by combining the Whale Optimization Algorithm (WOA) and Grey Wolf Optimization (GWO). This method aims to realize a more effective hybrid algorithm by using the strengths of the two algorithms. 23 benchmark test functions were utilized to assess WOAGWO. By running the proposed approach 30 times, mean fitness and standard deviation values were computed. These results were compared with those of WOA, GWO, the Ant Lion Optimization algorithm (ALO), Particle Swarm Optimization (PSO), and Improved ALO (IALO) from the literature. Compared with these algorithms, the WOAGWO algorithm produced the best results on 5 of 7 unimodal benchmark functions, 4 of 6 multimodal benchmark functions, and 9 of 10 fixed-dimension multimodal benchmark functions. Therefore, the suggested approach generally outperforms the findings in the literature. The proposed WOAGWO appears promising and has a wide range of potential uses.
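The GWO half of such a hybrid relies on the standard grey-wolf position update, which can be sketched generically; the WOA branch and the hybrid's switching logic are omitted, so this is not the paper's exact implementation.

```python
import numpy as np

def gwo_step(pop, alpha, beta, delta, a, rng):
    """One standard Grey Wolf Optimizer update: each wolf moves toward the
    average of the positions suggested by the alpha, beta, and delta leaders.
    The parameter a is typically decreased linearly from 2 to 0 over iterations.
    """
    new = np.empty_like(pop)
    for i, x in enumerate(pop):
        guided = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.size), rng.random(x.size)
            A = 2 * a * r1 - a           # controls exploration vs exploitation
            C = 2 * r2
            D = np.abs(C * leader - x)   # distance to the leader
            guided.append(leader - A * D)
        new[i] = np.mean(guided, axis=0)
    return new
```

When |A| > 1 the wolves diverge from the leaders (exploration); when |A| < 1 they converge (exploitation), which is why hybrids often reuse this mechanism as their exploitation engine.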
40

Md, Mainul Islam, Shareef Hussain, Nagrial Mahmood, Rizk Jamal, Hellany Ali, and Nizam Khalid Saiful. "Performance comparison of various probability gate assisted binary lightning search algorithm." International Journal of Artificial Intelligence (IJ-AI) 8, no. 3 (2019): 228–36. https://doi.org/10.11591/ijai.v8.i3.pp228-236.

Abstract:
Recently, many new nature-inspired optimization algorithms have been introduced to further enhance computational intelligence optimization. Among them, the lightning search algorithm (LSA) is a recent heuristic optimization method for solving continuous problems. It mimics the natural phenomenon of lightning to find the global optimal solution in the search space. In this paper, a suitable technique for formulating a binary version of the lightning search algorithm (BLSA) is presented. Three common probability transfer functions, namely the logistic sigmoid, the hyperbolic tangent sigmoid, and the quantum bit rotating gate, are investigated for use in the original LSA. The performance of the three transfer-function-based BLSA variants is evaluated using various standard functions with different features, and the results are compared with those of four other well-known heuristic optimization techniques. The comparative study clearly reveals that the hyperbolic tangent transfer function is the most suitable function for the binary version of LSA.
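The sigmoid and hyperbolic tangent transfer functions mentioned here are the standard S-shaped and V-shaped families used to binarize continuous metaheuristics; the sketch below shows those generic forms (the quantum bit rotating gate variant is omitted, and `binarize` is an illustrative helper, not the paper's code).

```python
import numpy as np

def s_shaped(v):
    """S-shaped (logistic sigmoid) transfer: probability of setting a bit to 1."""
    return 1.0 / (1.0 + np.exp(-v))

def v_shaped(v):
    """V-shaped (tanh-based) transfer: probability of flipping the bit."""
    return np.abs(np.tanh(v))

def binarize(velocity, bits, rng, transfer=v_shaped):
    """Map continuous velocities to a bit-vector update.
    With a V-shaped transfer, each bit flips with the returned probability."""
    p = transfer(np.asarray(velocity, dtype=float))
    flip = rng.random(len(bits)) < p
    return np.where(flip, 1 - bits, bits)
```

The design difference matters: S-shaped functions force bits toward 1 for large positive velocities, while V-shaped functions only encourage a change of state, which often preserves diversity better.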
41

He, Cheng, Ye Tian, Handing Wang, and Yaochu Jin. "A repository of real-world datasets for data-driven evolutionary multiobjective optimization." Complex & Intelligent Systems 6, no. 1 (2019): 189–97. http://dx.doi.org/10.1007/s40747-019-00126-2.

Abstract:
Many real-world optimization applications have more than one objective and are modeled as multiobjective optimization problems. Generally, those complex objective functions are approximated by expensive simulations rather than cheap analytic functions, and such problems have been formulated as data-driven multiobjective optimization problems. The high computational costs of those problems pose great challenges to existing evolutionary multiobjective optimization algorithms. Unfortunately, no benchmark problems reflecting those challenges have been available yet. Therefore, we carefully select seven benchmark multiobjective optimization problems from real-world applications, aiming to promote research on data-driven evolutionary multiobjective optimization by suggesting a set of benchmark problems extracted from various real-world optimization applications.
42

Wen, P. H., and M. H. Aliabadi. "Evaluation of mixed-mode stress intensity factors by the mesh-free Galerkin method: Static and dynamic." Journal of Strain Analysis for Engineering Design 44, no. 4 (2009): 273–86. http://dx.doi.org/10.1243/03093247jsa509.

Abstract:
Based on the variational principle of the potential energy, the element-free Galerkin method is developed using radial basis interpolation functions to evaluate static and dynamic mixed-mode stress intensity factors. For dynamic problems, the Laplace transform technique is used to transform the time domain problem to the frequency domain. The so-called enriched radial basis functions are introduced to capture accurately the singularity of stress at crack tip. In this approach, connectivity of the mesh in the domain or integrations with fundamental or particular solutions are not required. The accuracy and convergence of the mesh-free Galerkin method with enriched radial basis functions for the two-dimensional static and dynamic fracture mechanics are demonstrated through several benchmark examples. Comparisons have been made with benchmarks and solutions obtained by the boundary element method.
43

Xiao, Jing, Hong-Fei Ren, and Xiao-Ke Xu. "Constructing Real-Life Benchmarks for Community Detection by Rewiring Edges." Complexity 2020 (April 7, 2020): 1–16. http://dx.doi.org/10.1155/2020/7096230.

Abstract:
In order to make the performance evaluation of community detection algorithms more accurate and deepen our analysis of community structures and functional characteristics of real-life networks, a new benchmark constructing method is designed from the perspective of directly rewiring edges in a real-life network instead of building a model. Based on the method, two kinds of novel benchmarks with special functions are proposed. The first kind can accurately approximate the microscale and mesoscale structural characteristics of the original network, providing ideal proxies for real-life networks and helping to realize performance analysis of community detection algorithms when a real network varies characteristics at multiple scales. The second kind is able to independently vary the community intensity in each generated benchmark and make the robustness evaluation of community detection algorithms more accurate. Experimental results prove the effectiveness and superiority of our proposed method. It enables more real-life networks to be used to construct benchmarks and helps to deepen our analysis of community structures and functional characteristics of real-life networks.
44

Mashwani, Wali Khan, Zia Ur Rehman, Maharani A. Bakar, Ismail Koçak, and Muhammad Fayaz. "A Customized Differential Evolutionary Algorithm for Bounded Constrained Optimization Problems." Complexity 2021 (March 10, 2021): 1–24. http://dx.doi.org/10.1155/2021/5515701.

Abstract:
Bound-constrained optimization has wide applications in science and engineering. In the last two decades, various evolutionary algorithms (EAs) were developed under the umbrella of evolutionary computation for solving bound-constrained benchmark functions and real-world problems. In general, these EAs belong to the nature-inspired algorithms (NIAs) and swarm intelligence (SI) paradigms. The differential evolutionary algorithm is one of the most popular and well-known EAs and has secured top ranks in most EA competitions held in the special sessions of the IEEE Congress on Evolutionary Computation. In this paper, a customized differential evolutionary (C-DE) algorithm is suggested and applied to twenty-nine large-scale bound-constrained benchmark functions. The suggested C-DE algorithm obtained promising numerical results over 51 independent simulation runs. Most of the 2013 IEEE-CEC benchmark functions are tackled efficiently in terms of proximity and diversity.
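The classic DE/rand/1/bin operator at the heart of most differential evolution variants can be sketched as follows; this is the generic textbook form, not the paper's customized C-DE.

```python
import numpy as np

def de_rand_1_bin(pop, i, F, CR, rng, lower, upper):
    """Build one DE/rand/1/bin trial vector for individual i.

    F is the differential weight, CR the crossover rate; both are
    generic tuning parameters, not the paper's settings.
    """
    n, d = pop.shape
    # pick three distinct individuals, none equal to i
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    mutant = np.clip(mutant, lower, upper)  # respect the box constraints
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True           # guarantee at least one mutant gene
    return np.where(cross, mutant, pop[i])
```

In a full DE loop, the trial vector replaces `pop[i]` only if it achieves an equal or better objective value (greedy selection).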
45

Lin, Jun Lin, Chun Wei Cho, and Hung Chjh Chuan. "Imperialist Competitive Algorithms with Perturbed Moves for Global Optimization." Applied Mechanics and Materials 284-287 (January 2013): 3135–39. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.3135.

Abstract:
Imperialist Competitive Algorithm (ICA) is a new population-based evolutionary algorithm. Previous works have shown that ICA converges quickly but often to a local optimum. To overcome this problem, this work proposes two modifications to ICA: a perturbed assimilation move and boundary bouncing. The proposed modifications were applied to ICA and tested using six well-known benchmark functions with 30 dimensions. The experimental results indicate that the two modifications significantly improve the performance of ICA on all six benchmark functions.
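The name "boundary bouncing" suggests a reflection-style repair of out-of-bounds solutions; the sketch below shows a generic version of such a repair, an assumption about the mechanism rather than the paper's code.

```python
import numpy as np

def bounce(x, lower, upper):
    """Reflect out-of-bounds coordinates back into [lower, upper].

    A generic 'boundary bouncing' repair: instead of clamping at the
    wall, the overshoot is mirrored back inside the box.
    """
    x = np.asarray(x, dtype=float)
    below = x < lower
    above = x > upper
    x = np.where(below, 2 * lower - x, x)
    x = np.where(above, 2 * upper - x, x)
    # a very large overshoot can reflect past the opposite wall; clip as a guard
    return np.clip(x, lower, upper)

print(bounce([-120.0, 50.0, 130.0], -100.0, 100.0))  # → [-80.  50.  70.]
```

Compared with plain clipping, reflection keeps repaired points off the boundary, which helps avoid artificial accumulation of solutions at the box edges.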
46

Console, Francesca, Giuseppe D’Aquanno, Giuseppe Antonio Di Luna, and Leonardo Querzoni. "BinBench: a benchmark for x64 portable operating system interface binary function representations." PeerJ Computer Science 9 (June 1, 2023): e1286. http://dx.doi.org/10.7717/peerj-cs.1286.

Abstract:
In this article we propose the first multi-task benchmark for evaluating the performance of machine learning models that work on low-level assembly functions. While the use of multi-task benchmarks is standard practice in the natural language processing (NLP) field, such practice is unknown in the field of assembly language processing. However, in recent years there has been a strong push toward using deep neural network architectures borrowed from NLP to solve problems on assembly code. A first advantage of a standard benchmark is that it makes different works comparable without the effort of reproducing third-party solutions. The second advantage is the ability to test the generality of a machine learning model across several tasks. For these reasons, we propose BinBench, a benchmark for binary function models. The benchmark includes various binary analysis tasks, as well as a dataset of binary functions on which the tasks should be solved. The dataset is publicly available and has been evaluated using baseline models.
47

Das, Ishapathik. "Robust benchmark dose estimation using an infinite family of response functions." Human and Ecological Risk Assessment: An International Journal 24, no. 8 (2018): 2054–69. http://dx.doi.org/10.1080/10807039.2018.1438172.

48

Cohen, Akiba A. "Benchmark: Israelis and foreign news: Perceptions of interest, functions, and newsworthiness." Journal of Broadcasting & Electronic Media 37, no. 3 (1993): 337–47. http://dx.doi.org/10.1080/08838159309364226.

49

Li, Yan, Minyi Su, Zhihai Liu, et al. "Assessing protein–ligand interaction scoring functions with the CASF-2013 benchmark." Nature Protocols 13, no. 4 (2018): 666–80. http://dx.doi.org/10.1038/nprot.2017.114.

50

Cencek, Wojciech, Jacek Komasa, and Jacek Rychlewski. "Benchmark calculations for two-electron systems using explicitly correlated Gaussian functions." Chemical Physics Letters 246, no. 4-5 (1995): 417–20. http://dx.doi.org/10.1016/0009-2614(95)01146-8.
