Academic literature on the topic 'Best-fit greedy algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Best-fit greedy algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Best-fit greedy algorithm"

1

Shah, Arpita, and Narendra Patel. "Efficient and scalable multitenant placement approach for in-memory database over supple architecture." Computer Science and Information Technologies 1, no. 2 (2020): 39–46. https://doi.org/10.11591/csit.v1i2.p39-46.

Full text
Abstract:
Of late Multitenant model with In-Memory database has become prominent area for research. The paper has used advantages of multitenancy to reduce the cost for hardware, labor and make availability of storage by sharing database memory and file execution. The purpose of this paper is to give overview of proposed Supple architecture for implementing in-memory database backend and multitenancy, applicable in public and private cloud settings. Backend in-memory database uses column-oriented approach with dictionary based compression technique. We used dedicated sample benchmark for the workload processing and also adopt the SLA penalty model. In particular, we present two approximation algorithms, Multi-tenant placement (MTP) and Best-fit Greedy to show the quality of tenant placement. The experimental results show that Multi-tenant placement (MTP) algorithm is scalable and efficient in comparison with Best-fit Greedy Algorithm over proposed architecture.
APA, Harvard, Vancouver, ISO, and other styles
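The best-fit greedy placement the abstract above refers to is a classic bin-packing-style heuristic: each tenant goes to the server whose remaining capacity fits it most tightly. A minimal sketch of that general idea (not the paper's implementation; the uniform server capacity and integer demands are illustrative assumptions):

```python
def best_fit_greedy(demands, capacity):
    """Place each demand on the open server with the least remaining
    capacity that can still hold it; open a new server otherwise.
    Returns the chosen server index per demand and the final free
    space per server."""
    servers = []     # remaining capacity of each open server
    assignment = []  # server index chosen for each demand
    for d in demands:
        # pick the feasible server with minimal leftover space after placement
        best, best_left = None, None
        for i, free in enumerate(servers):
            if free >= d and (best is None or free - d < best_left):
                best, best_left = i, free - d
        if best is None:
            # no open server fits: open a new one
            servers.append(capacity - d)
            assignment.append(len(servers) - 1)
        else:
            servers[best] -= d
            assignment.append(best)
    return assignment, servers
```

For example, demands [5, 4, 3, 2] with capacity 8 open two servers: the 3 lands beside the 5 (tightest fit), and the 2 beside the 4.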
2

Shah, Arpita, and Narendra Patel. "Efficient and scalable multitenant placement approach for in-memory database over supple architecture." Computer Science and Information Technologies 1, no. 2 (2020): 39–46. http://dx.doi.org/10.11591/csit.v1i2.p39-46.

Full text
Abstract:
Of late Multitenant model with In-Memory database has become prominent area for research. The paper has used advantages of multitenancy to reduce the cost for hardware, labor and make availability of storage by sharing database memory and file execution. The purpose of this paper is to give overview of proposed Supple architecture for implementing in-memory database backend and multitenancy, applicable in public and private cloud settings. Backend in-memory database uses column-oriented approach with dictionary based compression technique. We used dedicated sample benchmark for the workload processing and also adopt the SLA penalty model. In particular, we present two approximation algorithms, Multi-tenant placement (MTP) and Best-fit Greedy to show the quality of tenant placement. The experimental results show that Multi-tenant placement (MTP) algorithm is scalable and efficient in comparison with Best-fit Greedy Algorithm over proposed architecture.
3

Shah, Arpita, and Narendra Patel. "Efficient and scalable multitenant placement approach for in-memory database over supple architecture." Computer Science and Information Technologies 1, no. 2 (2020): 39–46. http://dx.doi.org/10.11591/csit.v1i2.pp39-46.

Full text
Abstract:
Of late Multitenant model with In-Memory database has become prominent area for research. The paper has used advantages of multitenancy to reduce the cost for hardware, labor and make availability of storage by sharing database memory and file execution. The purpose of this paper is to give overview of proposed Supple architecture for implementing in-memory database backend and multitenancy, applicable in public and private cloud settings. Backend in-memory database uses column-oriented approach with dictionary based compression technique. We used dedicated sample benchmark for the workload processing and also adopt the SLA penalty model. In particular, we present two approximation algorithms, multi-tenant placement (MTP) and best-fit greedy to show the quality of tenant placement. The experimental results show that MTP algorithm is scalable and efficient in comparison with best-fit greedy algorithm over proposed architecture.
4

Wu, Yulong, Weizhe Zhang, Hui He, and Yawei Liu. "A New Method of Priority Assignment for Real-Time Flows in the WirelessHART Network by the TDMA Protocol." Sensors 18, no. 12 (2018): 4242. http://dx.doi.org/10.3390/s18124242.

Full text
Abstract:
WirelessHART is a wireless sensor network that is widely used in real-time demand analyses. A key challenge faced by WirelessHART is to ensure the character of real-time data transmission in the network. Identifying a priority assignment strategy that reduces the delay in flow transmission is crucial in ensuring real-time network performance and the schedulability of real-time network flows. We study the priority assignment of real-time flows in WirelessHART on the basis of the multi-channel time division multiple access (TDMA) protocol to reduce the delay and improve the ratio of scheduled flows. We provide three kinds of methods: (1) worst fit, (2) best fit, and (3) first fit and choose the most suitable one, namely the worst-fit method for allocating flows to each channel. More importantly, we propose two heuristic algorithms—a priority assignment algorithm based on the greedy strategy for C (WF-C) and a priority assignment algorithm based on the greedy strategy for U (WF-U)—for assigning priorities to the flows in each channel, whose time complexity is O(max(N·m·log(m), (N−m)²)). We then build a new simulation model to simulate the transmission of real-time flows in WirelessHART. Finally, we compare our two algorithms with WF-D and HLS algorithms in terms of the average value of the total end-to-end delay of flow sets, the ratio of schedulable flow sets, and the calculation time of the schedulability analysis. The optimal algorithm WF-C reduces the delay by up to 44.18% and increases the schedulability ratio by up to 70.7%, and it reduces the calculation time compared with the HLS algorithm.
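The worst-fit, best-fit, and first-fit strategies compared in that abstract differ only in which feasible channel receives the next flow. A rough sketch under simplified assumptions (each flow contributes a fixed utilization, each channel has a unit budget; names and numbers are illustrative, not the paper's):

```python
def pick_channel(free, u, strategy):
    """Return the index of the channel to receive a flow of utilization u,
    or None if no channel has room. `free` holds per-channel spare capacity."""
    feasible = [i for i, f in enumerate(free) if f >= u]
    if not feasible:
        return None
    if strategy == "first":  # first fit: first channel with room
        return feasible[0]
    if strategy == "best":   # best fit: channel left tightest after placement
        return min(feasible, key=lambda i: free[i])
    if strategy == "worst":  # worst fit: most spare room (balances load)
        return max(feasible, key=lambda i: free[i])
    raise ValueError(strategy)

def allocate(utils, n_channels, strategy):
    """Greedily assign each flow utilization to a channel; None if some
    flow cannot be placed under the chosen heuristic."""
    free = [1.0] * n_channels
    placement = []
    for u in utils:
        i = pick_channel(free, u, strategy)
        if i is None:
            return None
        free[i] -= u
        placement.append(i)
    return placement
```

With utilizations [0.5, 0.4, 0.3] on two channels, first fit and best fit pack the first two flows together, while worst fit spreads them across channels.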
5

Jayakumar, L., R. Jothi Chitra, J. Sivasankari, et al. "QoS Analysis for Cloud-Based IoT Data Using Multicriteria-Based Optimization Approach." Computational Intelligence and Neuroscience 2022 (September 7, 2022): 1–12. http://dx.doi.org/10.1155/2022/7255913.

Full text
Abstract:
This work explains why and how QoS modeling has been used within a multicriteria optimization approach. The parameters and metrics defined are intended to provide a broader and, at the same time, more precise analysis of the issues highlighted in the work dedicated to placement algorithms in the cloud. In order to find the optimal solution to a placement problem which is impractical in polynomial time, as in more particular cases, meta-heuristics more or less approaching the optimal solution are used in order to obtain a satisfactory solution. First, a model by a genetic algorithm is proposed. This genetic algorithm dedicated to the problem of placing virtual machines in the cloud has been implemented in two different versions. The former only considers elementary services, while the latter uses compound services. These two versions of the genetic algorithm are presented, and also, two greedy algorithms, round-robin and best-fit sorted, were used in order to allow a comparison with the genetic algorithm. The characteristics of these two algorithms are presented.
6

Bevilacqua, Vitoantonio, Giuseppe Mastronardi, Filippo Menolascina, Paolo Pannarale, and Giuseppe Romanazzi. "Bayesian Gene Regulatory Network Inference Optimization by means of Genetic Algorithms." JUCS - Journal of Universal Computer Science 15, no. 4 (2009): 826–39. https://doi.org/10.3217/jucs-015-04-0826.

Full text
Abstract:
Inferring gene regulatory networks from data requires the development of algorithms devoted to structure extraction. When time-course data is available, gene interactions may be modeled by a Bayesian Network (BN). Given a structure, that models the conditional independence between genes, we can tune the parameters in a way that maximize the likelihood of the observed data. The structure that best fits the observed data reflects the real gene network's connections. Well known learning algorithms (greedy search and simulated annealing) devoted to BN structure learning have been used in literature. We enhanced the fundamental step of structure learning by means of a classical evolutionary algorithm, named GA (Genetic algorithm), to evolve a set of candidate BN structures and found the model that best fits data, without prior knowledge of such structure. In the context of genetic algorithms, we proposed various initialization and evolutionary strategies suitable for the task. We tested our choices using simulated data drawn from a gene simulator, which has been used in the literature for benchmarking [Yu et al. (2002)]. We assessed the inferred models against this reference, calculating the performance indicators used for network reconstruction. The performances of the different evolutionary algorithms have been compared against the traditional search algorithms used so far (greedy search and simulated annealing). Finally we individuated as best candidate an evolutionary approach enhanced by Crossover-Two Point and Selection Roulette Wheel for the learning of gene regulatory networks with BN. We show that this approach outperforms classical structure learning methods in elucidating the original model of the simulated dataset. Finally we tested the GA approach on a real dataset where it reached 62% of recovered connections (sensitivity) and 64% of direct connections (precision), outperforming the other algorithms.
7

Soumplis, P., G. Kontos, P. Kokkinos, et al. "Performance Optimization Across the Edge-Cloud Continuum: A Multi-agent Rollout Approach for Cloud-Native Application Workload Placement." SN Computer Science 5, no. 3 (2024): 318. https://doi.org/10.1007/s42979-024-02630-w.

Full text
Abstract:
The advancements in virtualization technologies and distributed computing infrastructures have sparked the development of cloud-native applications. This is grounded in the breakdown of a monolithic application into smaller, loosely connected components, often referred to as microservices, enabling enhancements in the application's performance, flexibility, and resilience, along with better resource utilization. When optimizing the performance of cloud-native applications, specific demands arise in terms of application latency and communication delays between microservices that are not taken into consideration by generic orchestration algorithms. In this work, we propose mechanisms for automating the allocation of computing resources to optimize the service delivery of cloud-native applications over the edge-cloud continuum. We initially introduce the problem's Mixed Integer Linear Programming (MILP) formulation. Given the potentially overwhelming execution time for real-sized problems, we propose a greedy algorithm, which allocates resources sequentially in a best-fit manner. To further improve the performance, we introduce a multi-agent rollout mechanism that evaluates the immediate effect of decisions but also leverages the underlying greedy heuristic to simulate the decisions anticipated from other agents, encapsulating this in a Reinforcement Learning framework. This approach allows us to effectively manage the performance–execution time trade-off and enhance performance by controlling the exploration of the Rollout mechanism. This flexibility ensures that the system remains adaptive to varied scenarios, making the most of the available computational resources while still ensuring high-quality decisions. © The Author(s) 2024.
8

Wang, Jing, Kim Anh Do, Sijin Wen, et al. "Merging Microarray Data, Robust Feature Selection, and Predicting Prognosis in Prostate Cancer." Cancer Informatics 2 (January 2006): 117693510600200. http://dx.doi.org/10.1177/117693510600200009.

Full text
Abstract:
Motivation: Individual microarray studies searching for prognostic biomarkers often have few samples and low statistical power; however, publicly accessible data sets make it possible to combine data across studies. Method: We present a novel approach for combining microarray data across institutions and platforms. We introduce a new algorithm, robust greedy feature selection (RGFS), to select predictive genes. Results: We combined two prostate cancer microarray data sets, confirmed the appropriateness of the approach with the Kolmogorov-Smirnov goodness-of-fit test, and built several predictive models. The best logistic regression model with stepwise forward selection used 7 genes and had a misclassification rate of 31%. Models that combined LDA with different feature selection algorithms had misclassification rates between 19% and 33%, and the sets of genes in the models varied substantially during cross-validation. When we combined RGFS with LDA, the best model used two genes and had a misclassification rate of 15%. Availability: Affymetrix U95Av2 array data are available at http://www.broad.mit.edu/cgi-bin/cancer/datasets.cgi. The cDNA microarray data are available through the Stanford Microarray Database (http://cmgm.stanford.edu/pbrown/). GeneLink software is freely available at http://bioinformatics.mdanderson.org/GeneLink/. DNA-Chip Analyzer software is publicly available at http://biosun1.harvard.edu/complab/dchip/.
9

Sulaiman, Adel, Marium Sadiq, Yasir Mehmood, Muhammad Akram, and Ghassan Ahmed Ali. "Fitness-Based Acceleration Coefficients Binary Particle Swarm Optimization (FACBPSO) to Solve the Discounted Knapsack Problem." Symmetry 14, no. 6 (2022): 1208. http://dx.doi.org/10.3390/sym14061208.

Full text
Abstract:
The discounted {0-1} knapsack problem (D{0-1}KP) is a multi-constrained optimization and an extended form of the 0-1 knapsack problem. The DKP is composed of a set of item batches where each batch has three items and the objective is to maximize profit by selecting at most one item from each batch. Therefore, the D{0-1}KP is complex and has found many applications in real economic problems and other areas where the concept of promotional discounts exists. As DKP belongs to a binary class problem, so the novel binary particle swarm optimization variant with modifications is proposed in this paper. The acceleration coefficients are important parameters of the particle swarm optimization algorithm that keep the balance between exploration and exploitation. In conventional binary particle swarm optimization (BPSO), the acceleration coefficients of each particle remain the same in iteration, whereas in the proposed variant, fitness-based acceleration coefficient binary particle swarm optimization (FACBPSO), values of acceleration coefficients are based on the fitness of each particle. This modification enforces the least fit particles to move fast and best fit accordingly, which accelerates the convergence speed and reduces the computing time. Experiments were conducted on four instances of DKP having 10 datasets of each instance and the results of FACBPSO were compared with conventional BPSO and the new exact algorithm using a greedy repair strategy. The results demonstrate that the proposed algorithm outperforms PSO-GRDKP and the new exact algorithm in solving four instances of D{0-1}KP, with improved convergence speed and feasible solution time.
10

Shreyas, J., N. V. Priya, P. K. Udayprasad, N. N. Srinidhi, Chouhan Dharmendra, and Kumar S. M. Dilip. "Opportunistic Routing for Large Scale IoT Network to Reduce Transmission Overhead." Journal of Advancement in Parallel Computing 5, no. 1 (2022): 1–8. https://doi.org/10.5281/zenodo.6379125.

Full text
Abstract:
Increase in popularity of sensor electronics have gained much attention for wireless sensor technologies and demands many IoT (Internet of things) applications for real time and industrial applications. In IoT the proliferation of devices which are able to directly connect to the internet can be monitored. Sensed data from the device has to be forwarded to base station or end user (EU) which is achieved by efficient routing protocols to improve data transmission for large scale IoT. During routing process redundant data may be forwarded by nodes causing more overhead which may lead to congestion. There exist many challenges like low power links, multiple disjoint path, and energy while designing efficient communication protocol. In this paper we propose an enhanced opportunistic routing (e-OR) protocol for self-disciplined and self-healing large scale IoT devices. Enhanced opportunistic routing protocol uses best fit traversing algorithm to find optimal and reliable routes. The e-OR estimates link quality of nodes to avoid frequent disconnections. During route discovery process e-OR adapts greedy behaviour for finding optimal and shortest routes. Further we integrate congestion avoidance using clear channel assignment (CCA) for better channel availability to avoid packet loss and achieve QoS.
More sources

Book chapters on the topic "Best-fit greedy algorithm"

1

Hassan, Nazar. "The Innovative Pro-poor Environmentally Green Engineering Systems (PEGES) Standard: Towards Accelerated Sustainable Development in Developing Countries." In Accelerate the Pace of Agenda 2030 Implementation. World Association for Sustainable Development, 2024. https://doi.org/10.47556/b.outlook2024.22.12.

Full text
Abstract:
PURPOSE: This research aims to quantitatively and qualitatively solve the Sustainable Development Decision Problem in developing countries by seeking smart solutions towards the implementation of the 17 SDGs as one package, while observing the consensus core values (i) Collective Benefit; (ii) Equity and Fairness; (iii) Quality and Integrity; (iv) Inclusiveness; and (v) Sustainability (Weaver et al., 1997). DESIGN/METHODOLOGY/APPROACH: The Modified Dynamic Strategic Fit (MDSF) Algorithm (Hassan, 2003) was designed and developed to solve the following major development requirements in the targeted countries: a) appropriate infrastructure; b) social protection measures and systems; c) basic services; d) sound and practical policy framework; e) robust multi-stakeholder partnerships covering the different target circles of SDG 17 (finance, technology, capacity building, governance, institutional policy coherence; multi-stakeholder partnerships, trade) (Griggs et al., 2017). FINDINGS: The resulting algorithm was able to develop a new conceptual framework to solve Agenda 2030’s SDG problem by reducing the 169 expected results into only 48 expected results, making the problem and its holistic solution feasible and tractable. ORIGINALITY/VALUE OF THE PAPER: The international community is herewith called upon to undertake a paradigm shift to facilitate the practical and accelerated implementation of Agenda 2030 in developing economies by focusing and putting SDG 1 (No poverty) at the core of the proposed solution framework. The proposed framework calls first for the utilisation of a set of well-designed Poverty Reduction and Social Protection (PRSP) policies; this is part of the innovative and newly proposed Aristotle Framework for Action (AFA) Initiative. 
The AFA initiative proves the power of the integrated whole system design (IWSD) approach, utilising a newly developed Pro-Poor Environmentally Green Engineering Systems (PEGES) standard, when the correct associated science, technology and innovation (STI) innovative technological applications and programmes and the proper set of values are jointly promoted. RESEARCH LIMITATIONS: The Sustainable Development Decision (SDD) problem is a special form of ill-defined complex decision problems. This type of problem is inherently unique and should be treated as a one-shot operation, while there is no structured or standard manner for their formulation, and there is no one single criterion of optimality. The complexity of these problems is rooted within many disciplines, their modelling requires a wide range of tools, and the solution necessitates the design and utilisation of a new conceptual framework to allow for the reduction of the problem into a tractable format (Chevallier, 2009). PRACTICAL IMPLICATIONS: Achieving sustainable development in developing countries necessitates a huge level of human and financial resources. As we base the solution on the best scenario, the financial implications skyrocket, making it almost financially infeasible unless there is an innovative out-of-the-box financial model that can fully engage the private sector on a high return on investment (ROI) basis and away from public or international assistance models. At the same time, however, the other requirements for human resources and capacity building become vivid and within reach to achieve. The model therefore identifies the knowledge and technological base required and the targeted population and groups to make it work. KEYWORDS: Poverty Reduction; Pro-poor Coherent Policies; Conceptual Framework; Sustainable Consumption and Production; Infrastructure; Basic Services. CITATION: Hassan, N.M. 
(2024): The Innovative Pro-poor Environmentally Green Engineering Systems (PEGES) Standard: Towards Accelerated Sustainable Development in Developing Countries. In Ahmed, A. (Ed.): World Sustainable Development Outlook 2024, Vol. 20, pp.159–174. WASD: London, United Kingdom.

Conference papers on the topic "Best-fit greedy algorithm"

1

Han, Chung-Kyun, and Shih-Fen Cheng. "An Exact Single-Agent Task Selection Algorithm for the Crowdsourced Logistics." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/600.

Full text
Abstract:
The trend of moving online in the retail industry has created great pressure for the logistics industry to catch up both in terms of volume and response time. On one hand, volume is fluctuating at greater magnitude, making peaks higher; on the other hand, customers are also expecting shorter response time. As a result, logistics service providers are pressured to expand and keep up with the demands. Expanding fleet capacity, however, is not sustainable as capacity built for the peak seasons would be mostly vacant during ordinary days. One promising solution is to engage crowdsourced workers, who are not employed full-time but would be willing to help with the deliveries if their schedules permit. The challenge, however, is to choose appropriate sets of tasks that would not cause too much disruption from their intended routes, while satisfying each delivery task's delivery time window requirement. In this paper, we propose a decision-support algorithm to select delivery tasks for a single crowdsourced worker that best fit his/her upcoming route both in terms of additional travel time and the time window requirements at all stops along his/her route, while at the same time satisfies tasks' delivery time windows. Our major contributions are in the formulation of the problem and the design of an efficient exact algorithm based on the branch-and-cut approach. The major innovation we introduce is the efficient generation of promising valid inequalities via our separation heuristics. In all numerical instances we study, our approach manages to reach optimality yet with much fewer computational resource requirement than the plain integer linear programming formulation. The greedy heuristic, while efficient in time, only achieves around 40-60% of the optimum in all cases. To illustrate how our solver could help in advancing the sustainability objective, we also quantify the reduction in the carbon footprint.
