
Journal articles on the topic 'Best-fit greedy algorithm'


Consult the top 18 journal articles for your research on the topic 'Best-fit greedy algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Shah, Arpita, and Narendra Patel. "Efficient and scalable multitenant placement approach for in-memory database over supple architecture." Computer Science and Information Technologies 1, no. 2 (2020): 39–46. https://doi.org/10.11591/csit.v1i2.p39-46.

Abstract:
Of late, the multitenant model with an in-memory database has become a prominent area of research. This paper uses the advantages of multitenancy to reduce hardware and labor costs and to make storage available by sharing database memory and file execution. The purpose of this paper is to give an overview of the proposed Supple architecture for implementing an in-memory database backend and multitenancy, applicable in public and private cloud settings. The backend in-memory database uses a column-oriented approach with a dictionary-based compression technique. We used a dedicated sample benchmark for workload processing and also adopted an SLA penalty model. In particular, we present two approximation algorithms, Multi-tenant Placement (MTP) and Best-fit Greedy, to show the quality of tenant placement. The experimental results show that the MTP algorithm is scalable and efficient in comparison with the Best-fit Greedy algorithm over the proposed architecture.
2

Shah, Arpita, and Narendra Patel. "Efficient and scalable multitenant placement approach for in-memory database over supple architecture." Computer Science and Information Technologies 1, no. 2 (2020): 39–46. http://dx.doi.org/10.11591/csit.v1i2.p39-46.

Abstract:
Of late, the multitenant model with an in-memory database has become a prominent area of research. This paper uses the advantages of multitenancy to reduce hardware and labor costs and to make storage available by sharing database memory and file execution. The purpose of this paper is to give an overview of the proposed Supple architecture for implementing an in-memory database backend and multitenancy, applicable in public and private cloud settings. The backend in-memory database uses a column-oriented approach with a dictionary-based compression technique. We used a dedicated sample benchmark for workload processing and also adopted an SLA penalty model. In particular, we present two approximation algorithms, Multi-tenant Placement (MTP) and Best-fit Greedy, to show the quality of tenant placement. The experimental results show that the MTP algorithm is scalable and efficient in comparison with the Best-fit Greedy algorithm over the proposed architecture.
3

Shah, Arpita, and Narendra Patel. "Efficient and scalable multitenant placement approach for in-memory database over supple architecture." Computer Science and Information Technologies 1, no. 2 (2020): 39–46. http://dx.doi.org/10.11591/csit.v1i2.pp39-46.

Abstract:
Of late, the multitenant model with an in-memory database has become a prominent area of research. This paper uses the advantages of multitenancy to reduce hardware and labor costs and to make storage available by sharing database memory and file execution. The purpose of this paper is to give an overview of the proposed Supple architecture for implementing an in-memory database backend and multitenancy, applicable in public and private cloud settings. The backend in-memory database uses a column-oriented approach with a dictionary-based compression technique. We used a dedicated sample benchmark for workload processing and also adopted an SLA penalty model. In particular, we present two approximation algorithms, Multi-tenant Placement (MTP) and Best-fit Greedy, to show the quality of tenant placement. The experimental results show that the MTP algorithm is scalable and efficient in comparison with the Best-fit Greedy algorithm over the proposed architecture.
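To make the comparison concrete, here is a minimal sketch of a best-fit greedy placement rule of the kind named in the abstract: each tenant goes to the server whose remaining memory leaves the smallest non-negative slack. The server capacities, tenant demands, and function names are illustrative assumptions, not the authors' implementation (an MTP-style algorithm would layer further criteria, such as SLA penalties, on top of such a baseline).

```python
def best_fit_greedy_placement(tenants, servers):
    """Assign each tenant to the server that leaves the least spare memory.

    tenants: list of (tenant_id, memory_demand) pairs
    servers: dict mapping server_id -> remaining memory capacity
    Returns a dict tenant_id -> server_id (None if no server can host it).
    """
    placement = {}
    for tenant_id, demand in tenants:
        best_server, best_slack = None, None
        for server_id, free in servers.items():
            slack = free - demand
            if slack >= 0 and (best_slack is None or slack < best_slack):
                best_server, best_slack = server_id, slack
        if best_server is not None:
            servers[best_server] -= demand
        placement[tenant_id] = best_server
    return placement


if __name__ == "__main__":
    # Hypothetical memory budgets (GB) and tenant demands, for illustration only.
    servers = {"s1": 64, "s2": 32}
    tenants = [("t1", 20), ("t2", 30), ("t3", 10), ("t4", 40)]
    print(best_fit_greedy_placement(tenants, servers))
```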
4

Wu, Yulong, Weizhe Zhang, Hui He, and Yawei Liu. "A New Method of Priority Assignment for Real-Time Flows in the WirelessHART Network by the TDMA Protocol." Sensors 18, no. 12 (2018): 4242. http://dx.doi.org/10.3390/s18124242.

Abstract:
WirelessHART is a wireless sensor network that is widely used in real-time demand analyses. A key challenge faced by WirelessHART is to ensure the real-time character of data transmission in the network. Identifying a priority assignment strategy that reduces the delay in flow transmission is crucial in ensuring real-time network performance and the schedulability of real-time network flows. We study the priority assignment of real-time flows in WirelessHART on the basis of the multi-channel time division multiple access (TDMA) protocol to reduce the delay and improve the ratio of schedulable flows. We provide three kinds of methods: (1) worst fit, (2) best fit, and (3) first fit, and choose the most suitable one, namely the worst-fit method, for allocating flows to each channel. More importantly, we propose two heuristic algorithms—a priority assignment algorithm based on the greedy strategy for C (WF-C) and a priority assignment algorithm based on the greedy strategy for U (WF-U)—for assigning priorities to the flows in each channel, whose time complexity is O(max(N·m·log(m), (N−m)²)). We then build a new simulation model to simulate the transmission of real-time flows in WirelessHART. Finally, we compare our two algorithms with the WF-D and HLS algorithms in terms of the average total end-to-end delay of flow sets, the ratio of schedulable flow sets, and the calculation time of the schedulability analysis. The optimal algorithm, WF-C, reduces the delay by up to 44.18% and increases the schedulability ratio by up to 70.7%, and it reduces the calculation time compared with the HLS algorithm.
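As a rough illustration of the channel-allocation step described above, the sketch below switches between the worst-fit, best-fit, and first-fit rules named in the abstract, assuming a simple utilization bound of 1 per channel; the paper's actual feasibility test and flow model are not reproduced here.

```python
def assign_flows_to_channels(flow_utils, num_channels, strategy="best"):
    """Allocate flows to channels with a simple fit heuristic on channel load.

    flow_utils: per-flow utilizations (e.g., C_i / T_i), each assumed <= 1.
    strategy: "best" packs a flow onto the feasible channel with the least
              remaining room, "worst" onto the one with the most, and
              "first" onto the first feasible channel.
    Returns a list with one channel index per flow.
    """
    loads = [0.0] * num_channels
    assignment = []
    for u in flow_utils:
        feasible = [c for c in range(num_channels) if loads[c] + u <= 1.0]
        if not feasible:
            raise ValueError("flow does not fit on any channel")
        if strategy == "best":
            chosen = max(feasible, key=lambda c: loads[c])   # least remaining room
        elif strategy == "worst":
            chosen = min(feasible, key=lambda c: loads[c])   # most remaining room
        else:  # "first"
            chosen = feasible[0]
        loads[chosen] += u
        assignment.append(chosen)
    return assignment


if __name__ == "__main__":
    flows = [0.4, 0.3, 0.3, 0.2]
    print(assign_flows_to_channels(flows, num_channels=2, strategy="best"))   # [0, 0, 0, 1]
    print(assign_flows_to_channels(flows, num_channels=2, strategy="worst"))  # [0, 1, 1, 0]
```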
5

Jayakumar, L., R. Jothi Chitra, J. Sivasankari, et al. "QoS Analysis for Cloud-Based IoT Data Using Multicriteria-Based Optimization Approach." Computational Intelligence and Neuroscience 2022 (September 7, 2022): 1–12. http://dx.doi.org/10.1155/2022/7255913.

Abstract:
This work explains why and how QoS modeling has been used within a multicriteria optimization approach. The parameters and metrics defined are intended to provide a broader and, at the same time, more precise analysis of the issues highlighted in work dedicated to placement algorithms in the cloud. Because finding the optimal solution to a placement problem is impractical in polynomial time, except in special cases, metaheuristics that approach the optimal solution to varying degrees are used to obtain a satisfactory solution. First, a genetic algorithm model is proposed. This genetic algorithm, dedicated to the problem of placing virtual machines in the cloud, has been implemented in two different versions. The former only considers elementary services, while the latter uses compound services. These two versions of the genetic algorithm are presented, and two greedy algorithms, round-robin and best-fit sorted, are used to allow a comparison with the genetic algorithm. The characteristics of these two greedy algorithms are also presented.
6

Bevilacqua, Vitoantonio, Giuseppe Mastronardi, Filippo Menolascina, Paolo Pannarale, and Giuseppe Romanazzi. "Bayesian Gene Regulatory Network Inference Optimization by means of Genetic Algorithms." JUCS - Journal of Universal Computer Science 15, no. (4) (2009): 826–39. https://doi.org/10.3217/jucs-015-04-0826.

Abstract:
Inferring gene regulatory networks from data requires the development of algorithms devoted to structure extraction. When time-course data is available, gene interactions may be modeled by a Bayesian Network (BN). Given a structure that models the conditional independence between genes, we can tune the parameters in a way that maximizes the likelihood of the observed data. The structure that best fits the observed data reflects the real gene network's connections. Well-known learning algorithms (greedy search and simulated annealing) devoted to BN structure learning have been used in the literature. We enhanced the fundamental step of structure learning by means of a classical evolutionary algorithm, named GA (genetic algorithm), to evolve a set of candidate BN structures and find the model that best fits the data, without prior knowledge of that structure. In the context of genetic algorithms, we proposed various initialization and evolutionary strategies suitable for the task. We tested our choices using simulated data drawn from a gene simulator, which has been used in the literature for benchmarking [Yu et al. (2002)]. We assessed the inferred models against this reference, calculating the performance indicators used for network reconstruction. The performance of the different evolutionary algorithms was compared against the traditional search algorithms used so far (greedy search and simulated annealing). We identified as the best candidate an evolutionary approach enhanced by two-point crossover and roulette-wheel selection for learning gene regulatory networks with BNs. We show that this approach outperforms classical structure learning methods in elucidating the original model of the simulated dataset. Finally, we tested the GA approach on a real dataset, where it reaches 62% recovered connections (sensitivity) and 64% direct connections (precision), outperforming the other algorithms.
7

Soumplis, P., G. Kontos, P. Kokkinos, et al. "Performance Optimization Across the Edge-Cloud Continuum: A Multi-agent Rollout Approach for Cloud-Native Application Workload Placement." SN Computer Science 5, no. 3 (2024): 318. https://doi.org/10.1007/s42979-024-02630-w.

Abstract:
The advancements in virtualization technologies and distributed computing infrastructures have sparked the development of cloud-native applications. This is grounded in the breakdown of a monolithic application into smaller, loosely connected components, often referred to as microservices, enabling enhancements in the application's performance, flexibility, and resilience, along with better resource utilization. When optimizing the performance of cloud-native applications, specific demands arise in terms of application latency and communication delays between microservices that are not taken into consideration by generic orchestration algorithms. In this work, we propose mechanisms for automating the allocation of computing resources to optimize the service delivery of cloud-native applications over the edge-cloud continuum. We initially introduce the problem's Mixed Integer Linear Programming (MILP) formulation. Given the potentially overwhelming execution time for real-sized problems, we propose a greedy algorithm, which allocates resources sequentially in a best-fit manner. To further improve the performance, we introduce a multi-agent rollout mechanism that evaluates the immediate effect of decisions but also leverages the underlying greedy heuristic to simulate the decisions anticipated from other agents, encapsulating this in a Reinforcement Learning framework. This approach allows us to effectively manage the performance–execution time trade-off and enhance performance by controlling the exploration of the Rollout mechanism. This flexibility ensures that the system remains adaptive to varied scenarios, making the most of the available computational resources while still ensuring high-quality decisions. © The Author(s) 2024.
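The greedy baseline mentioned above can be pictured with a short sketch: microservices are placed one at a time on the node that satisfies the capacity and latency constraints while leaving the least spare CPU. The node names, CPU units, and latency budget below are assumptions for illustration and not the paper's model; the rollout mechanism built on top of the heuristic is omitted.

```python
def greedy_best_fit_placement(microservices, nodes, latency_budget_ms):
    """Sequentially place microservices on edge/cloud nodes in a best-fit manner.

    microservices: list of (name, cpu_demand) pairs
    nodes: dict name -> {"cpu": free_cpu, "latency_ms": latency to the user}
    A node is feasible if it has enough CPU and meets the latency budget;
    among feasible nodes, the one with the least leftover CPU is chosen.
    """
    placement = {}
    for name, cpu in microservices:
        feasible = [(node["cpu"] - cpu, node_id)
                    for node_id, node in nodes.items()
                    if node["cpu"] >= cpu and node["latency_ms"] <= latency_budget_ms]
        if not feasible:
            placement[name] = None
            continue
        _, node_id = min(feasible)
        nodes[node_id]["cpu"] -= cpu
        placement[name] = node_id
    return placement


if __name__ == "__main__":
    nodes = {"edge-1": {"cpu": 4, "latency_ms": 5},
             "edge-2": {"cpu": 8, "latency_ms": 10},
             "cloud": {"cpu": 64, "latency_ms": 40}}
    services = [("frontend", 2), ("api", 4), ("analytics", 16)]
    print(greedy_best_fit_placement(services, nodes, latency_budget_ms=50))
```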
8

Wang, Jing, Kim Anh Do, Sijin Wen, et al. "Merging Microarray Data, Robust Feature Selection, and Predicting Prognosis in Prostate Cancer." Cancer Informatics 2 (January 2006): 117693510600200. http://dx.doi.org/10.1177/117693510600200009.

Abstract:
Motivation: Individual microarray studies searching for prognostic biomarkers often have few samples and low statistical power; however, publicly accessible data sets make it possible to combine data across studies. Method: We present a novel approach for combining microarray data across institutions and platforms. We introduce a new algorithm, robust greedy feature selection (RGFS), to select predictive genes. Results: We combined two prostate cancer microarray data sets, confirmed the appropriateness of the approach with the Kolmogorov-Smirnov goodness-of-fit test, and built several predictive models. The best logistic regression model with stepwise forward selection used 7 genes and had a misclassification rate of 31%. Models that combined LDA with different feature selection algorithms had misclassification rates between 19% and 33%, and the sets of genes in the models varied substantially during cross-validation. When we combined RGFS with LDA, the best model used two genes and had a misclassification rate of 15%. Availability: Affymetrix U95Av2 array data are available at http://www.broad.mit.edu/cgi-bin/cancer/datasets.cgi. The cDNA microarray data are available through the Stanford Microarray Database (http://cmgm.stanford.edu/pbrown/). GeneLink software is freely available at http://bioinformatics.mdanderson.org/GeneLink/. DNA-Chip Analyzer software is publicly available at http://biosun1.harvard.edu/complab/dchip/.
9

Sulaiman, Adel, Marium Sadiq, Yasir Mehmood, Muhammad Akram, and Ghassan Ahmed Ali. "Fitness-Based Acceleration Coefficients Binary Particle Swarm Optimization (FACBPSO) to Solve the Discounted Knapsack Problem." Symmetry 14, no. 6 (2022): 1208. http://dx.doi.org/10.3390/sym14061208.

Abstract:
The discounted {0-1} knapsack problem (D{0-1}KP) is a multi-constrained optimization and an extended form of the 0-1 knapsack problem. The DKP is composed of a set of item batches where each batch has three items and the objective is to maximize profit by selecting at most one item from each batch. Therefore, the D{0-1}KP is complex and has found many applications in real economic problems and other areas where the concept of promotional discounts exists. As the DKP is a binary problem, a novel binary particle swarm optimization variant with modifications is proposed in this paper. The acceleration coefficients are important parameters of the particle swarm optimization algorithm that keep the balance between exploration and exploitation. In conventional binary particle swarm optimization (BPSO), the acceleration coefficients of each particle remain the same across iterations, whereas in the proposed variant, fitness-based acceleration coefficients binary particle swarm optimization (FACBPSO), the values of the acceleration coefficients are based on the fitness of each particle. This modification forces the least-fit particles to move fast and the best-fit ones accordingly, which accelerates convergence and reduces computing time. Experiments were conducted on four instances of the DKP, with 10 datasets for each instance, and the results of FACBPSO were compared with conventional BPSO and the new exact algorithm using a greedy repair strategy. The results demonstrate that the proposed algorithm outperforms PSO-GRDKP and the new exact algorithm in solving four instances of D{0-1}KP, with improved convergence speed and feasible solution time.
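The mechanism the abstract hints at (acceleration coefficients tied to particle fitness) can be sketched as below. The linear mapping between assumed bounds c_min and c_max, the single shared coefficient per particle, and the standard sigmoid transfer are all assumptions for illustration; the abstract does not give the exact FACBPSO update rule.

```python
import math
import random


def facbpso_step(positions, velocities, pbest, gbest, fitness,
                 w=0.7, c_min=1.0, c_max=2.5):
    """One illustrative FACBPSO-style update for a binary swarm.

    positions/velocities/pbest: lists of per-particle vectors; gbest: best
    position found so far; fitness: per-particle fitness (higher is better).
    The least fit particles get the largest acceleration coefficient, so
    they move fastest, as the abstract describes.
    """
    f_lo, f_hi = min(fitness), max(fitness)
    span = (f_hi - f_lo) or 1.0
    for i, fit in enumerate(fitness):
        c_i = c_max - (c_max - c_min) * (fit - f_lo) / span  # worse fitness -> larger c_i
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c_i * r1 * (pbest[i][d] - positions[i][d])
                                + c_i * r2 * (gbest[d] - positions[i][d]))
            prob = 1.0 / (1.0 + math.exp(-velocities[i][d]))  # sigmoid transfer for binary PSO
            positions[i][d] = 1 if random.random() < prob else 0
    return positions, velocities
```

In a full solver this step would sit inside the usual PSO loop, with a knapsack-specific fitness function and a repair step for infeasible selections.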
10

Shreyas, J., N. V. Priya, P. K. Udayprasad, N. N. Srinidhi, Chouhan Dharmendra, and Kumar S. M. Dilip. "Opportunistic Routing for Large Scale IoT Network to Reduce Transmission Overhead." Journal of Advancement in Parallel Computing 5, no. 1 (2022): 1–8. https://doi.org/10.5281/zenodo.6379125.

Abstract:
The increasing popularity of sensor electronics has drawn much attention to wireless sensor technologies and created demand for many IoT (Internet of Things) applications in real-time and industrial settings. In the IoT, there is a proliferation of devices that can connect directly to the internet and be monitored. Sensed data from a device has to be forwarded to the base station or end user (EU), which is achieved by efficient routing protocols that improve data transmission for large-scale IoT. During the routing process, redundant data may be forwarded by nodes, causing more overhead, which may lead to congestion. There are many challenges, such as low-power links, multiple disjoint paths, and energy, in designing an efficient communication protocol. In this paper we propose an enhanced opportunistic routing (e-OR) protocol for self-disciplined and self-healing large-scale IoT devices. The enhanced opportunistic routing protocol uses a best-fit traversing algorithm to find optimal and reliable routes. e-OR estimates the link quality of nodes to avoid frequent disconnections. During the route discovery process, e-OR adopts greedy behaviour to find optimal and shortest routes. Further, we integrate congestion avoidance using clear channel assignment (CCA) for better channel availability to avoid packet loss and achieve QoS.
11

Kalopesa, Eleni, Konstantinos Karyotis, Nikolaos Tziolas, Nikolaos Tsakiridis, Nikiforos Samarinas, and George Zalidis. "Estimation of Sugar Content in Wine Grapes via In Situ VNIR–SWIR Point Spectroscopy Using Explainable Artificial Intelligence Techniques." Sensors 23, no. 3 (2023): 1065. http://dx.doi.org/10.3390/s23031065.

Abstract:
Spectroscopy is a widely used technique that can contribute to food quality assessment in a simple and inexpensive way. Especially in grape production, the visible and near infrared (VNIR) and the short-wave infrared (SWIR) regions are of great interest, and they may be utilized for both fruit monitoring and quality control at all stages of maturity. The aim of this work was the quantitative estimation of the wine grape ripeness, for four different grape varieties, by using a highly accurate contact probe spectrometer that covers the entire VNIR–SWIR spectrum (350–2500 nm). The four varieties under examination were Chardonnay, Malagouzia, Sauvignon-Blanc, and Syrah and all the samples were collected over the 2020 and 2021 harvest and pre-harvest phenological stages (corresponding to stages 81 through 89 of the BBCH scale) from the vineyard of Ktima Gerovassiliou located in Northern Greece. All measurements were performed in situ and a refractometer was used to measure the total soluble solids content (°Brix) of the grapes, providing the ground truth data. After the development of the grape spectra library, four different machine learning algorithms, namely Partial Least Squares regression (PLS), Random Forest regression, Support Vector Regression (SVR), and Convolutional Neural Networks (CNN), coupled with several pre-treatment methods were applied for the prediction of the °Brix content from the VNIR–SWIR hyperspectral data. The performance of the different models was evaluated using a cross-validation strategy with three metrics, namely the coefficient of determination (R2), the root mean square error (RMSE), and the ratio of performance to interquartile distance (RPIQ). High accuracy was achieved for Malagouzia, Sauvignon-Blanc, and Syrah from the best models developed using the CNN learning algorithm (R2 > 0.8, RPIQ ≥ 4), while a good fit was attained for the Chardonnay variety from SVR (R2 = 0.63, RMSE = 2.10, RPIQ = 2.24), proving that by using a portable spectrometer the in situ estimation of the wine grape maturity could be provided. The proposed methodology could be a valuable tool for wine producers making real-time decisions on harvest time in a non-destructive way.
12

Poufinas, Thomas, Periklis Gogas, Theophilos Papadimitriou, and Emmanouil Zaganidis. "Machine Learning in Forecasting Motor Insurance Claims." Risks 11, no. 9 (2023): 164. http://dx.doi.org/10.3390/risks11090164.

Abstract:
Accurate forecasting of insurance claims is of the utmost importance for insurance activity, as the evolution of claims determines cash outflows and the pricing, and thus the profitability, of the underlying insurance coverage. These are used as inputs when the insurance company drafts its business plan and determines its risk appetite, and the respective solvency capital required (by the regulators) to absorb the assumed risks. The conventional claim forecasting methods attempt to fit (each of) the claims frequency and severity with a known probability distribution function and use it to project future claims. This study offers a fresh approach to insurance claims forecasting. First, we introduce two novel sets of variables, i.e., weather conditions and car sales, and second, we employ a battery of Machine Learning (ML) algorithms (Support Vector Machines—SVM, Decision Trees, Random Forests, and Boosting) to forecast the average (mean) insurance claim per insured car per quarter. Finally, we identify the variables that are the most influential in forecasting insurance claims. Our dataset comes from the motor portfolio of an insurance company operating in Athens, Greece and spans a period from 2008 to 2020. We found evidence that the three most informative variables pertain to the new car sales with a 3-quarter and 1-quarter lag and the minimum temperature of Elefsina (one of the weather stations in Athens) with a 3-quarter lag. Among the models tested, Random Forest with limited depth and XGBoost, run on the 15 most informative variables, exhibited the best performance. These findings can be useful in the hands of insurers, as they can include weather conditions and new car sales among the parameters considered when forecasting claims.
13

Tranter, Morgan, Svenja Steding, Christopher Otto, et al. "Environmental hazard quantification toolkit based on modular numerical simulations." Advances in Geosciences 58 (November 22, 2022): 67–76. http://dx.doi.org/10.5194/adgeo-58-67-2022.

Abstract:
Quantifying impacts on the environment and human health is a critical requirement for geological subsurface utilisation projects. In practice, an easily accessible interface for operators and regulators is needed so that risks can be monitored, managed, and mitigated. The primary goal of this work was to create an environmental hazards quantification toolkit as part of a risk assessment for in-situ coal conversion at two European study areas: the Kardia lignite mine in Greece and the Máza-Váralja hard coal deposit in Hungary, with complex geological settings. A substantial rock volume is extracted during this operation, and a contaminant pool is potentially left behind, which may put the freshwater aquifers and existing infrastructure at the surface at risk. The data-driven, predictive tool is outlined in this paper, using the Kardia contaminant transport model as an example. Three input parameters were varied in a previous scenario analysis: the hydraulic conductivity, as well as the solute dispersivity and retardation coefficient. Numerical models are computationally intensive, so the number of simulations that can be performed for scenario analyses is limited. The presented approach overcomes these limitations by instead using surrogate models to determine the probability and severity of each hazard. Different surrogates based on look-up tables or machine learning algorithms were tested for their simplicity, goodness of fit, and efficiency. The best performing surrogate was then used to develop an interactive dashboard for visualising the hazard probability distributions. The machine learning surrogates performed best on the data, with coefficients of determination R2 > 0.98, and were able to make the predictions quasi-instantaneously. The retardation coefficient was identified as the most influential parameter, which was also visualised using the toolkit dashboard. It showed that the median values for the contaminant concentrations in the nearby aquifer varied by five orders of magnitude depending on whether the lower or upper retardation range was chosen. The flexibility of this approach to update parameter uncertainties as needed can significantly increase the quality of predictions and the value of risk assessments. In principle, this newly developed tool can be used as a basis for similar hazard quantification activities.
14

Paraskeva, E., A. Z. Bonanos, A. Liakos, Z. T. Spetsieri, and J. R. Maund. "First systematic high-precision survey of bright supernovae." Astronomy & Astrophysics 643 (October 27, 2020): A35. http://dx.doi.org/10.1051/0004-6361/202037664.

Abstract:
Rapid variability before and near the maximum brightness of supernovae has the potential to provide a better understanding of nearly every aspect of supernovae, from the physics of the explosion up to their progenitors and the circumstellar environment. Thanks to modern time-domain optical surveys, which are discovering supernovae in the early stage of their evolution, we have the unique opportunity to capture their intraday behavior before maximum. We present high-cadence photometric monitoring (on the order of seconds to minutes) of the optical light curves of three Type Ia and two Type II SNe over several nights before and near maximum light, using the fast imagers available on the 2.3 m Aristarchos telescope at Helmos Observatory and the 1.2 m telescope at Kryoneri Observatory in Greece. We applied differential aperture photometry techniques using optimal apertures and we present reconstructed light curves after implementing a seeing correction and the Trend Filtering Algorithm (TFA, Kovács et al. 2005, MNRAS, 356, 557). TFA yielded the best results, achieving a typical precision between 0.01 and 0.04 mag. We did not detect significant bumps with amplitudes greater than 0.05 mag in any of the SNe targets in the VR-, R-, and I-band light curves obtained. We measured the intraday slope for each light curve, which ranges between −0.37 and 0.36 mag day−1 in broadband VR, between −0.19 and 0.31 mag day−1 in R band, and between −0.13 and 0.10 mag day−1 in I band. We used SNe light curve fitting templates for SN 2018gv, SN 2018hgc and SN 2018hhn to photometrically classify the light curves and to calculate the time of maximum. We provide values for the time of maximum of SN 2018zd, after applying a low-order polynomial fit, and of SN 2018hhn for the first time. We conclude that optimal aperture photometry in combination with TFA provides the highest-precision light curves for SNe that are relatively well separated from the centers of their host galaxies. This work aims to inspire the use of ground-based, high-cadence and high-precision photometry to study SNe with the purpose of revealing clues and properties of the explosion environment of both core-collapse and Type Ia supernovae, the explosion mechanisms, binary star interaction and progenitor channels. We suggest monitoring early supernovae light curves in hotter (bluer) bands with a cadence of hours as a promising way of investigating the post-explosion photometric behavior of the progenitor stars.
15

Coşar, Batuhan Mustafa, Bilge Say, and Tansel Dökeroğlu. "A New Greedy Algorithm for the Curriculum-based Course Timetabling Problem." Düzce Üniversitesi Bilim ve Teknoloji Dergisi, April 30, 2023. http://dx.doi.org/10.29130/dubited.1113519.

Abstract:
This study describes a novel greedy algorithm for optimizing the well-known Curriculum-Based Course Timetabling (CB-CTT) problem. Greedy algorithms are a good alternative to brute-force and evolutionary algorithms, which take a long time to execute in order to find the best solution. Rather than employing a single heuristic, as many greedy algorithms do, we define and apply 120 new heuristics to the same problem instance. To assign courses to available rooms, our proposed greedy algorithm employs the Largest-First, Smallest-First, Best-Fit, Average-weight first, and Highest Unavailable course-first heuristics. Extensive experiments are carried out on 21 problem instances from the benchmark set of the Second International Timetabling Competition (ITC-2007). For 18 problems with significantly reduced soft-constraint values, the proposed greedy algorithm can report zero hard constraint violations (feasible solutions). The proposed algorithm outperforms state-of-the-art greedy heuristics in terms of performance.
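As an illustration of the Best-Fit flavour of heuristic named above, the toy sketch below assigns each course to the tightest room that can seat it and to the earliest free period in that room. Curricula, teachers, and the ITC-2007 soft constraints are deliberately left out, so this is a caricature of one ingredient of the approach, not the authors' algorithm.

```python
def best_fit_room_assignment(courses, rooms, num_timeslots):
    """Toy best-fit assignment of courses to rooms and timeslots.

    courses: list of (course_id, enrolled_students)
    rooms:   list of (room_id, capacity)
    Each course gets the smallest room that still seats its students and
    has a free timeslot; None means the course could not be placed.
    """
    free = {room_id: set(range(num_timeslots)) for room_id, _ in rooms}
    schedule = {}
    for course_id, enrolled in sorted(courses, key=lambda c: c[1], reverse=True):
        candidates = [(cap, rid) for rid, cap in rooms if cap >= enrolled and free[rid]]
        if not candidates:
            schedule[course_id] = None           # would count as a hard-constraint violation
            continue
        _, rid = min(candidates)                 # tightest-fitting room
        slot = min(free[rid])                    # earliest free period in that room
        free[rid].discard(slot)
        schedule[course_id] = (rid, slot)
    return schedule


if __name__ == "__main__":
    rooms = [("A", 120), ("B", 60), ("C", 30)]
    courses = [("math101", 55), ("phys201", 100), ("poetry", 25)]
    print(best_fit_room_assignment(courses, rooms, num_timeslots=4))
```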
16

Soumplis, Polyzois, Georgios Kontos, Panagiotis Kokkinos, et al. "Performance Optimization Across the Edge-Cloud Continuum: A Multi-agent Rollout Approach for Cloud-Native Application Workload Placement." SN Computer Science 5, no. 3 (2024). http://dx.doi.org/10.1007/s42979-024-02630-w.

Abstract:
The advancements in virtualization technologies and distributed computing infrastructures have sparked the development of cloud-native applications. This is grounded in the breakdown of a monolithic application into smaller, loosely connected components, often referred to as microservices, enabling enhancements in the application’s performance, flexibility, and resilience, along with better resource utilization. When optimizing the performance of cloud-native applications, specific demands arise in terms of application latency and communication delays between microservices that are not taken into consideration by generic orchestration algorithms. In this work, we propose mechanisms for automating the allocation of computing resources to optimize the service delivery of cloud-native applications over the edge-cloud continuum. We initially introduce the problem’s Mixed Integer Linear Programming (MILP) formulation. Given the potentially overwhelming execution time for real-sized problems, we propose a greedy algorithm, which allocates resources sequentially in a best-fit manner. To further improve the performance, we introduce a multi-agent rollout mechanism that evaluates the immediate effect of decisions but also leverages the underlying greedy heuristic to simulate the decisions anticipated from other agents, encapsulating this in a Reinforcement Learning framework. This approach allows us to effectively manage the performance–execution time trade-off and enhance performance by controlling the exploration of the Rollout mechanism. This flexibility ensures that the system remains adaptive to varied scenarios, making the most of the available computational resources while still ensuring high-quality decisions.
17

"Performance Evaluation of Contiguous and Noncontiguous Processor Allocation based on Common Communication Patterns for 2D Mesh Interconnection Network." International Journal of Cloud Applications and Computing 12, no. 1 (2022): 0. http://dx.doi.org/10.4018/ijcac.295239.

Abstract:
Several processor allocation studies show that the performance of noncontiguous allocation is dramatically better than that of contiguous allocation, but this is not always true. The communication pattern may have a great effect on the performance of processor allocation algorithms. In this paper, the performance of well-known allocation algorithms is re-considered based on several communication patterns, including Near Neighbor, Ring, All-to-All, Divide and Conquer Binomial Tree (DQBT), Fast Fourier Transform (FFT), One-to-All, All-to-One, and Random. The allocation algorithms investigated include the contiguous First Fit (FF) and Best Fit (BF) and the noncontiguous Paging(0), Greedy Available Busy List (GABL) and Multiple Buddy Strategy (MBS). In Near Neighbor, FFT and DQBT, the simulation results show that the performance of contiguous allocation is dramatically better than that of noncontiguous allocation in terms of response time, except for MBS in DQBT. In All-to-All, the results show that the performance of the contiguous FF and BF is better than that of the noncontiguous MBS.
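For readers unfamiliar with the two contiguous strategies compared above, the sketch below contrasts First Fit and Best Fit on a one-dimensional free list. Real 2D-mesh allocators search for free submeshes rather than runs of frames, so this is a deliberate simplification, and the block sizes are made up.

```python
def first_fit(free_blocks, request):
    """Return the start of the first free run large enough for the request."""
    for start, length in free_blocks:
        if length >= request:
            return start
    return None


def best_fit(free_blocks, request):
    """Return the start of the tightest free run that still fits the request."""
    candidates = [(length, start) for start, length in free_blocks if length >= request]
    return min(candidates)[1] if candidates else None


if __name__ == "__main__":
    # Free runs of processors as (start, length), a 1D stand-in for a 2D mesh.
    free_blocks = [(0, 3), (5, 6), (12, 4)]
    print(first_fit(free_blocks, 4))  # -> 5  (first run that is large enough)
    print(best_fit(free_blocks, 4))   # -> 12 (smallest run that still fits)
```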
18

Kalopesa, Eleni, Konstantinos Karyotis, Nikolaos Tziolas, Nikolaos Tsakiridis, Nikiforos Samarinas, and George Zalidis. "Estimation of Sugar Content in Wine Grapes via In Situ VNIR–SWIR Point Spectroscopy Using Explainable Artificial Intelligence Techniques." January 17, 2023. https://doi.org/10.3390/s23031065.

Abstract:
Spectroscopy is a widely used technique that can contribute to food quality assessment in a simple and inexpensive way. Especially in grape production, the visible and near infrared (VNIR) and the short-wave infrared (SWIR) regions are of great interest, and they may be utilized for both fruit monitoring and quality control at all stages of maturity. The aim of this work was the quantitative estimation of the wine grape ripeness, for four different grape varieties, by using a highly accurate contact probe spectrometer that covers the entire VNIR–SWIR spectrum (350–2500 nm). The four varieties under examination were Chardonnay, Malagouzia, Sauvignon-Blanc, and Syrah and all the samples were collected over the 2020 and 2021 harvest and pre-harvest phenological stages (corresponding to stages 81 through 89 of the BBCH scale) from the vineyard of Ktima Gerovassiliou located in Northern Greece. All measurements were performed in situ and a refractometer was used to measure the total soluble solids content (°Brix) of the grapes, providing the ground truth data. After the development of the grape spectra library, four different machine learning algorithms, namely Partial Least Squares regression (PLS), Random Forest regression, Support Vector Regression (SVR), and Convolutional Neural Networks (CNN), coupled with several pre-treatment methods were applied for the prediction of the °Brix content from the VNIR–SWIR hyperspectral data. The performance of the different models was evaluated using a cross-validation strategy with three metrics, namely the coefficient of determination (R2), the root mean square error (RMSE), and the ratio of performance to interquartile distance (RPIQ). High accuracy was achieved for Malagouzia, Sauvignon-Blanc, and Syrah from the best models developed using the CNN learning algorithm, while a good fit was attained for the Chardonnay variety from SVR, proving that by using a portable spectrometer the in situ estimation of the wine grape maturity could be provided. The proposed methodology could be a valuable tool for wine producers making real-time decisions on harvest time in a non-destructive way.
