
Journal articles on the topic 'Tree-based optimization tool'


Consult the top 50 journal articles for your research on the topic 'Tree-based optimization tool.'


1

Soukaina, Mihi, Ait Ben Ali Brahim, El Bazi Ismail, Arezki Sara, and Laachfoubi Nabil. "Dialectal Arabic sentiment analysis based on tree-based pipeline optimization tool." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 4 (2022): 4195–205. https://doi.org/10.11591/ijece.v12i4.pp4195-4205.

Abstract:
The heavy involvement of Arabic internet users has resulted in a growing volume of data written in the Arabic language, creating a vast research area for natural language processing (NLP). Sentiment analysis is a growing field of research of great importance to anyone interested in its high potential for decision-making and for predicting upcoming actions from the texts produced on social networks. The Arabic used on microblogging websites, especially Twitter, is highly informal: it complies with neither standards nor spelling regulations, which makes it quite challenging for automatic machine-learning techniques. In this paper’s scope, we propose a new approach based on AutoML methods to improve the efficiency of the sentiment classification process for dialectal Arabic. This approach was validated through benchmark testing on three different datasets that represent three vernacular forms of Arabic. The obtained results show that the presented framework achieves significantly higher accuracy than similar works in the literature.
2

Fati, Suliman Mohamed, Amgad Muneer, Nur Arifin Akbar, and Shakirah Mohd Taib. "A Continuous Cuffless Blood Pressure Estimation Using Tree-Based Pipeline Optimization Tool." Symmetry 13, no. 4 (2021): 686. http://dx.doi.org/10.3390/sym13040686.

Abstract:
High blood pressure (BP) may lead to further health complications if not monitored and controlled, especially for critically ill patients. There are two types of blood pressure monitoring: invasive measurement, in which a central line is inserted into the patient’s body, carries infection risks; the second, cuff-based measurement, monitors BP by detecting the blood-volume change at the skin surface using a pulse oximeter or wearable devices such as a smartwatch. This paper aims to estimate blood pressure with machine learning from photoplethysmogram (PPG) signals obtained from cuff-based monitoring. To avoid issues associated with machine learning, such as improperly chosen classifiers or poorly selected features, this paper utilizes the tree-based pipeline optimization tool (TPOT) to automate the machine learning pipeline and select the best regression models for estimating systolic BP (SBP) and diastolic BP (DBP) separately. As a pre-processing stage, a notch filter, a band-pass filter, and zero-phase filtering were applied by TPOT to eliminate any potential noise inherent in the signal. Then, automated feature selection was performed to select the best features for estimating BP; the SBP and DBP features are extracted using random forest (RF) and k-nearest neighbors (KNN), respectively. To train and test the model, the PhysioNet global dataset was used, which contains 32.061 million samples for 1000 subjects. Finally, the proposed approach was evaluated and validated using the mean absolute error (MAE). The results obtained were 6.52 mmHg for SBP and 4.19 mmHg for DBP, which show the superiority of the proposed model over related works.
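The MAE criterion used for evaluation above is straightforward to state; a minimal standard-library sketch with hypothetical readings (the values below are illustrative, not from the paper's dataset):

```python
def mean_absolute_error(y_true, y_pred):
    """Mean of the absolute differences between true and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical systolic BP readings (mmHg) vs. model estimates.
sbp_true = [120.0, 135.0, 110.0, 145.0]
sbp_pred = [124.0, 130.0, 113.0, 140.0]
print(mean_absolute_error(sbp_true, sbp_pred))  # 4.25
```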
3

Kuk, Edyta, Jerzy Stopa, Michał Kuk, Damian Janiga, and Paweł Wojnarowski. "Petroleum Reservoir Control Optimization with the Use of the Auto-Adaptive Decision Trees." Energies 14, no. 18 (2021): 5702. http://dx.doi.org/10.3390/en14185702.

Abstract:
The global increase in energy demand and the decreasing number of newly discovered hydrocarbon reservoirs, caused by the relatively low oil price, mean that it is crucial to exploit existing reservoirs as efficiently as possible. Optimization of reservoir control may increase the technical and economic efficiency of production. In this paper, a novel algorithm that automatically determines the intelligent control maximizing the NPV of a given production process was developed. The idea is to build an auto-adaptive parameterized decision tree that replaces the arbitrarily selected limit values for the selected attributes of the decision tree with parameters. To select the optimal values of the decision tree parameters, an AI-based optimization tool called SMAC (Sequential Model-based Algorithm Configuration) was used. In each iteration, the generated control sequence is introduced into the reservoir simulator to compute the NPV, which is then utilized by the SMAC tool to vary the limit values and generate a better control sequence, leading to an improved NPV. A new tool connecting the parameterized decision tree with the reservoir simulator and the optimization tool was developed. Its application to a simulation model of a real reservoir for which the CCS-EOR process was considered allowed oil production to be increased by 3.5% during the CO2-EOR phase while reducing the amount of carbon dioxide injected at that time by 16%. Hence, the created tool allowed revenue to be increased by 49%.
4

Arjun, Mantri. "Intelligent Automation of ETL Processes for LLM Deployment: A Comparative Study of Dataverse and TPOT." European Journal of Advances in Engineering and Technology 11, no. 4 (2024): 154–58. https://doi.org/10.5281/zenodo.12755714.

Abstract:
This paper presents Dataverse, a unified open-source Extract-Transform-Load (ETL) pipeline designed for large language models (LLMs). Additionally, it compares Dataverse with TPOT (Tree-based Pipeline Optimization Tool), an automated machine learning (AutoML) tool, to highlight their respective strengths and use cases. Dataverse aims to address the challenges associated with data processing at scale by providing a user-friendly and automated solution. TPOT, on the other hand, focuses on automating the machine learning pipeline optimization process. This paper discusses the architecture, features, and benefits of both tools, highlighting their roles in improving productivity and data quality in data-driven enterprises. It presents Dataverse's capabilities in data ingestion, cleaning, quality enhancement, distributed processing, and data quality control tailored for LLMs. The paper also explains TPOT's tree-based pipeline optimization approach using genetic programming for automated machine learning pipeline design. A comparative analysis is provided, highlighting the distinct use cases, automation approaches, customization capabilities, and scalability aspects of Dataverse and TPOT.
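The genetic-programming idea behind TPOT, described above, can be illustrated with a deliberately simplified, standard-library-only sketch: pipelines are (preprocessor, model) pairs, a fixed fitness table stands in for cross-validated scoring, and selection plus mutation evolve the population. All step names and scores here are hypothetical illustrations, not TPOT's actual operators or API.

```python
import random

# Hypothetical pipeline building blocks (illustrative names only).
PREPROCESSORS = ["scale", "pca", "select_k_best", "none"]
MODELS = ["random_forest", "knn", "gradient_boosting"]

# Stand-in fitness table instead of real cross-validated scoring.
SCORES = {
    ("pca", "random_forest"): 0.90,
    ("scale", "knn"): 0.85,
    ("select_k_best", "gradient_boosting"): 0.93,
}

def fitness(pipeline):
    """Score a (preprocessor, model) pair; unknown combinations get a baseline."""
    return SCORES.get(pipeline, 0.5)

def mutate(pipeline):
    """Randomly swap either the preprocessor or the model."""
    pre, model = pipeline
    if random.random() < 0.5:
        pre = random.choice(PREPROCESSORS)
    else:
        model = random.choice(MODELS)
    return (pre, model)

def evolve(generations=20, population_size=8, seed=0):
    random.seed(seed)
    population = [(random.choice(PREPROCESSORS), random.choice(MODELS))
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]        # selection
        offspring = [mutate(random.choice(survivors))        # variation
                     for _ in range(population_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
print(best)  # the highest-scoring (preprocessor, model) pair found
```

Real TPOT evolves full expression trees of scikit-learn operators and evaluates each candidate by cross-validation; this sketch only shows the select-mutate-rescore loop.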
5

Seda, Milos. "Steiner Tree Problem in Graphs and Mixed Integer Linear Programming-Based Approach in GAMS." WSEAS TRANSACTIONS ON COMPUTERS 21 (July 2, 2022): 257–62. http://dx.doi.org/10.37394/23205.2022.21.31.

Abstract:
The Steiner tree problem in graphs involves finding a minimum-cost tree that connects a defined subset of the vertices. This problem generalises the minimum spanning tree problem but, in contrast to it, is NP-complete and is usually solved for large instances by deterministic or stochastic heuristic methods and approximate algorithms. In this paper, however, we focus on a different approach, based on the formulation of a mixed integer programming model and its modification for solving in the professional optimization tool GAMS, which is now capable of solving even large instances of problems of exponential complexity.
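One common way to cast the problem as a mixed integer program is a multicommodity-flow formulation (a standard textbook variant, not necessarily the exact model used in the paper): pick a root terminal r and require one unit of flow from r to every other terminal t, allowing flow only on purchased edges.

```latex
\begin{align*}
\min \quad & \sum_{(i,j)\in E} c_{ij}\, x_{ij} \\
\text{s.t.} \quad
& \sum_{j:(j,i)\in E} f^{t}_{ji} - \sum_{j:(i,j)\in E} f^{t}_{ij}
  = \begin{cases} -1 & i = r \\ \phantom{-}1 & i = t \\ \phantom{-}0 & \text{otherwise} \end{cases}
  && \forall\, t \in T\setminus\{r\},\ \forall\, i \in V, \\
& 0 \le f^{t}_{ij} \le x_{ij} && \forall\, t \in T\setminus\{r\},\ \forall\,(i,j)\in E, \\
& x_{ij} \in \{0,1\} && \forall\,(i,j)\in E.
\end{align*}
```

Here x_{ij} buys edge (i,j) at cost c_{ij}, and the flow variables f^t force the purchased edges to connect r to every terminal; the x-support of an optimal solution is a minimum-cost Steiner tree.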
6

Zhao, Jianyin, Liuying Ma, Yuan Sun, Xin Shan, and Ying Liu. "Optimization of Leakage Risk and Maintenance Cost for a Subsea Production System Based on Uncertain Fault Tree." Axioms 12, no. 2 (2023): 194. http://dx.doi.org/10.3390/axioms12020194.

Abstract:
Traditional fault tree analysis is an effective tool for evaluating system risk when the required data are sufficient. Unfortunately, the operation and maintenance data of some complex systems are difficult to obtain for economic or technical reasons. The solution is to invite experts to evaluate critical aspects of the system's performance. In this study, the belief degrees of the occurrence of basic events evaluated by experts are measured by an uncertain measure. Then, a system risk assessment method based on an uncertain fault tree is proposed, on the basis of which two general optimization models are established. Furthermore, the genetic algorithm (GA) and the nondominated sorting genetic algorithm II (NSGA-II) are applied to solve the two optimization models separately. In addition, the proposed risk assessment method is applied to the leakage risk evaluation of a subsea production system, and the two general optimization models are used to optimize the leakage risk and maintenance cost of the subsea production system. The optimization results provide a theoretical basis for practitioners to guarantee the safety of the subsea production system.
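A single-objective genetic algorithm of the kind applied here can be sketched in a few lines of standard-library Python; the quadratic objective below is a hypothetical stand-in for the paper's uncertain-fault-tree cost model, not the actual model.

```python
import random

def cost(x):
    # Hypothetical stand-in objective: maintenance cost with a risk penalty.
    return (x - 3.7) ** 2 + 1.0

def genetic_minimize(generations=50, population_size=20, seed=1):
    random.seed(seed)
    population = [random.uniform(0, 10) for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=cost)                    # rank by fitness
        parents = population[:population_size // 2]  # elitist selection
        children = []
        while len(parents) + len(children) < population_size:
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0, 0.1)  # crossover + mutation
            children.append(child)
        population = parents + children
    return min(population, key=cost)

best = genetic_minimize()
print(round(best, 2))  # close to the minimizer at x = 3.7
```

NSGA-II extends this scheme to several objectives at once (e.g. risk and cost) by ranking candidates with nondominated sorting instead of a single cost value.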
7

Mikita, Tomáš, and Petr Balogh. "Usage of Geoprocessing Services in Precision Forestry for Wood Volume Calculation and Wind Risk Assessment." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 63, no. 3 (2015): 793–801. http://dx.doi.org/10.11118/actaun201563030793.

Abstract:
This paper outlines the idea of a precision forestry tool for optimizing clearcut size and shape within the process of forest recovery, published in the form of a web processing service for forest owners on the Internet. The designed tool, titled COWRAS (Clearcut Optimization and Wind Risk Assessment), is developed for optimization of clearcuts (their location, shape, size, and orientation) with subsequent wind risk assessment. The tool primarily works with airborne LiDAR data previously processed into a digital surface model (DSM) and a digital elevation model (DEM). In the first step, the growing stock on the planned clearcut, determined by its location and area in a feature class, is calculated by the method of individual tree detection. Subsequently, tree heights are extracted from the canopy height model (CHM), and then diameters at breast height (DBH) and wood volume are calculated using regressions. Information about the wood volume of each tree in the clearcut is exported and summarized in a table. In the next step, all trees in the clearcut are removed and a new DSM without trees in the clearcut is generated. This canopy model subsequently serves as an input for the evaluation of wind damage risk by the MAXTOPEX tool (Mikita et al., 2012). In the final raster, the predisposition of uncovered forest stand edges (around the clearcut) to wind risk is calculated based on this analysis. The entire tool works in the background of ArcGIS Server as a spatial decision support system for foresters.
8

Morel, Benoit, Alexey M. Kozlov, Alexandros Stamatakis, and Gergely J. Szöllősi. "GeneRax: A Tool for Species-Tree-Aware Maximum Likelihood-Based Gene Family Tree Inference under Gene Duplication, Transfer, and Loss." Molecular Biology and Evolution 37, no. 9 (2020): 2763–74. http://dx.doi.org/10.1093/molbev/msaa141.

Abstract:
Inferring phylogenetic trees for individual homologous gene families is difficult because alignments are often too short, and thus contain insufficient signal, while substitution models inevitably fail to capture the complexity of the evolutionary processes. To overcome these challenges, species-tree-aware methods also leverage information from a putative species tree. However, only a few methods are available that implement a full likelihood framework or account for horizontal gene transfers. Furthermore, these methods often require expensive data preprocessing (e.g., computing bootstrap trees) and rely on approximations and heuristics that limit the degree of tree space exploration. Here, we present GeneRax, the first maximum likelihood species-tree-aware phylogenetic inference software. It simultaneously accounts for substitutions at the sequence level as well as gene-level events, such as duplication, transfer, and loss, relying on established maximum likelihood optimization algorithms. GeneRax can infer rooted phylogenetic trees for multiple gene families, directly from the per-gene sequence alignments and a rooted, yet undated, species tree. We show that, compared with competing tools on simulated data, GeneRax infers trees that are the closest to the true tree in 90% of the simulations in terms of relative Robinson–Foulds distance. On empirical data sets, GeneRax is the fastest among all tested methods when starting from aligned sequences, and it infers trees with the highest likelihood score, based on our model. GeneRax completed tree inferences and reconciliations for 1,099 Cyanobacteria families in 8 min on 512 CPU cores. Thus, its parallelization scheme enables large-scale analyses. GeneRax is available under the GNU GPL at https://github.com/BenoitMorel/GeneRax (last accessed June 17, 2020).
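The Robinson–Foulds distance used above counts the clades present in one tree but not the other. A minimal standard-library sketch for small rooted trees encoded as nested tuples (the unrooted and weighted variants used in practice need more machinery):

```python
def clades(tree):
    """Return (leaf set, set of internal-node leaf sets) of a nested-tuple tree."""
    if not isinstance(tree, tuple):            # a leaf label
        return frozenset([tree]), set()
    leaves, internal = frozenset(), set()
    for child in tree:
        child_leaves, child_internal = clades(child)
        leaves |= child_leaves
        internal |= child_internal
    internal.add(leaves)                       # this node's clade
    return leaves, internal

def rf_distance(t1, t2):
    """Symmetric-difference (Robinson-Foulds) distance between two rooted trees."""
    _, c1 = clades(t1)
    _, c2 = clades(t2)
    return len(c1 ^ c2)

t1 = (("A", "B"), ("C", "D"))
t2 = (("A", "C"), ("B", "D"))
print(rf_distance(t1, t2))  # 4: each tree has two clades the other lacks
```

The "relative" distance reported in the abstract divides this count by the maximum possible number of differing clades, so 0 means identical topologies and 1 means none shared.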
9

Uthayasuriyan, Agash, Ugochukwu Ilozurike Duru, Angela Nwachukwu, Thangavelu Shunmugasundaram, and Jeyakumar Gurusamy. "FLOW PATTERN PREDICTION IN HORIZONTAL AND INCLINED PIPES USING TREE-BASED AUTOMATED MACHINE LEARNING." Rudarsko-geološko-naftni zbornik 39, no. 4 (2024): 153–66. http://dx.doi.org/10.17794/rgn.2024.4.12.

Abstract:
In the oil and gas industry, understanding two-phase (gas-liquid) flow is pivotal, as it directly influences equipment design, quality control, and operational efficiency. Flow pattern determination is thus fundamental to industrial engineering and management. This study utilizes the Tree-based Pipeline Optimization Tool (TPOT), an Automated Machine Learning (AutoML) framework that employs genetic programming to obtain the best machine learning model for a provided dataset. This paper presents the design of flow pattern prediction models using TPOT. TPOT was applied to predict flow patterns in 2.5 cm and 5.1 cm diameter pipes, using datasets from the existing literature. The datasets went through imbalanced-data handling, standardization, and one-hot encoding as data preparation techniques before being fed into TPOT. The models designed for the 2.5 cm and 5.1 cm datasets were named FPTL_TPOT_2.5 and FPTL_TPOT_5.1, respectively. A comparative analysis of these models alongside other standard supervised machine learning models and similar state-of-the-art two-phase flow prediction models was carried out, and insights on the performance of the TPOT-designed models were discussed. The results demonstrated that the models designed with TPOT achieve remarkable accuracy, scoring 97.66% and 98.09% for the 2.5 cm and 5.1 cm datasets, respectively. Furthermore, the FPTL_TPOT_2.5 and FPTL_TPOT_5.1 models outperformed their counterpart machine learning models in terms of performance, underscoring TPOT’s effectiveness in designing machine learning models for flow pattern prediction. The findings of this research carry significant implications for enhancing efficiency and optimizing industrial processes in the oil and gas sector.
10

Li, ZiZheng, LiuChen Dai, YiMing Wang, HanLin Qin, JinPing Zhang, and XinRan Yin. "Flight Technology Evaluation Based on Flight Parameters." Advances in Computer and Engineering Technology Research 1, no. 2 (2024): 79. http://dx.doi.org/10.61935/acetr.2.1.2024.p79.

Abstract:
Focusing on flight safety, this paper develops a flight technology evaluation method based on the gradient boosting decision tree (GBDT) model by collecting and analyzing flight data. The method comprehensively considers flight parameters and, through the analysis of flight records, provides a more accurate tool for assessing pilots' flight technology. It is expected to improve training plans and enhance the technical level of pilots, further improving the safety and sustainable development of air transportation. By introducing a deep learning network structure optimization evaluation method, flight safety is further enhanced.
11

Obreja, Claudiu, Gheorghe Stan, Lucian Adrian Mihaila, and Marius Pascu. "Application of Tree Graph Method for Reducing the Total Time of Tool Changing in Milling and Boring Machine Tools." Applied Mechanics and Materials 371 (August 2013): 431–35. http://dx.doi.org/10.4028/www.scientific.net/amm.371.431.

Abstract:
With a view to increasing productivity on CNC machine tools, one of the main solutions is to reduce, as much as possible, the auxiliary time consumed by the set-up and replacement of the tools and workpieces engaged in the machining process. Reducing the total time of the tool-changing process in the automatic tool changer can also be achieved by minimizing the number of movements needed for the actual exchange of the tool from the tool magazine to the machine spindle (the optimization of the tool-changing sequences). This paper presents a new design method based on tree-graph theory. We consider an existing automatic tool-changing system mounted on a milling and boring machining centre and, by applying the new method, obtain all possible configurations that minimize the tool-changing sequence of the automatic tool changer. The proposed method yields the tool-changing sequences with the minimum movements needed to exchange the tool. Reconfiguring an existing machine tool equipped with an automatic tool changer using the proposed method leads to the shortest changing time and thus high productivity.
12

Noor, Maher, and A. Yousif Suhad. "An automated machine learning model for diagnosing COVID-19 infection." International Journal of Artificial Intelligence (IJ-AI) 12, no. 3 (2023): 1360–69. https://doi.org/10.11591/ijai.v12.i3.pp1360-1369.

Abstract:
The coronavirus disease 2019 (COVID-19) epidemic still impacts every facet of life and necessitates a fast and accurate diagnosis. The need for an effective, rapid, and precise way to reduce radiologists' workload in diagnosing suspected cases has emerged. This study used the tree-based pipeline optimization tool (TPOT) and many machine learning (ML) algorithms. TPOT is an open-source genetic programming-based AutoML system that optimizes a set of feature preprocessors and ML models to maximize classification accuracy on a supervised classification problem. A series of trials and comparisons with the results of ML and earlier studies showed that most of the AutoML pipelines beat traditional ML in terms of accuracy. A blood test dataset with 111 variables and 5644 cases was used. In TPOT, 450 pipelines were evaluated, and the best pipeline selected consisted of a radial basis function (RBF) sampler preprocessor and a gradient boosting classifier, with a 99% accuracy rate.
13

Radzi, Siti Fairuz Mat, Muhammad Khalis Abdul Karim, M. Iqbal Saripan, Mohd Amiruddin Abd Rahman, Iza Nurzawani Che Isa, and Mohammad Johari Ibahim. "Hyperparameter Tuning and Pipeline Optimization via Grid Search Method and Tree-Based AutoML in Breast Cancer Prediction." Journal of Personalized Medicine 11, no. 10 (2021): 978. http://dx.doi.org/10.3390/jpm11100978.

Abstract:
Automated machine learning (AutoML) has been recognized as a powerful tool for building systems that automate the design and optimize the model selection of machine learning (ML) pipelines. In this study, we present the tree-based pipeline optimization tool (TPOT) as a method for determining ML models with significant performance and less complex breast cancer diagnostic pipelines. Pre-processors and ML models are defined as expression trees and optimized as genetic programming (GP) pipelines in a stochastic search. Radiomics features have been presented as a guide for ML pipeline selection from the breast cancer data set based on TPOT. Breast cancer data were used in a comparative analysis of the TPOT-generated ML pipelines with selected ML classifiers optimized by a grid search approach. The principal component analysis (PCA) random forest (RF) classification proved to be the most reliable pipeline with the lowest complexity. The TPOT model selection technique exceeded the performance of grid search (GS) optimization. The RF classifier showed an outstanding outcome amongst the models in combination with only two pre-processors, with a precision of 0.83. The grid search optimized support vector machine (SVM) classifier generated a difference of 12% in comparison, while the other two classifiers, naïve Bayes (NB) and artificial neural network–multilayer perceptron (ANN-MLP), generated a difference of almost 39%. The method's performance was assessed based on sensitivity, specificity, accuracy, precision, and receiver operating characteristic (ROC) analysis.
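The grid search baseline that TPOT is compared against above exhaustively scores every parameter combination. A minimal standard-library sketch with a hypothetical parameter grid and a stand-in scoring function (a real study would substitute cross-validated model training for `score`):

```python
from itertools import product

# Hypothetical parameter grid (illustrative names, not the study's actual grid).
grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, 10]}

def score(params):
    # Stand-in for cross-validated accuracy; peaks at n_estimators=200, max_depth=5.
    return 0.8 + 0.0005 * params["n_estimators"] - 0.01 * abs(params["max_depth"] - 5)

def grid_search(grid):
    """Evaluate every combination in the grid and return the best one."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

best_params, best_score = grid_search(grid)
print(best_params)  # {'n_estimators': 200, 'max_depth': 5}
```

The cost of this exhaustiveness is combinatorial growth in the number of evaluations, which is what motivates guided searches such as TPOT's genetic programming.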
14

Chen, Chao, Lv Zhou, Xuejian Li, et al. "Optimizing the Spatial Structure of Metasequoia Plantation Forest Based on UAV-LiDAR and Backpack-LiDAR." Remote Sensing 15, no. 16 (2023): 4090. http://dx.doi.org/10.3390/rs15164090.

Abstract:
Optimizing the spatial structure of forests is important for improving the quality of forest ecosystems. Light detection and ranging (LiDAR) could accurately extract forest spatial structural parameters, which has significant advantages in spatial optimization and resource monitoring. In this study, we used unmanned aerial vehicle LiDAR (UAV-LiDAR) and backpack-LiDAR to acquire point cloud data of Metasequoia plantation forests from different perspectives. Then the parameters, such as diameter at breast height and tree height, were extracted based on the point cloud data, while the accuracy was verified using ground-truth data. Finally, a single-tree-level thinning tool was developed to optimize the spatial structure of the stand based on multi-objective planning and the Monte Carlo algorithm. The results of the study showed that the accuracy of LiDAR-based extraction was (R2 = 0.96, RMSE = 3.09 cm) for diameter at breast height, and the accuracy of R2 and RMSE for tree height extraction were 0.85 and 0.92 m, respectively. Thinning improved stand objective function value Q by 25.40%, with the most significant improvement in competition index CI and openness K of 17.65% and 22.22%, respectively, compared to the pre-optimization period. The direct effects of each spatial structure parameter on the objective function values were ranked as follows: openness K (1.18) > aggregation index R (0.67) > competition index CI (0.42) > diameter at breast height size ratio U (0.06). Additionally, the indirect effects were ranked as follows: aggregation index R (0.86) > diameter at breast height size ratio U (0.48) > competition index CI (0.33). The study realized the optimization of stand spatial structure based on double LiDAR data, providing a new reference for forest management and structure optimization.
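A Monte Carlo search over thinning decisions, of the kind the study's single-tree thinning tool performs, can be sketched as repeated random proposals that are accepted only when the stand objective improves. The stand data and objective below are hypothetical stand-ins, not the paper's multi-objective model combining K, R, CI, and U.

```python
import random

# Toy stand: each tree gets a hypothetical competition index (illustrative values).
random.seed(42)
stand = {tree_id: random.uniform(0.2, 1.0) for tree_id in range(30)}

def objective(kept):
    """Stand-in objective Q: reward low mean competition, penalize heavy thinning."""
    if not kept:
        return float("-inf")
    mean_ci = sum(stand[t] for t in kept) / len(kept)
    return (1 - mean_ci) + 0.02 * len(kept)

def monte_carlo_thinning(iterations=2000, seed=7):
    random.seed(seed)
    kept = set(stand)
    best_q = objective(kept)
    for _ in range(iterations):
        candidate = set(kept)
        tree = random.choice(list(stand))
        if tree in candidate and len(candidate) > 1:
            candidate.remove(tree)      # propose removing this tree
        else:
            candidate.add(tree)         # propose restoring it
        q = objective(candidate)
        if q > best_q:                  # keep only improving proposals
            kept, best_q = candidate, q
    return kept, best_q

kept, q = monte_carlo_thinning()
print(len(kept), round(q, 3))
```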
15

Du, Shenglan, Roderik Lindenbergh, Hugo Ledoux, Jantien Stoter, and Liangliang Nan. "AdTree: Accurate, Detailed, and Automatic Modelling of Laser-Scanned Trees." Remote Sensing 11, no. 18 (2019): 2074. http://dx.doi.org/10.3390/rs11182074.

Abstract:
Laser scanning is an effective tool for acquiring geometric attributes of trees and vegetation, which lays a solid foundation for 3-dimensional tree modelling. Existing studies on tree modelling from laser scanning data are vast. However, some works cannot guarantee sufficient modelling accuracy, while some other works are mainly rule-based and therefore highly depend on user inputs. In this paper, we propose a novel method to accurately and automatically reconstruct detailed 3D tree models from laser scans. We first extract an initial tree skeleton from the input point cloud by establishing a minimum spanning tree using the Dijkstra shortest-path algorithm. Then, the initial tree skeleton is pruned by iteratively removing redundant components. After that, an optimization-based approach is performed to fit a sequence of cylinders to approximate the geometry of the tree branches. Experiments on various types of trees from different data sources demonstrate the effectiveness and robustness of our method. The overall fitting error (i.e., the distance between the input points and the output model) is less than 10 cm. The reconstructed tree models can be further applied in the precise estimation of tree attributes, urban landscape visualization, etc. The source code of this work is freely available at https://github.com/tudelft3d/adtree.
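The skeleton-extraction step described above builds a tree of shortest paths from a root point. A minimal Dijkstra sketch over a toy neighbour graph (a hypothetical stand-in for the point-cloud graph AdTree builds); the parent pointers returned form the initial skeleton tree.

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra from the root; parent pointers form the shortest-path tree."""
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent, dist

# Tiny hypothetical neighbour graph (edge weights = point-to-point distances).
graph = {
    "base": [("a", 1.0), ("b", 4.0)],
    "a": [("base", 1.0), ("b", 2.0), ("tip", 5.0)],
    "b": [("base", 4.0), ("a", 2.0), ("tip", 1.0)],
    "tip": [("a", 5.0), ("b", 1.0)],
}
parent, dist = shortest_path_tree(graph, "base")
print(parent["tip"], dist["tip"])  # b 4.0
```

In the full pipeline the resulting skeleton is then pruned of redundant branches and fitted with cylinders, as the abstract describes.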
16

Selvan Chenni Chetty, Thirumalai, Vadim Bolshev, Siva Shankar Subramanian, et al. "Optimized Hierarchical Tree Deep Convolutional Neural Network of a Tree-Based Workload Prediction Scheme for Enhancing Power Efficiency in Cloud Computing." Energies 16, no. 6 (2023): 2900. http://dx.doi.org/10.3390/en16062900.

Abstract:
Workload prediction is essential in cloud data centers (CDCs) for establishing scalability and resource elasticity. However, workload prediction accuracy in cloud data centers suffers from noise, redundancy, and low-performing prediction models. This paper designs a hierarchical tree-based deep convolutional neural network (T-CNN) model with sheep flock optimization (SFO) to enhance CDCs' power efficiency and workload prediction. The kernel method is used to preprocess historical information from the CDCs. Additionally, the T-CNN model weight parameters are optimized using SFO. The suggested TCNN-SFO technique successfully reduces excessive power consumption while correctly forecasting incoming demand. Further, the proposed model is assessed using two benchmark datasets: Saskatchewan HTTP traces and NASA. The developed model is executed in a Java tool. Compared with existing methods, the developed technique achieved higher accuracy by 20.75%, 19.06%, 29.09%, 23.8%, and 20.5%, as well as lower energy consumption by 20.84%, 18.03%, 28.64%, 30.72%, and 33.74% when validated on the Saskatchewan HTTP traces dataset. It has also achieved higher accuracy of 32.95%, 12.05%, 32.65%, and 26.54%.
17

Castelli, Mauro, Diego Costa Pinto, Saleh Shuqair, Davide Montali, and Leonardo Vanneschi. "The Benefits of Automated Machine Learning in Hospitality: A Step-By-Step Guide and AutoML Tool." Emerging Science Journal 6, no. 6 (2022): 1237–54. http://dx.doi.org/10.28991/esj-2022-06-06-02.

Abstract:
The manuscript presents a tool to estimate and predict data accuracy in hospitality by means of automated machine learning (AutoML). It uses the tree-based pipeline optimization tool (TPOT) as a methodological framework. TPOT is an AutoML framework based on genetic programming, and it is particularly useful for generating classification models, for regression analysis, and for determining the most accurate algorithms and hyperparameters in hospitality. To demonstrate the presented tool's real usefulness, we show that the TPOT findings provide further improvement, using a real-world dataset to convert key hospitality variables (customer satisfaction, loyalty) into revenue, with up to 93% prediction accuracy on unseen data.
18

Yu, Ke, Minguk Kim, and Jun Rim Choi. "Memory-Tree Based Design of Optical Character Recognition in FPGA." Electronics 12, no. 3 (2023): 754. http://dx.doi.org/10.3390/electronics12030754.

Abstract:
As one of the fields of Artificial Intelligence (AI), Optical Character Recognition (OCR) systems have wide application in both industrial production and daily life. Conventional OCR systems are commonly designed to perform data computation on microprocessors, so the performance of the processor determines the effectiveness of the computation. However, due to the "memory wall" problem and Von Neumann bottlenecks, the drawbacks of traditional processor-based computing for OCR systems are gradually becoming apparent. In this paper, an approach based on Memory-Centric Computing and a "Memory-Tree" algorithm is proposed to perform hardware optimization of traditional OCR systems. The proposed algorithm was first implemented in software using C/C++ and OpenCV to verify the feasibility of the idea, and then the RTL conversion of the algorithm was done using the Xilinx Vitis High-Level Synthesis (HLS) tool to implement the hardware. This work chose the Xilinx Alveo U50 FPGA accelerator for the hardware design; it can be connected to the x86 CPU of a PC via PCIe to form a heterogeneous computing system. The results of the hardware implementation show that the designed system can recognize English capital letters and numbers within 34.24 µs. The power consumption of the FPGA is 18.59 W, which saves 77.87% of energy compared to the 84 W of the PC processor.
19

Kán, Peter, Andrija Kurtic, Mohamed Radwan, and Jorge M. Loáiciga Rodríguez. "Automatic Interior Design in Augmented Reality Based on Hierarchical Tree of Procedural Rules." Electronics 10, no. 3 (2021): 245. http://dx.doi.org/10.3390/electronics10030245.

Abstract:
Augmented reality has high potential in interior design due to its capability of visualizing numerous prospective designs directly in a target room. In this paper, we present our research on the utilization of augmented reality for interactive and personalized furnishing. We propose a new algorithm for automated interior design which generates sensible and personalized furniture configurations. This algorithm is combined with a mobile augmented reality system to provide the user with an interactive interior design try-out tool. Personalized design is achieved via a recommender system which uses user preferences and room data as input. We conducted three user studies to explore different aspects of our research. The first study investigated user preference between augmented reality and on-screen visualization for interactive interior design. In the second user study, we studied user preference between our algorithm for automated interior design and an optimization-based algorithm. Finally, the third study evaluated the probability of sensible design generation by the compared algorithms. The main outcome of our research suggests that augmented reality is a viable technology for interactive home furnishing.
APA, Harvard, Vancouver, ISO, and other styles
20

Gupta, Deepti, B. Kezia Rani, Indu Verma, et al. "Metaheuristic Machine Learning Algorithms for Liver Disease Prediction." International Research Journal of Multidisciplinary Scope 05, no. 04 (2024): 651–60. http://dx.doi.org/10.47857/irjms.2024.v05i04.01204.

Full text
Abstract:
In machine learning, optimizing solutions is critical for improving performance. This study explores the use of metaheuristic algorithms to enhance key processes such as hyperparameter tuning, feature selection, and model optimization. Specifically, we integrate the Artificial Bee Colony (ABC) algorithm with Random Forest and Decision Tree models to improve the accuracy and efficiency of disease prediction. Machine learning has the potential to uncover complex patterns in medical data, offering transformative capabilities in disease diagnosis. However, selecting the optimal algorithm for model optimization presents a significant challenge. In this work, we employ Random Forest and Decision Tree models together with the ABC algorithm, which is based on the foraging behaviour of honeybees, to predict liver disease using a dataset from Indian medical records. Our experiments demonstrate that the Random Forest model achieves an accuracy of 85.12%, the Decision Tree model 76.89%, and the ABC algorithm 80.45%. These findings underscore the promise of metaheuristic approaches in machine learning, with the ABC algorithm proving to be a valuable tool for improving predictive accuracy. In conclusion, the integration of machine learning models with metaheuristic techniques such as the ABC algorithm represents a significant advancement in disease prediction, driving progress in data-driven healthcare.
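The honeybee-inspired search this abstract refers to can be illustrated with a minimal Artificial Bee Colony loop. This is a toy sketch on a continuous test function, not the authors' liver-disease pipeline; the function names, bounds, and parameter values here are illustrative assumptions.

```python
import random

def abc_minimize(f, bounds, n_bees=15, limit=5, iters=60, seed=0):
    """Minimal Artificial Bee Colony: greedy neighborhood search plus a
    scout phase that abandons exhausted food sources."""
    rng = random.Random(seed)
    dim = len(bounds)
    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [rand_point() for _ in range(n_bees)]
    fits = [f(x) for x in foods]
    trials = [0] * n_bees
    best = min(zip(fits, foods))
    for _ in range(iters):
        # employed/onlooker phases collapsed: perturb one random dimension
        # toward (or away from) another randomly chosen food source
        for i in range(n_bees):
            k = rng.randrange(n_bees)
            j = rng.randrange(dim)
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            lo, hi = bounds[j]
            cand[j] = max(lo, min(hi, cand[j]))   # keep inside the box
            fc = f(cand)
            if fc < fits[i]:                      # greedy selection
                foods[i], fits[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # scout phase: replace sources that stopped improving
        for i in range(n_bees):
            if trials[i] > limit:
                foods[i] = rand_point()
                fits[i] = f(foods[i])
                trials[i] = 0
        best = min(best, min(zip(fits, foods)))
    return best

# Toy objective: 3-dimensional sphere function
best_val, best_x = abc_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

In the paper's setting the objective would instead be a cross-validated model score over hyperparameter values rather than a closed-form test function.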
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Ranran, Jun Zhang, Yijun Lu, Shisong Ren, and Jiandong Huang. "Towards a Reliable Design of Geopolymer Concrete for Green Landscapes: A Comparative Study of Tree-Based and Regression-Based Models." Buildings 14, no. 3 (2024): 615. http://dx.doi.org/10.3390/buildings14030615.

Full text
Abstract:
The design of geopolymer concrete must meet more stringent requirements for the landscape, so understanding and designing geopolymer concrete with a higher compressive strength is challenging. In predicting the compressive strength of geopolymer concrete, machine learning models have the advantage of being more accurate and faster. However, only a single machine learning model is usually used at present; there are few applications of ensemble learning models, and a model optimization process is lacking. Therefore, this paper proposes using the Firefly Algorithm (AF) as an optimization tool to perform hyperparameter tuning on Logistic Regression (LR), Multiple Logistic Regression (MLR), decision tree (DT), and Random Forest (RF) models. At the same time, the reliability and efficiency of the four integrated learning models were analyzed. The models were used to analyze the factors influencing geopolymer concrete and to determine the strength of their influence. According to the experimental data, the RF-AF model had the lowest RMSE values: 4.0364 on the training set and 8.7202 on the test set, with R values of 0.9774 and 0.8915, respectively. Therefore, compared with the other three models, RF-AF has a stronger generalization ability and higher prediction accuracy. In addition, the molar concentration of NaOH was the most important influencing factor, and its influence was far greater than that of the other factors, including NaOH content. Therefore, more attention should be paid to NaOH molarity when designing geopolymer concrete.
APA, Harvard, Vancouver, ISO, and other styles
22

Dąbal, Agata, and Marcin Łyszczarz. "LCA analyses for roads and bridges as a tool for detailed and comprehensive environmental impact assessment." Budownictwo i Architektura 15, no. 1 (2016): 041–50. http://dx.doi.org/10.24358/bud-arch_16_151_04.

Full text
Abstract:
The article presents possibilities of using LCA analyses for roads and bridges, determining the core boundary conditions and the functional unit. Examples of characteristic process trees, fundamental for further analyses, were studied. Environmental impact assessment methods based on LCA analyses, illustrating the influence of roads and bridges on environmental elements such as the climate or the ozone layer, were presented. The influence of the boundary conditions and of the process tree expansion on the results obtained was described, with reference to the scale of the environmental impact. It was shown how the interpretation of the results makes design optimization possible in order to minimize the environmental impact.
APA, Harvard, Vancouver, ISO, and other styles
23

Cheng, Xiaoyu, Shanshan Liu, Wei He, et al. "A Model for Flywheel Fault Diagnosis Based on Fuzzy Fault Tree Analysis and Belief Rule Base." Machines 10, no. 2 (2022): 73. http://dx.doi.org/10.3390/machines10020073.

Full text
Abstract:
In the fault diagnosis of the flywheel system, the input information of the system is uncertain. This uncertainty is mainly caused by the interference of environmental factors and the limited cognitive ability of experts. The belief rule base (BRB) shows good ability in dealing with problems of information uncertainty and small sample data. However, the initialization of the BRB relies on expert knowledge, and it is difficult to obtain accurate knowledge of flywheel faults when constructing BRB models. Therefore, this paper proposes a new BRB model, called the FFBRB (fuzzy fault tree analysis and belief rule base), which can effectively solve the problems existing in the BRB. The FFBRB uses a Bayesian network as a bridge, uses a fuzzy fault tree analysis (FFTA) mechanism to build the BRB’s expert knowledge, uses evidential reasoning (ER) as its reasoning tool, and uses projection covariance matrix adaptation evolutionary strategies (P-CMA-ES) as its optimization algorithm. The feasibility and superiority of the proposed method are verified by an example of a flywheel friction torque fault tree.
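For readers unfamiliar with fault trees, the quantitative core of (non-fuzzy) fault tree analysis is propagating basic-event probabilities through AND/OR gates. The sketch below assumes independent basic events and uses invented probabilities; the paper's fuzzy and belief-rule-base machinery is considerably richer than this.

```python
def gate_or(probs):
    """P(at least one event occurs) under independence: 1 - prod(1 - p)."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

def gate_and(probs):
    """P(all events occur) under independence: prod(p)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical flywheel friction-torque tree: the top event occurs if
# (lubrication loss AND bearing wear) OR sensor drift.
p_top = gate_or([gate_and([0.05, 0.10]), 0.02])
```

With these invented inputs the top-event probability comes out just under 2.5%; a fuzzy FTA would replace the point probabilities with fuzzy numbers and propagate membership functions instead.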
APA, Harvard, Vancouver, ISO, and other styles
24

Rakholia, Shrey, Reuven Yosef, Neelesh Yadav, Laura Karimloo, Michaela Pleitner, and Ritvik Kothari. "TreeGrid: A Spatial Planning Tool Integrating Tree Species Traits for Biodiversity Enhancement in Urban Landscapes." Animals 15, no. 13 (2025): 1844. https://doi.org/10.3390/ani15131844.

Full text
Abstract:
Urbanization, habitat fragmentation, and intensifying urban heat island (UHI) effects accelerate biodiversity loss and diminish ecological resilience in cities, particularly in climate-vulnerable regions. To address these challenges, we developed TreeGrid, a functionality-based spatial tree planning tool designed specifically for urban settings in the Northern Plains of India. The tool integrates species trait datasets, ecological scoring metrics, and spatial simulations to optimize tree placement for enhanced ecosystem service delivery, biodiversity support, and urban cooling. Developed within an R Shiny framework, TreeGrid dynamically computes biodiversity indices, faunal diversity potential, canopy shading, carbon sequestration, and habitat connectivity while simulating localized reductions in land surface temperature (LST). Additionally, we trained a deep neural network (DNN) model using tool-generated data to predict bird habitat suitability across diverse urban contexts. The tool’s spatial optimization capabilities are also applicable to post-fire restoration planning in wildland–urban interfaces by guiding the selection of appropriate endemic species for revegetation. This integrated framework supports the development of scalable applications in other climate-impacted regions, highlighting the utility of participatory planning, predictive modeling, and ecosystem service assessments in designing biodiversity-inclusive and thermally resilient urban landscapes.
APA, Harvard, Vancouver, ISO, and other styles
25

Bordug, Aleksandr, and Aleksandr Zheleznyak. "Parametric optimization of a marine DC motor rotational speed automatic control system." Energy Safety and Energy Economy 4 (August 2021): 22–30. http://dx.doi.org/10.18635/2071-2219-2021-4-22-30.

Full text
Abstract:
DC motor speed control can be done either manually or by using an automatic control tool. In order to perform parametric optimization of a marine DC motor rotational speed automatic control system, we have developed a simplified methodology. The methodology derives the transfer function of the automatic control system from several initially given system parameters. It proceeds in three steps, in accordance with the research goals. We offer model patterns in which the parameters can be designated. For the purposes of this research, the time constant of the PI controller is represented by the process time constant of the controlled DC motor.
APA, Harvard, Vancouver, ISO, and other styles
26

Shegay, Maksim V., Vytas K. Švedas, Vladimir V. Voevodin, Dmitry A. Suplatov, and Nina N. Popova. "Guide tree optimization with genetic algorithm to improve multiple protein 3D-structure alignment." Bioinformatics 38, no. 4 (2021): 985–89. http://dx.doi.org/10.1093/bioinformatics/btab798.

Full text
Abstract:
Motivation: With the increasing availability of 3D-data, the focus of comparative bioinformatic analysis is shifting from protein sequence alignments toward more content-rich 3D-alignments. This raises the need for new ways to improve the accuracy of 3D-superimposition. Results: We proposed guide tree optimization with a genetic algorithm (GA) as a universal tool to systematically improve the alignment quality of multiple protein 3D-structures. As a proof of concept, we implemented the suggested GA-based approach in the popular Matt and Caretta multiple protein 3D-structure alignment (M3DSA) algorithms, leading to a statistically significant improvement of the TM-score quality indicator by up to 220–1523% on the ‘SABmark Superfamilies’ (in 49–77% of cases) and ‘SABmark Twilight’ (in 59–80% of cases) datasets. The observed improvement in collections of distant homologies highlights the potential of GA to optimize 3D-alignments of diverse protein superfamilies as one plausible tool to study the structure–function relationship. Availability and implementation: The source codes of the patched gaCaretta and gaMatt programs are available open-access at https://github.com/n-canter/gamaps. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
27

Ying, Shenshun, Yicheng Sun, Fuhua Zhou, and Lvgao Lin. "ShuffleNet v2.3-StackedBiLSTM-Based Tool Wear Recognition Model for Turbine Disc Fir-Tree Slot Broaching." Machines 11, no. 1 (2023): 92. http://dx.doi.org/10.3390/machines11010092.

Full text
Abstract:
At present, deep learning technology shows great market potential in broaching tool wear state recognition based on vibration signals. However, a traditional single neural network structure has difficulty extracting a variety of different features simultaneously and has low robustness, so the accuracy of wear state recognition is not high. In view of the above problems, a broaching tool wear recognition model based on ShuffleNet v2.3-StackedBiLSTM is proposed in this paper. The model integrates ShuffleNet v2.3, which applies channel shuffling, with StackedBiLSTM, a stacked bidirectional long short-term memory network, to effectively extract spatial and temporal features for tool wear state recognition. Based on the proposed recognition model, a turbine disc fir-tree slot broaching experiment is designed, and a performance index system based on the confusion matrix is adopted. The experimental results show that the model has outstanding accuracy, precision, recall, and F1 values, with the accuracy reaching 99.37%, which is significantly better than the ShuffleNet v2.3 and StackedBiLSTM models alone. The recognition speed for a single sample was improved to 8.67 ms, which is 90.32% less than that of the StackedBiLSTM model.
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Jianping, Qigao Feng, Jianwei Ma, and Yikun Feng. "FL-SDUAN: A Fuzzy Logic-Based Routing Scheme for Software-Defined Underwater Acoustic Networks." Applied Sciences 13, no. 2 (2023): 944. http://dx.doi.org/10.3390/app13020944.

Full text
Abstract:
In underwater acoustic networks, the accurate estimation of routing weights is NP-hard due to the time-varying environment. Fuzzy logic is a powerful tool for dealing with vague problems. Software-defined networking (SDN) is a promising technology that enables flexible management by decoupling the data plane from the control plane. Inspired by this, we proposed a fuzzy logic-based software-defined routing scheme for underwater acoustic networks (FL-SDUAN). Specifically, we designed a software-defined underwater acoustic network architecture and proposed two minimum spanning tree algorithms for different network scales, based on fuzzy path optimization (FPO-MST) and fuzzy cut-set optimization (FCO-MST). In addition, we compared the proposed algorithms to state-of-the-art methods in terms of packet delivery rate, end-to-end latency, and throughput in different underwater acoustic network scenarios. Extensive experiments demonstrated that our work achieves a trade-off between performance and complexity.
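Minimum-spanning-tree routing of the kind this abstract describes can be sketched with Prim's algorithm. Here each edge carries a single crisp weight standing in for a defuzzified fuzzy link-quality score; the topology and weights are invented for illustration and are not from the paper.

```python
import heapq

def prim_mst(n, edge_list):
    """Prim's MST on an undirected graph given as (u, v, weight) triples.
    Returns the total tree weight, assuming the graph is connected."""
    adj = {i: [] for i in range(n)}
    for u, v, w in edge_list:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen = {0}
    heap = list(adj[0])          # frontier edges out of the start node
    heapq.heapify(heap)
    total = 0.0
    while len(seen) < n:
        w, v = heapq.heappop(heap)
        if v in seen:
            continue             # stale edge into the tree; skip it
        seen.add(v)
        total += w
        for item in adj[v]:
            if item[1] not in seen:
                heapq.heappush(heap, item)
    return total

# Toy 4-node acoustic network; in FL-SDUAN-style schemes each weight
# would come from fuzzy inference over delay/energy/link quality.
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 2.5), (2, 3, 1.5), (1, 3, 4.0)]
total = prim_mst(4, edges)
```

On this toy graph the tree picks edges 0-1, 1-2, and 2-3 for a total weight of 4.5.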
APA, Harvard, Vancouver, ISO, and other styles
29

Li, Yuebo, and Aimin Wang. "Machining Error Compensation Method for Thin Plate Parts Based on In-machine Measurement." Journal of Physics: Conference Series 2183, no. 1 (2022): 012001. http://dx.doi.org/10.1088/1742-6596/2183/1/012001.

Full text
Abstract:
Abstract This paper addresses the problems of datum drift and dimensional error compensation of the machine tool in the machining of plate parts, and presents a machining error compensation method for plate parts based on in-machine measurement. The parts are measured in situ in the clamping state by in-machine measuring equipment. The key features are reconstructed from the measurement results, the parts are divided into four quadrant regions, and the scaling errors of the tool points in different quadrants are compensated according to different directions. Then, the reconstructed key-feature model is discretized into a point cloud data model. Based on a binary tree and an improved iterative closest point algorithm, the margin optimization and datum correction rotation matrix and translation vector are obtained. The accuracy and quality of the machine tool are improved by adjusting the rotation matrix and translation vector according to the scaling error compensation procedure.
APA, Harvard, Vancouver, ISO, and other styles
30

Szczupak, Ewelina, Marcin Małysza, Dorota Wilk-Kołodziejczyk, et al. "Decision Support Tool in the Selection of Powder for 3D Printing." Materials 17, no. 8 (2024): 1873. http://dx.doi.org/10.3390/ma17081873.

Full text
Abstract:
The work presents a tool enabling the selection of powder for 3D printing. The project focused on three types of powders: steel, nickel- and cobalt-based, and aluminum-based. An important aspect of the research was the possibility of obtaining the mechanical parameters. The work also examined the possibility of using selected artificial intelligence algorithms, such as Random Forest, Decision Tree, K-Nearest Neighbors, Fuzzy K-Nearest Neighbors, Gradient Boosting, XGBoost, and AdaBoost. Tests were carried out to check which algorithm would be best for use in the decision support system being developed. Cross-validation was used, as well as hyperparameter tuning using different evaluation sets. In both cases, the best model turned out to be Random Forest, whose F1 score is 98.66% for cross-validation and 99.10% after tuning on the test set. This model can be considered the most promising for solving this problem. The first result is a more accurate estimate of how the model will behave on new data, while the second indicates possible improvement after optimization, or possible overfitting to the tuned parameters.
APA, Harvard, Vancouver, ISO, and other styles
31

Ahmed, Muhammad, Sardar Usman, Nehad Ali Shah, et al. "AAQAL: A Machine Learning-Based Tool for Performance Optimization of Parallel SPMV Computations Using Block CSR." Applied Sciences 12, no. 14 (2022): 7073. http://dx.doi.org/10.3390/app12147073.

Full text
Abstract:
The sparse matrix–vector product (SpMV), considered one of the seven dwarfs (numerical methods of significance), is essential in high-performance real-world scientific and analytical applications requiring the solution of large sparse linear equation systems, where SpMV is a key computing operation. As the sparsity patterns of sparse matrices are unknown before runtime, we used machine learning-based performance optimization of the SpMV kernel by exploiting the structure of the sparse matrices using the Block Compressed Sparse Row (BCSR) storage format. As the structure of sparse matrices varies across application domains, optimizing the block size is important for reducing the overall execution time. Manual allocation of block sizes is error prone and time consuming. Thus, we propose AAQAL, a data-driven, machine learning-based tool that automates the process of data distribution and the selection of near-optimal block sizes based on the structure of the matrix. We trained and tested the tool using different machine learning methods (decision tree, random forest, gradient boosting, ridge regressor, and AdaBoost) and nearly 700 real-world matrices from 43 application domains, including computer vision, robotics, and computational fluid dynamics. AAQAL achieved 93.47% of the maximum attainable performance, a substantial improvement over the manual or random selection of block sizes used in practice. This is the first attempt at exploiting matrix structure using BCSR to select optimal block sizes for SpMV computations using machine learning techniques.
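The BCSR layout at the heart of AAQAL stores a sparse matrix as dense r×c blocks indexed CSR-style, so the SpMV kernel iterates over blocks rather than individual nonzeros. A plain-Python sketch of the storage format and the blocked y = A·x kernel (illustrative only; the paper targets optimized parallel kernels, not interpreted Python):

```python
def bcsr_spmv(n_rows, r, c, block_ptr, block_col, block_vals, x):
    """y = A @ x with A in Block CSR format: r x c dense blocks, each a
    flat row-major list. block_ptr/block_col index block-rows the same
    way CSR's indptr/indices index scalar rows."""
    y = [0.0] * n_rows
    n_block_rows = n_rows // r
    for bi in range(n_block_rows):
        for k in range(block_ptr[bi], block_ptr[bi + 1]):
            bj = block_col[k]            # block-column index
            blk = block_vals[k]          # flat list of r*c entries
            for i in range(r):
                acc = 0.0
                for j in range(c):
                    acc += blk[i * c + j] * x[bj * c + j]
                y[bi * r + i] += acc
    return y

# 4x4 matrix stored as 2x2 blocks, with only the diagonal blocks present:
# A = [[1,2,0,0],[3,4,0,0],[0,0,5,6],[0,0,7,8]], x = [1,1,1,1]
y = bcsr_spmv(4, 2, 2,
              block_ptr=[0, 1, 2], block_col=[0, 1],
              block_vals=[[1, 2, 3, 4], [5, 6, 7, 8]],
              x=[1.0, 1.0, 1.0, 1.0])
```

The block size (r, c) is exactly the knob AAQAL learns to set: larger blocks improve locality but pad in explicit zeros when the matrix structure does not match.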
APA, Harvard, Vancouver, ISO, and other styles
32

Kiala, Zolo, John Odindi, and Onisimo Mutanga. "Determining the Capability of the Tree-Based Pipeline Optimization Tool (TPOT) in Mapping Parthenium Weed Using Multi-Date Sentinel-2 Image Data." Remote Sensing 14, no. 7 (2022): 1687. http://dx.doi.org/10.3390/rs14071687.

Full text
Abstract:
The Tree-based Pipeline Optimization Tool (TPOT) is a state-of-the-art automated machine learning (AutoML) approach that automatically generates and optimizes tree-based pipelines using a genetic algorithm. Although it has been proven to outperform commonly used machine learning techniques, its capability to handle high-dimensional datasets has not been investigated. In vegetation mapping and analysis, multi-date images are generally high-dimensional datasets that contain embedded information, such as phenological and canopy structural properties, known to enhance mapping accuracy. However, without the implementation of a robust classification algorithm or a feature selection tool, the large feature sets and the presence of redundant variables in multi-date images can impede accurate and efficient landscape classification. Hence, this study sought to test the efficacy of the TPOT on a multi-date Sentinel-2 image to optimize the classification accuracy for a landscape infested by a noxious invasive plant species, the parthenium weed (Parthenium hysterophorus). Specifically, the models created from the multi-date image using the TPOT alone and using an algorithm system that combines feature selection with the TPOT, dubbed “ReliefF-Svmb-EXT-TPOT”, were compared. The results showed that the TPOT can perform well on data with large feature sets, but at a computational cost. The overall accuracies were 91.9% and 92.6% using the TPOT and ReliefF-Svmb-EXT-TPOT models, respectively. The study findings are crucial for automated and accurate mapping of parthenium weed using high-dimensional geospatial datasets with limited human intervention.
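TPOT's core idea, evolving candidate pipelines with a genetic algorithm, can be caricatured with a tiny evolutionary search over a three-gene "pipeline" encoding. The search space, the toy scoring function, and all constants below are invented stand-ins for TPOT's genetic programming over real scikit-learn pipelines with cross-validated fitness.

```python
import random

# Toy search space: each "pipeline" is (scaler, model, depth).
SCALERS = ["none", "minmax", "standard"]
MODELS = ["tree", "forest", "knn"]
DEPTHS = [2, 4, 8, 16]

def toy_score(pipe):
    """Stand-in for cross-validated accuracy; this invented landscape
    favors a standard-scaled forest with depth near 8."""
    scaler, model, depth = pipe
    return (0.6 + 0.15 * (model == "forest")
                + 0.10 * (scaler == "standard")
                - 0.01 * abs(depth - 8))

def evolve(pop_size=12, gens=15, seed=1):
    rng = random.Random(seed)
    def rand_pipe():
        return (rng.choice(SCALERS), rng.choice(MODELS), rng.choice(DEPTHS))
    pop = [rand_pipe() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=toy_score, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # crossover
            if rng.random() < 0.3:                            # mutation
                i = rng.randrange(3)
                child[i] = rand_pipe()[i]
            children.append(tuple(child))
        pop = survivors + children
    return max(pop, key=toy_score)

best = evolve()
```

TPOT itself additionally evolves the pipeline *structure* (variable-length operator trees), not just fixed hyperparameter slots as in this sketch.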
APA, Harvard, Vancouver, ISO, and other styles
33

Azni Haslizan Ab Halim, Farida Ridzuan, Nur Hafiza Zakaria, et al. "SAKTI©: Secured Chatting Tool Through Forward Secrecy." Journal of Advanced Research in Applied Sciences and Engineering Technology 49, no. 1 (2024): 54–62. http://dx.doi.org/10.37934/araset.49.1.5462.

Full text
Abstract:
The critical issue of academic misconduct is of utmost importance in the field of education, and understanding whistleblowing behaviour can be a potential measure to effectively address this issue. This paper highlights the benefits of using the Tree-based Pipeline Optimization Tool (TPOT) framework as a user-friendly tool for implementing machine learning techniques in studying whistleblowing behaviour among university students in Indonesia and Malaysia. The paper demonstrates the ease of implementing TPOT, making it accessible to inexpert computing scientists, and showcases highly promising results from the whistleblowing classification models trained with TPOT. Performance metrics such as Area Under the Curve (AUC) are used to measure the reliability of the TPOT framework, with some models achieving AUC values above 90%; the best AUC was 99%, obtained by TPOT with a Genetic Programming population size of 40. The paper’s main contribution lies in the empirical demonstration and the findings achieved in the whistleblowing case study. This paper sheds light on the potential of TPOT as an easy and rapid implementation tool for AI in the field of education, addressing the challenges of academic misconduct and showcasing promising results in the context of whistleblowing classification.
APA, Harvard, Vancouver, ISO, and other styles
34

Ying, Shenshun, Yicheng Sun, Chentai Fu, Lvgao Lin, and Shunqi Zhang. "Grey wolf optimization based support vector machine model for tool wear recognition in fir-tree slot broaching of aircraft turbine discs." Journal of Mechanical Science and Technology 36, no. 12 (2022): 6261–73. http://dx.doi.org/10.1007/s12206-022-1139-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Xie, Zixuan, Xueyu Huang, and Wenwen Liu. "Subpopulation Particle Swarm Optimization with a Hybrid Mutation Strategy." Computational Intelligence and Neuroscience 2022 (February 23, 2022): 1–19. http://dx.doi.org/10.1155/2022/9599417.

Full text
Abstract:
As large-scale real-world optimization problems become more and more complex, the optimization algorithms used to solve them must also keep pace with the times. The particle swarm optimization algorithm is a good tool that has been proven on various optimization problems. Conventional particle swarm optimization algorithms learn from two positions, namely the best position found so far by the current particle and the best position found by all particles. Such an algorithm is simple to implement and easy to understand, but it has a fatal defect: it is hard to find the global optimal solution quickly and accurately. In order to deal with these defects of standard particle swarm optimization, this paper proposes a subpopulation-based particle swarm optimization algorithm with a hybrid strategy (SHMPSO) (using codes available from https://gitee.com/mr-xie123234/code/tree/master/). In SHMPSO, a subpopulation coevolution particle swarm optimization scheme is adopted, and an elastic candidate-based strategy is used to find a candidate and realize information sharing and coevolution among populations. A mean dimension learning strategy is used to make the population converge faster and to improve the solution accuracy of SHMPSO. Twenty-one benchmark functions and six industry-recognized particle swarm optimization variants are used to verify the advantages of SHMPSO. The experimental results show that SHMPSO has good convergence speed and good robustness and can obtain high-precision solutions.
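For contrast with the SHMPSO variants discussed above, the conventional global-best PSO update the abstract mentions (each particle learning from its own best position and the swarm's best) looks like this minimal sketch on a toy objective; all parameter values are illustrative.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=0,
                 w=0.729, c1=1.494, c2=1.494):
    """Standard global-best PSO: velocity blends inertia, the particle's
    personal best, and the swarm's global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest_val, gbest

# Toy objective: 2-dimensional sphere function
val, point = pso_minimize(lambda x: sum(v * v for v in x), [(-10, 10)] * 2)
```

SHMPSO's contribution is precisely to replace the single `gbest` here with coevolving subpopulations and the learning strategies described in the abstract.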
APA, Harvard, Vancouver, ISO, and other styles
36

Talpur, Fauzia, Imtiaz Ali Korejo, Aftab Ahmed Chandio, Ali Ghulam, and Mir Sajjad Hussain Talpur. "ML-Based Detection of DDoS Attacks Using Evolutionary Algorithms Optimization." Sensors 24, no. 5 (2024): 1672. http://dx.doi.org/10.3390/s24051672.

Full text
Abstract:
The escalating reliance of modern society on information and communication technology has rendered it vulnerable to an array of cyber-attacks, with distributed denial-of-service (DDoS) attacks emerging as one of the most prevalent threats. This paper delves into the intricacies of DDoS attacks, which exploit compromised machines numbering in the thousands to disrupt data services and online commercial platforms, resulting in significant downtime and financial losses. Recognizing the gravity of this issue, various detection techniques have been explored, yet the quantity and early detection of DDoS attacks have seen a decline in recent methods. This research introduces an innovative approach by integrating evolutionary optimization algorithms and machine learning techniques. Specifically, the study proposes XGB-GA Optimization, RF-GA Optimization, and SVM-GA Optimization methods, employing Evolutionary Algorithm (EA) optimization with the Tree-based Pipeline Optimization Tool (TPOT) and Genetic Programming. Datasets pertaining to DDoS attacks were utilized to train machine learning models based on the XGB, RF, and SVM algorithms, and 10-fold cross-validation was employed. The models were further optimized using EAs, achieving remarkable accuracy scores: 99.99% with the XGB-GA method, 99.50% with RF-GA, and 99.99% with SVM-GA. Furthermore, the study employed TPOT to identify the optimal algorithm for constructing a machine learning model, with the genetic algorithm pinpointing XGB-GA as the most effective choice. This research significantly advances the field of DDoS attack detection by presenting a robust and accurate methodology, thereby enhancing the cybersecurity landscape and fortifying digital infrastructures against these pervasive threats.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Xiaoyu, Tengyuan Wang, Jiaxu Li, Yong Tian, and Jindong Tian. "Energy Consumption Estimation for Electric Buses Based on a Physical and Data-Driven Fusion Model." Energies 15, no. 11 (2022): 4160. http://dx.doi.org/10.3390/en15114160.

Full text
Abstract:
The energy consumption of electric vehicles is closely related to the problems of charging station planning and vehicle route optimization. However, due to various factors, such as vehicle performance, driving habits and environmental conditions, it is difficult to estimate vehicle energy consumption accurately. In this work, a physical and data-driven fusion model was designed for electric bus energy consumption estimation. The basic energy consumption of the electric bus was modeled by a simplified physical model, which considers the effects of rolling resistance, braking consumption and air-conditioning consumption. To account for the fluctuation in energy consumption caused by multiple factors, a CatBoost decision tree model was constructed. Finally, a fusion model was built. The performance of the energy consumption model was verified through the analysis of electric bus data on a big data platform. The results show that the model has high accuracy, with an average relative error of 6.1%. The fusion model provides a powerful tool for the optimization of the energy consumption of electric buses, vehicle scheduling, and the rational layout of charging facilities.
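A physical-plus-data-driven fusion of the kind described can be sketched as a physics-based base estimate plus a learned residual correction. All coefficients and the stub residual model below are invented for illustration; the paper's residual term is a calibrated CatBoost model, not a constant.

```python
def physical_energy_kwh(dist_km, mass_kg, speed_kmh, ac_kw, duration_h):
    """Simplified physical base model: rolling resistance + aerodynamic
    drag + air-conditioning load. Coefficients are illustrative guesses
    for a city bus, not the paper's calibrated values."""
    g, crr = 9.81, 0.008          # gravity, rolling-resistance coefficient
    rho, cd_a = 1.2, 6.0          # air density, drag area (m^2)
    v = speed_kmh / 3.6           # average speed in m/s
    dist_m = dist_km * 1000.0
    rolling_j = crr * mass_kg * g * dist_m
    aero_j = 0.5 * rho * cd_a * v * v * dist_m
    ac_j = ac_kw * 1000.0 * duration_h * 3600.0
    return (rolling_j + aero_j + ac_j) / 3.6e6   # joules -> kWh

def fused_estimate(trip, residual_model):
    """Fusion: physical base estimate + data-driven residual correction.
    residual_model is any callable trained on (trip features -> error)."""
    base = physical_energy_kwh(**trip)
    return base + residual_model(trip)

# 10 km at 36 km/h (so ~0.278 h), 12 t bus, 5 kW air conditioning
trip = dict(dist_km=10.0, mass_kg=12000.0, speed_kmh=36.0,
            ac_kw=5.0, duration_h=10.0 / 36.0)
est = fused_estimate(trip, lambda t: 0.4)   # constant residual for the demo
```

The design point of the fusion is that the physical term keeps estimates sensible outside the training distribution, while the tree model absorbs driver- and route-specific fluctuation.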
APA, Harvard, Vancouver, ISO, and other styles
38

Tuan Le, Minh, Minh Thanh Vo, Nhat Tan Pham, and Son V.T Dao. "Predicting heart failure using a wrapper-based feature selection." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 3 (2021): 1530. http://dx.doi.org/10.11591/ijeecs.v21.i3.pp1530-1539.

Full text
Abstract:
In the current health system, it is very difficult for medical practitioners/physicians to diagnose the effectiveness of heart contraction. In this research, we proposed a machine learning model to predict heart contraction using an artificial neural network (ANN). We also proposed a novel wrapper-based feature selection method utilizing grey wolf optimization (GWO) to reduce the number of required input attributes. In this work, we compared the results achieved using our method with several conventional machine learning approaches such as support vector machine, decision tree, K-nearest neighbor, naïve Bayes, random forest, and logistic regression. Computational results show not only that much fewer features are needed, but also that a higher prediction accuracy of around 87% can be achieved. This work has the potential to be applicable to clinical practice and become a supporting tool for doctors/physicians.
APA, Harvard, Vancouver, ISO, and other styles
39

Le, Minh Tuan, Minh Thanh Vo, Nhat Tan Pham, and Son V.T Dao. "Predicting heart failure using a wrapper-based feature selection." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 3 (2021): 1530–39. https://doi.org/10.11591/ijeecs.v21.i3.pp1530-1539.

Full text
Abstract:
In the current health system, it is very difficult for medical practitioners/physicians to diagnose the effectiveness of heart contraction. In this research, we proposed a machine learning model to predict heart contraction using an artificial neural network (ANN). We also proposed a novel wrapper-based feature selection method utilizing grey wolf optimization (GWO) to reduce the number of required input attributes. In this work, we compared the results achieved using our method with several conventional machine learning approaches such as support vector machine, decision tree, K-nearest neighbor, naïve Bayes, random forest, and logistic regression. Computational results show not only that much fewer features are needed, but also that a higher prediction accuracy of around 87% can be achieved. This work has the potential to be applicable to clinical practice and become a supporting tool for doctors/physicians.
APA, Harvard, Vancouver, ISO, and other styles
40

Ferentinos, V., B. Geelen, G. Lafruit, et al. "Optimized memory requirements for wavelet-based scalable multimedia codecs." Journal of Embedded Computing 1, no. 3 (2005): 363–80. https://doi.org/10.3233/emc-2005-00039.

Full text
Abstract:
Powerful multimedia applications are running more and more on very compact, resource-scarce portable systems. As a consequence, system design optimization, its associated time-to-market constraints, and the required automated tool support are becoming increasingly important issues, especially in situations where product derivatives and extensions introduce unforeseen and possibly dramatic constraints in the system optimization process. Nevertheless, the system designer remains an irreplaceable cornerstone for steering the whole system optimization process. This paper presents the relationship of the aforementioned aspects in the context of optimizing data access to memory, which is the dominant factor determining the system-on-a-chip area, data throughput and power consumption. The case study of a 1D and 2D forward and inverse wavelet transform, interacting with surrounding system modules imposed by current multimedia compression standards, leads the reader through the particular technical counter-measures and script-based optimization steps to be followed to reach a satisfactory global optimization. In particular, the data dependencies between the different functional modules are shown to be crucial in the memory optimization process and lead to non-trivial, counter-intuitive decisions that can increase the energy consumption gains compared to more commonly accepted, though suboptimal, approaches. An example is the counter-intuitive observation that although JPEG2000 uses independently entropy-coded blocks over its wavelet subbands, it requires more memory, because of “hidden” data dependencies, than its zero-tree based MPEG-4 counterpart, whose intricate entropy coding spreads over all the subbands.
Hence, to achieve an overall optimal implementation with good trade-offs between efficiency and cost, it is strongly suggested that algorithmic and implementation designers cooperate in the early stages of multimedia system design, facilitated by high-level memory cost estimation analyses.
41

Räty, Minna, and Mikko Kuronen. "efdm–An R package offering a scenario tool beyond forestry." PLOS ONE 17, no. 8 (2022): e0264380. http://dx.doi.org/10.1371/journal.pone.0264380.

Full text
Abstract:
Scenario tools are widely used to support policymaking and strategic planning. Loss of biodiversity, climate change, and increasing biomass demand call for ways to project future forest resources, considering, e.g., various protection schemes, alterations to forest management, and potential threats like pests, wind, and drought. The European Forestry Dynamics Model (EFDM) is an area-based matrix model that can combine all these aspects in a scenario, simulating large-scale impacts. The inputs to the EFDM are the initial forest state and models for management activities such as thinning, felling or other silvicultural treatments. The results can be converted into user-defined outputs like wood volumes, the extent of old forests, dead wood, carbon, or harvest income. We present here a new implementation of the EFDM as an open-source R package. This new implementation enables the development of more complex scenarios than before, including transitions from even-aged forestry to continuous cover forestry, and changes in land use or tree species. Combined with a faster execution speed, the EFDM can now be used as a building block in optimization systems. The new user interface makes the EFDM more approachable and usable, and it can be combined with other models to study the impact of climate change, for example.
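The area-based matrix model at the core of the EFDM can be sketched in a few lines: forest area is distributed over state classes, and a transition matrix moves area between classes each simulation period. The states, transition shares, and areas below are invented for illustration and do not come from the package:

```python
# Minimal area-based matrix model sketch: forest area moves between
# volume classes according to fixed transition probabilities.

def step(area, transitions):
    """Advance the area vector one period.

    area: list of hectares per state class
    transitions: transitions[i][j] = share of class i moving to class j
    """
    n = len(area)
    new_area = [0.0] * n
    for i in range(n):
        for j in range(n):
            new_area[j] += area[i] * transitions[i][j]
    return new_area

# Three hypothetical volume classes (low, medium, high); each row sums to 1.
T = [
    [0.7, 0.3, 0.0],   # low: 30% grows into medium
    [0.0, 0.8, 0.2],   # medium: 20% grows into high
    [0.1, 0.0, 0.9],   # high: 10% is felled and restarts as low
]

area = [100.0, 50.0, 25.0]          # hectares per class
for _ in range(5):                  # simulate five periods
    area = step(area, T)

print([round(a, 2) for a in area])  # total area is conserved
```

Because each row of the transition matrix sums to one, total area is conserved across periods, which is what makes such models convenient for long-horizon scenario projections.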
42

Al-Ruzouq, Rami, Abdallah Shanableh, Mohamed Barakat A. Gibril, and Saeed AL-Mansoori. "Image Segmentation Parameter Selection and Ant Colony Optimization for Date Palm Tree Detection and Mapping from Very-High-Spatial-Resolution Aerial Imagery." Remote Sensing 10, no. 9 (2018): 1413. http://dx.doi.org/10.3390/rs10091413.

Full text
Abstract:
Accurate mapping of date palm trees is essential for their sustainable management, yield estimation, and environmental studies. In this study, we integrated geographic object-based image analysis, class-specific accuracy measures, fractional factorial design, a metaheuristic feature-selection technique, and rule-based classification to detect and map date palm trees from very-high-spatial-resolution (VHSR) aerial images of two study areas. First, multiresolution segmentation was optimized through the synergy of the F1-score accuracy measure and the robust Taguchi design. Second, ant colony optimization (ACO) was adopted to select the most significant features. Out of 31 features, only 12 significant color invariants and textural features were selected. Third, based on the selected features, rule-based classification with the aid of a decision tree algorithm was applied to extract date palm trees. The proposed methodology was developed on a subset of the first study area, and ultimately applied to the second study area to investigate its efficiency and transferability. To evaluate the proposed classification scheme, various supervised object-based algorithms, namely random forest (RF), support vector machine (SVM), and k-nearest neighbor (k-NN), were applied to the first study area. The segmentation results demonstrated that optimization based on the integrated F1-score class-specific accuracy measure and the Taguchi statistical design improved on optimization driven by an objective function combined with the Taguchi design. Moreover, feature selection by ACO, with almost 88% overall accuracy, outperformed several feature-selection techniques, such as chi-square, correlation-based feature selection, gain ratio, information gain, support vector machine, and principal component analysis.
The integrated framework for palm tree detection outperformed RF, SVM, and k-NN classification algorithms with an overall accuracy of 91.88% and 87.03%, date palm class-specific accuracies of 0.91 and 0.89, and kappa coefficients of 0.90 and 0.85 for the first and second study areas, respectively. The proposed integrated methodology demonstrated a highly efficient and promising tool to detect and map date palm trees from VHSR aerial images.
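Ant colony optimization for feature selection can be sketched minimally: pheromone levels bias which features each "ant" includes, the best subset of each iteration reinforces its features, and evaporation forgets the rest. The feature scores below are an invented stand-in for classifier accuracy, not the paper's evaluation:

```python
import random

random.seed(7)

NUM_FEATURES = 8
USEFUL = {0, 3, 5}   # hypothetical "informative" features

def score(subset):
    """Toy stand-in for classifier accuracy: reward useful features,
    penalise superfluous ones."""
    return len(subset & USEFUL) - 0.2 * len(subset - USEFUL)

pheromone = [1.0] * NUM_FEATURES
best_subset, best_score = set(), float("-inf")

for _ in range(60):                          # iterations
    ants = []
    for _ in range(10):                      # ants per iteration
        subset = {i for i in range(NUM_FEATURES)
                  if random.random() < pheromone[i] / (pheromone[i] + 1)}
        ants.append((score(subset), subset))
    it_best_score, it_best = max(ants, key=lambda a: a[0])
    for i in range(NUM_FEATURES):            # evaporation
        pheromone[i] *= 0.9
    for i in it_best:                        # reinforcement
        pheromone[i] += 0.5
    if it_best_score > best_score:
        best_score, best_subset = it_best_score, it_best

print(sorted(best_subset))
```

Over the iterations the pheromone concentrates on the informative features, so the sampled subsets shrink toward the useful set.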
43

Vlachou, Sofia, and Michail Panagopoulos. "Applying machine learning methods to quantify emotional experience in installation art." Technoetic Arts 21, no. 1 (2023): 53–72. http://dx.doi.org/10.1386/tear_00097_1.

Full text
Abstract:
Aesthetic experience is original, dynamic and ever-changing. This article covers three research questions (RQs) concerning how immersive installation artworks can elicit emotions that may contribute to their popularity. Based on Yayoi Kusama’s and Peter Kogler’s kaleidoscopic rooms, this study aims to predict the emotions of visitors of immersive installation art based on their Twitter activity. As indicators, we employed the total number of likes, comments, retweets, followers, followings, the average of tweets per user, and emotional response. According to our evaluation of emotions, panic obtained the highest scores. Furthermore, compared to traditional machine learning algorithms, Tree-based Pipeline Optimization Tool (TPOT) Automated Machine Learning used in this research yielded slightly lower performance. We forecast that our findings will stimulate future research in the fields of data analysis, cultural heritage management and marketing, aesthetics and cultural analytics.
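TPOT, the Tree-based Pipeline Optimization Tool used above, searches over machine-learning pipelines with genetic programming. The toy below keeps only the core loop, generate candidate pipelines, score them, keep the best, using an invented 1-D dataset and a plain random search rather than TPOT's actual API:

```python
import random

# Toy dataset: 1-D feature, binary label (hypothetical values).
data = [(0.5, 0), (1.0, 0), (1.5, 0), (4.0, 1),
        (4.5, 1), (5.0, 1), (2.9, 1), (2.0, 0)]

def make_pipeline(scale, threshold):
    """A 'pipeline' = optional square-root scaling + a threshold classifier."""
    def predict(x):
        v = x ** 0.5 if scale else x
        return 1 if v >= threshold else 0
    return predict

def accuracy(pipe):
    return sum(pipe(x) == y for x, y in data) / len(data)

random.seed(0)
# Sample 200 candidate pipelines and keep the best-scoring one.
best = max(
    (make_pipeline(random.choice([True, False]), random.uniform(0, 5))
     for _ in range(200)),
    key=accuracy,
)
print(accuracy(best))
```

TPOT replaces the random sampling with crossover and mutation over tree-shaped pipelines of real scikit-learn operators, but the evaluate-and-select skeleton is the same.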
44

Karami, Vania, Giulio Nittari, Enea Traini, and Francesco Amenta. "An Optimized Decision Tree with Genetic Algorithm Rule-Based Approach to Reveal the Brain’s Changes During Alzheimer’s Disease Dementia." Journal of Alzheimer's Disease 84, no. 4 (2021): 1577–84. http://dx.doi.org/10.3233/jad-210626.

Full text
Abstract:
Background: It is desirable to achieve acceptable accuracy for computer aided diagnosis system (CADS) to disclose the dementia-related consequences on the brain. Therefore, assessing and measuring these impacts is fundamental in the diagnosis of dementia. Objective: This study introduces a new CADS for deep learning of magnetic resonance image (MRI) data to identify changes in the brain during Alzheimer’s disease (AD) dementia. Methods: The proposed algorithm employed a decision tree with genetic algorithm rule-based optimization to classify input data which were extracted from MRI. This pipeline is applied to the healthy and AD subjects of the Open Access Series of Imaging Studies (OASIS). Results: Final evaluation of the CADS and its comparison with other systems supported the potential of the proposed model as a novel tool for investigating the progression of AD and its great ability as an innovative computerized help to facilitate the decision-making procedure for the diagnosis of AD. Conclusion: The one-second time response, together with the identified high accurate performance, suggests that this system could be useful in future cognitive and computational neuroscience studies.
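Genetic-algorithm rule optimization of the kind described can be illustrated with a minimal sketch: a GA evolves the split threshold of a one-rule "stump" classifier on an invented toy dataset, using truncation selection, midpoint crossover, and Gaussian mutation. This illustrates the general technique only, not the paper's CADS:

```python
import random

random.seed(42)

# Toy data: (feature, label) pairs; a GA evolves the split threshold
# of a one-rule classifier.
data = [(1.0, 0), (1.2, 0), (2.1, 0), (3.3, 1), (3.9, 1), (4.4, 1)]

def fitness(threshold):
    """Number of samples the rule 'x >= threshold -> class 1' gets right."""
    return sum((x >= threshold) == bool(y) for x, y in data)

population = [random.uniform(0, 5) for _ in range(10)]
for _ in range(30):                      # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:4]             # selection: keep the fittest
    children = []
    while len(children) < 6:
        a, b = random.sample(parents, 2)
        child = (a + b) / 2              # crossover: midpoint
        child += random.gauss(0, 0.3)    # mutation: small jitter
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(round(best, 2), fitness(best))
```

Any threshold between the two classes classifies all six points correctly, and the GA converges into that region within a few generations.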
45

SEKAR, RAJENDRAN, MUTHUKUMAR ARUNACHALAM, and KANNAN SUBRAMANIAN. "FUZZY-PROPORTIONAL INTEGRAL DERIVATIVE CONTROLLER WITH INTERACTIVE DECISION TREE." REVUE ROUMAINE DES SCIENCES TECHNIQUES — SÉRIE ÉLECTROTECHNIQUE ET ÉNERGÉTIQUE 69, no. 4 (2024): 395–400. http://dx.doi.org/10.59277/rrst-ee.2024.69.4.5.

Full text
Abstract:
The STATCOM is extensively used in the power system to address the power quality issues by actively compensating the reactive power requirements of the load. However, the STATCOM performance depends on the underlying controller operation to handle sudden load changes and disturbances optimally with faster response. To solve this issue, this paper presents the intelligent algorithm-optimized fuzzy rule-based Proportional Integral Derivative (PID) controller to enhance the performance of the STATCOM. The proposed controller consists of a fuzzy-PID controller capable of handling non-linear dynamics and sudden changes in the load. The interactive decision tree (IDT) technique detects weaker metrics and improves their influence in control scenarios. The Biogeography-based optimization (BBO) algorithm minimizes the controller's integral absolute error to tune the fuzzy-PID controller parameters. The algorithm terminates once all the metrics are tuned to satisfaction with the operator’s consent. The proposed IDT-based fuzzy PID controller is tested on a power system containing a nonlinear load, and its performance is compared with the existing controller and simulated using the MATLAB/Simulink tool. The proposed controller provides fast control action, reduces the reactive power from the source by 42.5%, and improves the power factor by 1.5%.
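The PID core of such a controller can be sketched on a toy first-order plant. The gains below are hand-picked for illustration, whereas the paper tunes a fuzzy-PID with biogeography-based optimization against the integral absolute error:

```python
# Discrete PID controller driving a first-order plant toward a setpoint.

def simulate(kp, ki, kd, setpoint=1.0, steps=400, dt=0.05):
    y = 0.0                      # plant output
    integral = 0.0
    prev_error = setpoint - y
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        y += (-y + u) * dt       # first-order plant: dy/dt = -y + u
        prev_error = error
    return y

final = simulate(kp=2.0, ki=1.0, kd=0.1)
print(round(final, 3))
```

The integral term drives the steady-state error to zero, so the output settles at the setpoint; tuning kp, ki, and kd trades response speed against overshoot, which is exactly the search space an optimizer like BBO explores.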
46

Kumar, Shailender, Preetam Kumar, and Aman Mittal. "Study of Optimized Window Aggregate Function for Big Data Analytics." Recent Patents on Engineering 13, no. 2 (2019): 101–7. http://dx.doi.org/10.2174/1872212112666180330162741.

Full text
Abstract:
Background: Window aggregate functions are a class of functions that have emerged as a very important tool for big data analytics, supporting analysis and decision-making applications. A window aggregate function computes its result by applying the function over a limited number of tuples around the current tuple, which makes it well suited to big data analytics. We have gone through different patents related to window aggregate functions and their optimization. The cost associated with big data analytics, especially the processing of window functions, is one of the major limiting factors. However, a number of optimization techniques have now evolved for both single and multiple window aggregate functions. Methods: In this paper, the authors discuss various optimization techniques and summarize the latest techniques that have been developed through intensive research in this area. The paper compares the techniques based on parameters like the degree of parallelism, multiple-window-function support, and execution time. Results: After analyzing all these techniques, the segment tree data structure appears to be the better technique, as it outperforms the others on efficiency, memory overhead, execution speed, and degree of parallelism. Conclusion: To optimize window aggregate functions, the segment tree data structure is the better technique, and it can certainly improve the processing of window aggregate functions, specifically in big data analytics.
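The segment-tree idea the review favors can be sketched minimally: precompute partial sums bottom-up, then answer any window frame's aggregate as an O(log n) range query instead of rescanning the frame. This is an illustrative range-sum sketch, not code from any surveyed patent:

```python
# Segment tree over a fixed array, supporting O(log n) range-sum queries.
# Each internal node stores the sum of its two children.

class SegmentTree:
    def __init__(self, values):
        self.n = len(values)
        self.tree = [0] * (2 * self.n)
        self.tree[self.n:] = values            # leaves
        for i in range(self.n - 1, 0, -1):     # internal nodes
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, lo, hi):
        """Sum of values[lo:hi] (half-open interval)."""
        s = 0
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:          # lo is a right child: take it, move right
                s += self.tree[lo]
                lo += 1
            if hi & 1:          # hi is a right boundary: step left, take it
                hi -= 1
                s += self.tree[hi]
            lo //= 2
            hi //= 2
        return s

values = [3, 1, 4, 1, 5, 9, 2, 6]
st = SegmentTree(values)
print(st.query(2, 6))   # 4 + 1 + 5 + 9 = 19
```

For a window aggregate, each output row maps its frame bounds to one such range query, so n rows cost O(n log n) overall instead of the naive O(n * frame size).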
47

Lisańczuk, Marek, Grzegorz Krok, Krzysztof Mitelsztedt, and Justyna Bohonos. "Influence of Main Flight Parameters on the Performance of Stand-Level Growing Stock Volume Inventories Using Budget Unmanned Aerial Vehicles." Forests 15, no. 8 (2024): 1462. http://dx.doi.org/10.3390/f15081462.

Full text
Abstract:
Low-altitude aerial photogrammetry can be an alternative source of forest inventory data and a practical tool for rapid forest attribute updates. The availability of low-cost unmanned aerial systems (UASs) and continuous technological advances in terms of their flight duration and automation capabilities makes these solutions interesting tools for supporting various forest management needs. However, any practical application requires a priori empirical validation and optimization steps, especially if it is to be used under different forest conditions. This study investigates the influence of the main flight parameters, i.e., ground sampling distance and photo overlap, on the performance of individual tree detection (ITD) stand-level forest inventories, based on photogrammetric data obtained from budget unmanned aerial systems. The investigated sites represented the most common forest conditions in the Polish lowlands. The results showed no direct influence of the investigated factors on growing stock volume predictions within the analyzed range, i.e., overlap from 80 × 80 to 90 × 90% and GSD from 2 to 6 cm. However, we found that the tree detection ratio had an influence on estimation errors, which ranged from 0.6 to 15.3%. The estimates were generally coherent across repeated flights and were not susceptible to the weather conditions encountered. The study demonstrates the suitability of the ITD method for small-area forest inventories using photogrammetric UAV data, as well as its potential optimization for larger-scale surveys.
48

Somani, Nalin, Arminder Singh Walia, Nitin Kumar Gupta, Jyoti Prakash Panda, Anshuman Das, and Sudhansu Ranjan Das. "Data driven surrogate model-based optimization of the process parameters in electric discharge machining of D2 steel using Cu-SiC composite tool for the machined surface roughness and the tool wear." Revista de Metalurgia 59, no. 2 (2023): e242. http://dx.doi.org/10.3989/revmetalm.242.

Full text
Abstract:
Electrical discharge machining (EDM) is mainly utilized for die manufacturing and is also used to machine hard materials. Pure copper, copper-based alloys, brass, graphite, and steel are the conventional electrode materials for the EDM process. While machining with conventional electrode materials, tool wear becomes the main bottleneck, which leads to increased machining cost. In the present work, a composite tool tip comprising 80% copper and 20% silicon carbide was used for the machining of hardened D2 steel. The powder metallurgy route was used to fabricate the composite tool tip. Electrode wear rate and surface roughness were assessed with respect to different process parameters: input current, gap voltage, pulse-on time, pulse-off time, and dielectric flushing pressure. The analysis found that input current (Ip), pulse-on time (Ton), and pulse-off time (Toff) were the significant parameters affecting the tool wear rate (TWR), while Ip, Ton, and flushing pressure had a greater effect on the surface roughness (SR). SEM micrographs reveal that an increase in Ip leads to an increase in the wear rate of the tool. The data obtained from the experiments were used to develop machine learning (ML) based surrogate models of three kinds: random forest, polynomial regression, and gradient-boosted trees. The predictive capability of the ML-based surrogate models was assessed by contrasting the R2 and mean square error (MSE) of their response predictions. The best surrogate model was used to develop an objective function for firefly-algorithm-based optimization of the input machining parameters, minimizing the output responses.
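Surrogate-model-based optimization can be shown in miniature: fit a cheap model to a handful of "experimental" points, then optimize the model instead of running more experiments. The sketch below uses a 1-D quadratic surrogate fitted by least squares and a grid search on invented data, standing in for the paper's ML surrogates and firefly algorithm:

```python
# Fit a quadratic surrogate y = a + b*x + c*x^2 to sample points by
# least squares (3x3 normal equations via Cramer's rule), then minimise
# the surrogate over a grid.

def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_quadratic(xs, ys):
    """Solve the normal equations for y = a + b*x + c*x^2."""
    n = len(xs)
    s1 = sum(xs); s2 = sum(x**2 for x in xs)
    s3 = sum(x**3 for x in xs); s4 = sum(x**4 for x in xs)
    t0 = sum(ys)
    t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))
    d = det3([[n, s1, s2], [s1, s2, s3], [s2, s3, s4]])
    a = det3([[t0, s1, s2], [t1, s2, s3], [t2, s3, s4]]) / d
    b = det3([[n, t0, s2], [s1, t1, s3], [s2, t2, s4]]) / d
    c = det3([[n, s1, t0], [s1, s2, t1], [s2, s3, t2]]) / d
    return a, b, c

# Hypothetical response measurements with a minimum near x = 2.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [(x - 2.0) ** 2 for x in xs]

a, b, c = fit_quadratic(xs, ys)
surrogate = lambda x: a + b * x + c * x * x
best_x = min((i * 0.01 for i in range(401)), key=surrogate)
print(round(best_x, 2))
```

The real study swaps the quadratic for ML regressors and the grid for a firefly search, but the pattern is identical: the surrogate is cheap to evaluate, so the optimizer can query it thousands of times.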
49

Zhang, Huacong, Huaiqing Zhang, Keqin Xu, et al. "A Novel Framework for Stratified-Coupled BLS Tree Trunk Detection and DBH Estimation in Forests (BSTDF) Using Deep Learning and Optimization Adaptive Algorithm." Remote Sensing 15, no. 14 (2023): 3480. http://dx.doi.org/10.3390/rs15143480.

Full text
Abstract:
Diameter at breast height (DBH) is a critical metric for quantifying forest resources, and obtaining accurate, efficient measurements of DBH is crucial for effective forest management and inventory. A backpack LiDAR system (BLS) can provide high-resolution representations of forest trunk structures, making it a promising tool for DBH measurement. However, in practical applications, deep learning-based tree trunk detection and DBH estimation using BLS still face numerous challenges, such as complex forest BLS data, low proportions of target point clouds leading to imbalanced class segmentation accuracy in deep learning models, and low fitting accuracy and robustness of trunk point cloud DBH methods. To address these issues, this study proposed a novel framework for BLS stratified-coupled tree trunk detection and DBH estimation in forests (BSTDF). This framework employed a stratified coupling approach to create a tree trunk detection deep learning dataset, introduced a weighted cross-entropy focal-loss function module (WCF) and a cosine annealing cyclic learning strategy (CACL) to enhance the WCF-CACL-RandLA-Net model for extracting trunk point clouds, and applied a least-squares adaptive random sample consensus (LSA-RANSAC) cylindrical fitting method for DBH estimation. The findings reveal that the dataset based on the stratified-coupled approach effectively reduces the amount of data needed for deep learning tree trunk detection. To benchmark the accuracy of BSTDF, synchronized control experiments were conducted using a variety of mainstream tree trunk detection models and DBH fitting methodologies, including the RandLA-Net model and the RANSAC algorithm. Compared with the RandLA-Net model in particular, the WCF-CACL-RandLA-Net model employed by BSTDF demonstrated a 6% increase in trunk segmentation accuracy and a 3% improvement in the F1 score with the same training sample volume.
This effectively mitigated class imbalance issues encountered during the segmentation process. Simultaneously, when compared to RANSAC, the LSA-RANSAC method adopted by BSTDF reduced the RMSE by 1.08 cm and boosted R2 by 14%, effectively tackling the inadequacies of RANSAC's fitting. The optimal acquisition distance for BLS data is 20 m, at which BSTDF's overall tree trunk detection rate (ER) reaches 90.03%, with DBH estimation precision indicating an RMSE of 4.41 cm and an R2 of 0.87. This study demonstrated the effectiveness of BSTDF in forest DBH estimation, offering a more efficient solution for forest resource monitoring and quantification, and possessing immense potential to replace field forest measurements.
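On a horizontal slice of trunk points, RANSAC-style cylinder fitting for DBH reduces to robustly fitting a circle: sample three points, fit their circumcircle, and keep the candidate with the most inliers. The sketch below runs on synthetic data (an invented 30 cm trunk with a few outliers) and is an illustration of plain RANSAC, not of the paper's LSA-RANSAC refinement:

```python
import math
import random

random.seed(1)

def circle_from_3pts(p1, p2, p3):
    """Circumcircle (cx, cy, r) of three points, or None if collinear."""
    ax, ay = p1; bx, by = p2; px, py = p3
    d = 2 * (ax * (by - py) + bx * (py - ay) + px * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - py) + (bx**2 + by**2) * (py - ay)
          + (px**2 + py**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (px - bx) + (bx**2 + by**2) * (ax - px)
          + (px**2 + py**2) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def ransac_circle(points, iters=300, tol=0.02):
    """Keep the sampled circle with the most points within tol of its rim."""
    best, best_inliers = None, -1
    for _ in range(iters):
        candidate = circle_from_3pts(*random.sample(points, 3))
        if candidate is None:
            continue
        ccx, ccy, cr = candidate
        inliers = sum(abs(math.hypot(x - ccx, y - ccy) - cr) < tol
                      for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = candidate, inliers
    return best

# Synthetic trunk slice: points on a circle of radius 0.15 m (DBH 30 cm)
# plus a few outliers (understory noise).
true_r = 0.15
points = [(true_r * math.cos(t), true_r * math.sin(t))
          for t in (i * 0.2 for i in range(30))]
points += [(0.4, 0.3), (-0.5, 0.1), (0.2, -0.45)]

cx, cy, r = ransac_circle(points)
print(round(2 * r, 3))   # estimated DBH in metres
```

The outliers never gather enough inliers to win, so the recovered diameter matches the true trunk; a least-squares refit over the inliers (the "LSA" step) would further stabilize the estimate on noisy data.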
50

Yan, Shaoli. "Automatic Anomaly Monitoring Research of Business English Literature Translation Based on Decision Tree Intelligent Analysis." Scientific Programming 2022 (November 10, 2022): 1–9. http://dx.doi.org/10.1155/2022/9009204.

Full text
Abstract:
As an important language tool, business English literature defines rights and obligations in business activities from the perspective of literature translation. This article discusses business English from that perspective: a translation should preserve the characteristics of the literature while keeping the language fluent and correct. To improve the accuracy of automatic translation of business English literature and to optimize the design of a teaching platform for such translation, a design method based on a decision tree logistic model is proposed. The platform consists of two modules: the automatic translation algorithm and the platform software. The decision tree logistic model is used to analyze the semantic features of business English translation, while context feature matching and adaptive semantic-variable optimization are used to analyze lexical features and to extract correlations among vocabulary items; correcting translation differences in specific business contexts improves the accuracy of the English translation. The software design of the platform is carried out under the decision tree logistic model. The platform is mainly divided into a vocabulary database module, an English information processing module, a network interface module, and a human-computer interaction module. The B/S framework protocol is used for the integrated development and design of the translation platform.
Considering the characteristics of the data business application and the particularities of data security risk monitoring, the study proceeds from business English requirement analysis to the monitoring of translation behavior and to the techniques and methods of anomaly monitoring, including data access, data processing, the experience engine, and the model engine, and puts forward directions for future research. The platform test results show that this method achieves good accuracy and strong automatic translation capability for business English literature translation.