Journal articles on the topic 'Weighted adaptive min-max normalization'

Consult the top 50 journal articles for your research on the topic 'Weighted adaptive min-max normalization.'

1

Kalluri, Venkata Saiteja, Sai Chakravarthy Malineni, Manjula Seenivasan, Jeevitha Sakkarai, Deepak Kumar, and Bhuvanesh Ananthan. "Enhancing manufacturing efficiency: leveraging CRM data with Lean-based DL approach for early failure detection." Bulletin of Electrical Engineering and Informatics 14, no. 3 (2025): 2319–29. https://doi.org/10.11591/eei.v14i3.8757.

Abstract:
In the pursuit of enhancing manufacturing competitiveness in India, companies are exploring innovative strategies to streamline operations and ensure product quality. Embracing Lean principles has become a focal point for many, aiming to optimize profitability while minimizing waste. As part of this endeavour, researchers have introduced various methodologies grounded in Lean principles to track and mitigate operational inefficiencies. This paper introduces a novel approach leveraging deep learning (DL) techniques to detect early failures in manufacturing systems. Initially, real-time data is collected and subjected to a normalization process, employing the weighted adaptive min-max normalization (WAdapt-MMN) technique to enhance data relevance and facilitate the training process. Subsequently, the paper proposes the utilization of a triple streamed attentive recalling recurrent neural network (TSAtt-RRNN) model to effectively identify Lean-based manufacturing failures. Through empirical evaluation, the proposed approach achieves promising results, with an accuracy of 99.23%, precision of 98.79%, recall of 98.92%, and F-measure of 99.2% in detecting early failures. This research underscores the potential of integrating DL methodologies with customer relationship management (CRM) data to bolster early failure detection capabilities in manufacturing, thereby fostering operational efficiency and competitive advantage.
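The abstract does not spell out the WAdapt-MMN formula, so the following is only a minimal sketch of one plausible reading: ordinary per-feature min-max scaling combined with adaptive per-feature weights, here derived from feature variance (an assumption, not the paper's definition).

```python
import numpy as np

def weighted_adaptive_min_max(X, weights=None, eps=1e-12):
    """Min-max scale each column to [0, 1], then rescale by a per-feature weight.

    The variance-based default weights are an assumed stand-in for the
    paper's adaptive weighting, which the abstract does not specify.
    """
    X = np.asarray(X, dtype=float)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    scaled = (X - x_min) / (x_max - x_min + eps)   # classic min-max step
    if weights is None:
        var = X.var(axis=0)
        weights = var / (var.sum() + eps)          # adaptive part (assumption)
    return scaled * weights

# toy usage on synthetic sensor readings
rng = np.random.default_rng(0)
X = rng.normal(loc=[10.0, 200.0, 3.0], scale=[1.0, 50.0, 0.1], size=(100, 3))
print(weighted_adaptive_min_max(X)[:3])
```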
2

Prasetyowati, Sri Arttini Dwi, Munaf Ismail, and Badieah Badieah. "Implementation of Least Mean Square Adaptive Algorithm on Covid-19 Prediction." JUITA: Jurnal Informatika 10, no. 1 (2022): 139. http://dx.doi.org/10.30595/juita.v10i1.11963.

Abstract:
This study used Corona Virus Disease-19 (Covid-19) data in Indonesia from June to August 2021, consisting of data on people who tested positive for Covid-19, recovered from Covid-19, and passed away from Covid-19. Processing the data directly with the adaptive LMS algorithm, without pre-processing, caused calculation errors because the Covid-19 data were not balanced. Z-score and min-max normalization were therefore chosen as pre-processing methods, after which prediction was carried out using the adaptive LMS method. The analysis was done by observing the prediction error that occurred every month per case. The results showed that pre-processing with min-max normalization performed better than Z-score normalization: the prediction errors were 18% and 47%, respectively.
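For reference, the two pre-processing options compared in this study have standard closed forms; a short sketch follows (the case counts below are placeholders, not the study's Covid-19 data):

```python
import numpy as np

def min_max(x):
    return (x - x.min()) / (x.max() - x.min())   # maps into [0, 1]

def z_score(x):
    return (x - x.mean()) / x.std()              # zero mean, unit variance

cases = np.array([1200., 3400., 8900., 15000., 9800., 4300.])  # placeholder counts
print("min-max:", np.round(min_max(cases), 3))
print("z-score:", np.round(z_score(cases), 3))
```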
3

Rodríguez, Carlos Gervasio, María Isabel Lamas, Juan de Dios Rodríguez, and Claudio Caccia. "Analysis of the Pre-Injection Configuration in a Marine Engine through Several MCDM Techniques." Brodogradnja 72, no. 4 (2021): 1–17. http://dx.doi.org/10.21278/brod72401.

Abstract:
The present manuscript describes a computational model employed to characterize the performance and emissions of a commercial marine diesel engine. This model analyzes several pre-injection parameters, such as starting instant, quantity, and duration. The goal is to reduce nitrogen oxides (NOx) while accounting for the effect of pre-injection on other emissions and on consumption. Since some of the parameters considered have opposite effects on the results, the present work proposes a MCDM (Multiple-Criteria Decision Making) methodology to determine the most adequate pre-injection configuration. An important issue in MCDM models is the data normalization process. This operation is necessary to convert the available data into a non-dimensional common scale, thus allowing ranking and rating of alternatives. It is important to select a suitable normalization technique, and several methods exist in the literature. This work considers five well-known normalization procedures: linear max, linear max-min, linear sum, vector, and logarithmic normalization. As to the solution technique, the study considers three MCDM models: WSM (Weighted Sum Method), WPM (Weighted Product Method) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). The linear max, linear sum, vector, and logarithmic normalization procedures yielded the same result: a -22º CA ATDC pre-injection starting instant, 25% pre-injection quantity, and 1-2º CA pre-injection duration. Nevertheless, the linear max-min normalization procedure provided a result that differs from the others and is not recommended.
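The five normalization procedures named here have standard textbook forms; a sketch applying them to a small decision matrix, together with the weighted sum method (WSM), might look as follows (the matrix, weights, and benefit-type criteria are illustrative, not the engine data):

```python
import numpy as np

A = np.array([[0.85, 210.0, 12.0],      # rows: alternatives; cols: benefit criteria
              [0.80, 190.0, 15.0],
              [0.90, 230.0, 10.0]])
w = np.array([0.5, 0.3, 0.2])           # illustrative criterion weights

norms = {
    "linear max":     A / A.max(axis=0),
    "linear max-min": (A - A.min(axis=0)) / (A.max(axis=0) - A.min(axis=0)),
    "linear sum":     A / A.sum(axis=0),
    "vector":         A / np.sqrt((A ** 2).sum(axis=0)),
    "logarithmic":    np.log(A) / np.log(A.prod(axis=0)),
}

for name, N in norms.items():
    scores = N @ w                      # WSM: weighted sum of normalized values
    print(f"{name:14s} -> best alternative: {scores.argmax()}")
```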
4

Himsar, Himsar. "Payment System Liquidity Index." Talenta Conference Series: Energy and Engineering (EE) 1, no. 2 (2018): 196–210. http://dx.doi.org/10.32734/ee.v1i2.250.

Abstract:
ISSP is an index that demonstrates the payment system's stability, capturing its liquidity (ISLSP) and its operational capability (IOSP). It was formed using two methods: statistical normalization and conversion using empirical min-max normalization. This paper intends to evaluate the variables used in forming ISLSP and to ensure the index's sensitivity to the important events noted. To obtain an ISLSP that is sensitive to RTGS liquidity conditions, we use coefficients for each weighted variable obtained through simultaneous regression; the resulting parameters are used as weights for each variable. Based on observation of these weighted variables, liquidity variables contribute 60%, PUAB contributes 30%, and interconnectedness contributes 10% in forming ISLSP.
5

Shantal, Mohammed, Zalinda Othman, and Azuraliza Abu Bakar. "A Novel Approach for Data Feature Weighting Using Correlation Coefficients and Min–Max Normalization." Symmetry 15, no. 12 (2023): 2185. http://dx.doi.org/10.3390/sym15122185.

Abstract:
In the realm of data analysis and machine learning, achieving an optimal balance of feature importance, known as feature weighting, plays a pivotal role, especially when considering the nuanced interplay between the symmetry of the data distribution and the need to assign differential weights to individual features. Avoiding the dominance of large-scale features is also essential in data preparation, which makes choosing an effective normalization approach one of the most challenging aspects of machine learning. In addition to normalization, feature weighting is another strategy to deal with the importance of the different features. One strategy to measure the dependency of features is the correlation coefficient, which shows the strength of the relationship between features. The integration of the normalization method with feature weighting in data transformation for classification has not been extensively studied. The goal is to improve the accuracy of classification methods by striking a balance between the normalization step and assigning greater importance to features with a strong relation to the class feature. To achieve this, we combine Min-Max normalization and weight the features by increasing their values based on their correlation coefficients with the class feature. This paper presents the proposed Correlation Coefficient with Min-Max Weighted (CCMMW) approach, in which the degree to which data are normalized depends on their correlation with the class feature. Logistic regression, support vector machine, k-nearest neighbor, neural network, and naive Bayesian classifiers were used to evaluate the proposed method on twenty numerical datasets from the UCI Machine Learning Repository and Kaggle. The empirical results showed that the proposed CCMMW significantly improves classification performance with support vector machine, logistic regression, and neural network classifiers on most datasets.
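The abstract gives enough detail for a rough sketch of CCMMW: min-max scale each feature, then increase it according to the absolute correlation between that feature and the class label. The exact boosting formula below, scaled * (1 + |r|), is our assumption, not necessarily the paper's:

```python
import numpy as np

def ccmmw(X, y, eps=1e-12):
    """Min-max scale X, then weight each feature by its correlation with y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + eps)
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return scaled * (1.0 + np.abs(r))   # assumed reading of "increasing their values"

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)  # class depends on feature 0
print(ccmmw(X, y).max(axis=0))          # feature 0 receives the largest boost
```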
6

Hafs, Toufik, Hatem Zehir, and Ali Hafs. "Enhancing Recognition in Multimodal Biometric Systems: Score Normalization and Fusion of Online Signatures and Fingerprints." Romanian Journal of Information Science and Technology 2024, no. 1 (2024): 37–49. http://dx.doi.org/10.59277/romjist.2024.1.03.

Abstract:
Multimodal biometrics employs multiple modalities within a single system to address the limitations of unimodal systems, such as incomplete data acquisition or deliberate fraud, while enhancing recognition accuracy. This study explores score normalization and its impact on system performance. To fuse scores effectively, prior normalization is necessary, followed by a weighted sum fusion technique that brings impostor and genuine scores into a common range. Experiments conducted on three biometric databases demonstrate the promising efficacy of the proposed approach, particularly when combined with Empirical Modal Decomposition (EMD). The fusion system exhibits strong performance, with the best outcome achieved by merging the online signature and fingerprint modalities: with Min-Max score normalization, it yields an Equal Error Rate (EER) of 1.69%.
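The fusion pipeline described (min-max score normalization followed by a weighted sum over modalities) is compact enough to state directly; the scores and weights below are illustrative:

```python
import numpy as np

def min_max_scores(s):
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def weighted_sum_fusion(sig_scores, fp_scores, w_sig=0.6, w_fp=0.4):
    """Fuse signature and fingerprint matcher scores after normalizing
    each modality into [0, 1]; the weights are illustrative."""
    return w_sig * min_max_scores(sig_scores) + w_fp * min_max_scores(fp_scores)

sig = [23.0, 40.5, 31.2, 55.9]   # placeholder matcher scores
fp = [0.12, 0.80, 0.45, 0.95]
print(weighted_sum_fusion(sig, fp))
```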
7

Nayak, Dillip Ranjan, Neelamadhab Padhy, Pradeep Kumar Mallick, Mikhail Zymbler, and Sachin Kumar. "Brain Tumor Classification Using Dense Efficient-Net." Axioms 11, no. 1 (2022): 34. http://dx.doi.org/10.3390/axioms11010034.

Abstract:
Brain tumors are most common in children and the elderly. A brain tumor is a serious form of cancer caused by uncontrollable brain cell growth inside the skull. Tumor cells are notoriously difficult to classify due to their heterogeneity. Convolutional neural networks (CNNs) are the most widely used machine learning algorithm for visual learning and brain tumor recognition. This study proposed a CNN-based dense EfficientNet using min-max normalization to classify 3260 T1-weighted contrast-enhanced brain magnetic resonance images into four categories (glioma, meningioma, pituitary, and no tumor). The developed network is a variant of EfficientNet with dense and drop-out layers added. Similarly, the authors combined data augmentation with min-max normalization to increase the contrast of tumor cells. The benefit of the dense CNN model is that it can accurately categorize a limited image database. As a result, the proposed approach provides exceptional overall performance. The experimental results indicate that the proposed model was 99.97% accurate during training and 98.78% accurate during testing. With high accuracy and a favorable F1 score, the newly designed EfficientNet CNN architecture can be a useful decision-making tool in the study of brain tumor diagnostic tests.
8

Patanavijit, Vorapoj. "Denoising performance analysis of adaptive decision based inverse distance weighted interpolation (DBIDWI) algorithm for salt and pepper noise." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 2 (2019): 804. http://dx.doi.org/10.11591/ijeecs.v15.i2.pp804-813.

Abstract:
Due to its superior performance in denoising images contaminated by impulsive noise, the adaptive decision based inverse distance weighted interpolation (DBIDWI) algorithm, proposed in 2017, is one of the most dominant and successful denoising algorithms. However, the DBIDWI algorithm is not suited to denoising full dynamic intensity range images, which contain min or max intensities. Consequently, this research article aims to study the performance and limitations of the DBIDWI algorithm when it is applied both to general images and to images comprising min or max intensities. In the simulation experiments, six noisy images (Lena, Mobile, Pepper, Pentagon, Girl and Resolution) under salt-and-pepper noise are used to evaluate the performance and limitations of the DBIDWI algorithm from a denoised image quality (PSNR) perspective.
9

Lu, Kuan, Song Gao, Pang Xiangkun, Zhu Lingkai, Xiangrong Meng, and Wenxue Sun. "Multi-layer Long Short-term Memory based Condenser Vacuum Degree Prediction Model on Power Plant." E3S Web of Conferences 136 (2019): 01012. http://dx.doi.org/10.1051/e3sconf/201913601012.

Abstract:
A multi-layer LSTM (long short-term memory) model is proposed for condenser vacuum degree prediction in power plants. Firstly, min-max normalization is used to pre-process the input data. Then, the model adopts a two-layer LSTM architecture to identify the time series pattern effectively. The ADAM (adaptive moment) optimizer is selected to find the optimum parameters for the model during training. Under the proposed forecasting framework, experiments illustrate that the two-layer LSTM model gives a more accurate forecast of the condenser vacuum degree than a simple RNN (recurrent neural network) and a one-layer LSTM model.
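A minimal sketch of the described architecture (min-max-scaled input series feeding two stacked LSTM layers trained with the ADAM optimizer), written with Keras; the layer sizes, window length, and synthetic series are assumptions:

```python
import numpy as np
from tensorflow import keras

# stand-in series; min-max scale, then build sliding windows of length 24 (assumed)
series = np.sin(np.linspace(0, 20, 500)) + 0.01 * np.random.randn(500)
s = (series - series.min()) / (series.max() - series.min())
win = 24
X = np.stack([s[i:i + win] for i in range(len(s) - win)])[..., None]
y = s[win:]

model = keras.Sequential([
    keras.layers.Input(shape=(win, 1)),
    keras.layers.LSTM(32, return_sequences=True),  # first LSTM layer
    keras.layers.LSTM(32),                         # second LSTM layer
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")        # ADAM optimizer, as in the paper
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```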
10

Sharma, Nikhil, Prateek Jeet Singh Sohi, and Bharat Garg. "An Adaptive Weighted Min-Mid-Max Value Based Filter for Eliminating High Density Impulsive Noise." Wireless Personal Communications 119, no. 3 (2021): 1975–92. http://dx.doi.org/10.1007/s11277-021-08314-5.

11

Alhamad, Apriyanto, Azminuddin I. S. Azis, Budy Santoso, and Sunarto Taliki. "Prediksi Penyakit Jantung Menggunakan Metode-Metode Machine Learning Berbasis Ensemble – Weighted Vote." Jurnal Edukasi dan Penelitian Informatika (JEPIN) 5, no. 3 (2019): 352. http://dx.doi.org/10.26418/jp.v5i3.37188.

Abstract:
Mortality caused by heart disease remains very high, so prevention efforts need to be strengthened, for example by improving the performance of prediction models. Applications of machine learning methods to the public datasets commonly used for heart disease prediction (Cleveland, Hungary, Switzerland, VA Long Beach, and Statlog), including the development of supporting tools, still do not handle missing values, noisy data, unbalanced classes, or even data validation efficiently. Therefore, mean/mode imputation is proposed to handle missing value replacement, Min-Max normalization to smooth noisy data, K-Fold Cross Validation for data validation, and an ensemble approach using the Weighted Vote (WV) method, which combines the performance of the individual machine learning methods in the classification decision while also reducing the effect of unbalanced classes. The results show that the proposed method achieves an accuracy of 85.21%, improving on the accuracy of the individual machine learning methods by 7.14% over Artificial Neural Network, 2.77% over Support Vector Machine, 0.34% over C4.5, 2.94% over Naïve Bayes, and 3.95% over k-Nearest Neighbor.
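The pipeline sketched here maps almost one-to-one onto scikit-learn components; the synthetic dataset, vote weights, and choice of base learners below are placeholders (a decision tree stands in for C4.5):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=13, random_state=0)  # stand-in data
X[::17, 3] = np.nan                                    # inject missing values

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("c45", DecisionTreeClassifier()),     # C4.5 stand-in
                ("nb", GaussianNB())],
    voting="soft", weights=[3, 2, 2],                  # weighted vote (weights assumed)
)
clf = make_pipeline(SimpleImputer(strategy="mean"),    # mean imputation
                    MinMaxScaler(),                    # min-max normalization
                    ensemble)
print(cross_val_score(clf, X, y, cv=10).mean())        # k-fold cross-validation
```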
12

Prasetyowati, Sri Arttini Dwi, Munaf Ismail, Eka Nuryanto Budisusila, De Rosal Ignatius Moses Setiadi, and Mauridhi Hery Purnomo. "Dataset Feasibility Analysis Method based on Enhanced Adaptive LMS method with Min-max Normalization and Fuzzy Intuitive Sets." International Journal on Electrical Engineering and Informatics 14, no. 1 (2022): 55–75. http://dx.doi.org/10.15676/ijeei.2022.14.1.4.

13

Dai, Jianhua, Ye Liu, and Jiaolong Chen. "Feature selection via max-independent ratio and min-redundant ratio based on adaptive weighted kernel density estimation." Information Sciences 568 (August 2021): 86–112. http://dx.doi.org/10.1016/j.ins.2021.03.049.

14

Benaliouche, Houda, and Mohamed Touahria. "Comparative Study of Multimodal Biometric Recognition by Fusion of Iris and Fingerprint." Scientific World Journal 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/829369.

Abstract:
This research investigates the comparative performance of three different approaches for multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint traits are fused at the matching score and decision levels. The score combination approach is applied after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level is the best, followed in order by the classical weighted sum rule and the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
15

Polatgil, Mesut. "Investigation of the Effect of Normalization Methods on ANFIS Success: Forestfire and Diabets Datasets." International Journal of Information Technology and Computer Science 14, no. 1 (2022): 1–8. http://dx.doi.org/10.5815/ijitcs.2022.01.01.

Abstract:
Machine learning and artificial intelligence techniques play an ever larger role in our lives, and studies in this field are increasing day by day. Data is vital for these studies: to draw meaningful conclusions from the available data, new methods are proposed and successful results are obtained. The preparation of the data is therefore very important, and the most critical stage of data preprocessing is the scaling or normalization of the data. Machine learning libraries such as scikit-learn and programming languages such as R provide the necessary libraries to scale data. However, it is not known in advance which normalization method will yield more successful results. The success of these normalization methods has been investigated for many different methods, but no such study had been done for the adaptive neural fuzzy inference system (ANFIS). The aim of this study is to examine the success of normalization methods on ANFIS for both classification and regression problems, so that studies using the ANFIS method have guidance on which normalization process gives better results in the data preprocessing stage. Four different normalization methods in the scikit-learn library were applied to the Diabets and Forestfire datasets in the UCI database. The results are presented separately for classification and regression. It was determined that min-max normalization is more successful for classification problems, while working with the original data is more successful for regression problems.
16

Hafs, Toufik, Hatem Zehir, Ali Hafs, and Amine Nait-Ali. "Multimodal Biometric System Based on the Fusion in Score of Fingerprint and Online Handwritten Signature." Applied Computer Systems 28, no. 1 (2023): 58–65. http://dx.doi.org/10.2478/acss-2023-0006.

Abstract:
Multimodal biometrics is the technique of using multiple modalities in a single system. This allows us to overcome the limitations of unimodal systems, such as the inability to acquire data from certain individuals or intentional fraud, while improving recognition performance. In this paper, a study of score normalization and its impact on the performance of the system is performed. The fusion of scores requires prior normalization before applying a weighted sum fusion that maps impostor and genuine scores into a common interval with close ranges. The experiments were carried out on three biometric databases. The results show that the proposed strategy performs very encouragingly, especially in combination with Empirical Modal Decomposition (EMD). The proposed fusion system shows good performance; the best result is obtained by merging the online signature and fingerprint globally, where an EER of 1.69 % is obtained by normalizing the scores according to the Min-Max method.
17

Vinitha, Vinitha, V. Parthasarathy, and R. Santhosh. "Dense-BiGRU: Densely Connected Bi-directional Gated Recurrent Unit based Heart Failure Detection using ECG Signal." Journal of Cybersecurity and Information Management 14, no. 2 (2024): 53–69. http://dx.doi.org/10.54216/jcim.140204.

Abstract:
Heart failure, a state marked by the heart's inability to pump blood adequately, can lead to serious health complications and reduced quality of life. Detecting heart failure early is crucial, as it allows for timely intervention and management strategies to prevent progression and improve patient outcomes. The effectiveness of integrating ECG and AI for heart failure detection stems from AI's capacity to meticulously analyze extensive ECG datasets, facilitating the early identification of nuanced cardiac irregularities and enhancing diagnostic precision. Current research, however, lacks sufficient accuracy and is burdened by complexity issues. To overcome this, we propose a novel Densely Connected Bi-directional Gated Recurrent Unit (Dense-BiGRU) model for accurate heart failure detection. In this work, we enhance the collected ECG signal through several pre-treatment steps, namely denoising, powerline-interference removal, and normalization, using the Collaborative Empirical Mode Decomposition (CEMD) algorithm, adaptive least mean squares (adaptive LMS), and the min-max normalization method, respectively. We utilize the LiteStream_Net layer to extract appropriate features from the pre-processed signal. Finally, heart failure detection is performed on the extracted features through the proposed Dense-BiGRU algorithm. The proposed approach is implemented using MATLAB simulation tools, and its validation is conducted through various simulation metrics including accuracy, recall, precision, F1-score, and AUC. The implementation results demonstrate that the proposed research surpasses existing state-of-the-art methodologies.
18

Zheng, Zhanguang, Kaiming Wan, Tao Xu, and Liping Jiang. "A Statistical Methodology of Cyclic Plasticity Inhomogeneity at Grain Scale." Journal of Modern Mechanical Engineering and Technology 12 (July 15, 2025): 25–33. https://doi.org/10.31875/2409-9848.2025.12.04.

Abstract:
Inhomogeneous plastic deformation has important effects on the manufacturing process and the fatigue properties of mechanical products. To directly and correctly evaluate grain-scale deformation inhomogeneity under cyclic loading, a statistical method, named the normalized standard deviation, is proposed. The method comprises the following steps: (1) construct a representative volume element (RVE) of the polycrystal by Voronoi tessellation and electron backscatter diffraction, and calculate the grain strain with a crystal cyclic plasticity constitutive model; (2) process the grain strain data of the RVE with the min-max normalization method; (3) compute the standard deviation of the normalized data as the indicator of mesoscopic inhomogeneity. To validate the proposed normalized standard deviation, contrastive analyses against strain contours, the weighted standard deviation, and the coefficient of variation were conducted under the same cyclic loading conditions. The results demonstrated that the normalized standard deviation was the best indicator of cyclic plasticity inhomogeneity among these methods.
19

Chen, Shushu. "A Fractional-Order Weighted and Self-Adaptive Max-Min Ant System with 3-Opt Algorithm for Traveling Salesman Problem." International Journal of Intelligent Information Systems 5, no. 4 (2016): 48. http://dx.doi.org/10.11648/j.ijiis.20160504.11.

20

Protic, Danijela, Loveleen Gaur, Miomir Stankovic, and Md Anisur Rahman. "Cybersecurity in Smart Cities: Detection of Opposing Decisions on Anomalies in the Computer Network Behavior." Electronics 11, no. 22 (2022): 3718. http://dx.doi.org/10.3390/electronics11223718.

Abstract:
The increased use of urban technologies in smart cities brings new challenges and issues. Cyber security has become increasingly important, as many critical components of information and communication systems depend on it, including various applications and civic infrastructures that use data-driven technologies and computer networks. Intrusion detection systems monitor computer networks for malicious activity. Signature-based intrusion detection systems compare the network traffic pattern to a set of known attack signatures and cannot identify unknown attacks. Anomaly-based intrusion detection systems monitor network traffic to detect changes in network behavior and identify unknown attacks. The biggest obstacle to anomaly detection is building a statistical normality model, which is difficult because a large amount of data is required to estimate it. Supervised machine learning-based binary classifiers are excellent tools for classifying data as normal or abnormal. Feature selection and feature scaling are performed to eliminate redundant and irrelevant data. Of the 24 features of the Kyoto 2006+ dataset, nine numerical features are considered essential for model training. Min-max normalization into the ranges [0,1] and [-1,1], Z-score standardization, and a new hyperbolic tangent normalization are used for scaling. The hyperbolic tangent normalization is based on the Levenberg-Marquardt damping strategy and a linearization of the hyperbolic tangent function with a narrow slope gradient around zero. Due to their proven classification ability, we used feedforward neural network, decision tree, support vector machine, k-nearest neighbor, and weighted k-nearest neighbor models in this study. Overall accuracy decreased by less than 0.1 percent, while processing time was reduced more than two-fold; the results show a clear benefit of the TH scaling regarding processing time. Regardless of how accurate the classifiers are, their decisions can sometimes differ. Our study describes a conflicting-decision detector based on an XOR operation performed on the outputs of two classifiers: the fastest model, a feedforward neural network, and the more accurate but slower weighted k-nearest neighbor model. The results show that up to 6% of decisions differ.
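Two pieces of this study lend themselves to short sketches: a tanh-based scaler (here the classic Hampel-style tanh estimator, which may differ in detail from the authors' Levenberg-Marquardt-based variant) and the XOR conflicting-decision detector applied to the outputs of two binary classifiers:

```python
import numpy as np

def tanh_scale(x):
    """Classic tanh-estimator scaling into (0, 1); the paper's variant
    is not fully specified in the abstract."""
    x = np.asarray(x, dtype=float)
    return 0.5 * (np.tanh(0.01 * (x - x.mean()) / x.std()) + 1.0)

def conflicting_decisions(pred_fast, pred_accurate):
    """XOR flags samples where the fast and the accurate model disagree."""
    return np.logical_xor(pred_fast, pred_accurate)

pred_ffnn = np.array([0, 1, 1, 0, 1])   # placeholder decisions (0 = normal, 1 = anomaly)
pred_wknn = np.array([0, 1, 0, 0, 1])
print(conflicting_decisions(pred_ffnn, pred_wknn))   # disagreement on the third sample
```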
21

Bevl Naidu, Krishna Babu Sambaru, Guru Prasad Pasumarthi, Romala Vijaya Srinivas, K. Srinivasa Krishna, and V. Purna Kumari Pechetty. "Solar-Powered Aerobics Training Robot with Adaptive Energy Management for Improved Environmental Sustainability." Journal of Environmental & Earth Sciences 7, no. 6 (2025): 482–96. https://doi.org/10.30564/jees.v7i6.9012.

Abstract:
With the rapid advancement of robotics and Artificial Intelligence (AI), aerobics training companion robots now support eco-friendly fitness by reducing reliance on nonrenewable energy. This study presents a solar-powered aerobics training robot featuring an adaptive energy management system designed for sustainability and efficiency. The robot integrates machine vision with an enhanced Dynamic Cheetah Optimizer and Bayesian Neural Network (DynCO-BNN) to enable precise exercise monitoring and real-time feedback. Solar tracking technology ensures optimal energy absorption, while a microcontroller-based regulator manages power distribution and robotic movement. Dual-battery switching ensures uninterrupted operation, aided by light and I/V sensors for energy optimization. Using the INSIGHT-LME IMU dataset, which includes motion data from 76 individuals performing Local Muscular Endurance (LME) exercises, the system detects activities, counts repetitions, and recognizes human movements. To minimize energy use during data processing, Min-Max normalization and two-dimensional Discrete Fourier Transform (2D-DFT) are applied, boosting computational efficiency. The robot accurately identifies upper and lower limb movements, delivering effective exercise guidance. The DynCO-BNN model achieved a high tracking accuracy of 96.8%. Results confirm improved solar utilization, ecological sustainability, and reduced dependence on fossil fuels—positioning the robot as a smart, energy-efficient solution for next-generation fitness technology.
22

Bagrecha, Chaya, Sachin Goswami, Manjula Jain, and Pompi Das Sengupta. "A framework for predicting stock prices based on a novel deep learning algorithm." Multidisciplinary Science Journal 6 (July 3, 2024): 2024ss0402. http://dx.doi.org/10.31893/multiscience.2024ss0402.

Abstract:
Stock price forecasting has been a difficult and crucial undertaking in the financial markets. Complex prediction models have been developed because stock values change under the influence of a wide range of variables. The development of deep learning (DL) and improved processing power have made programmed prediction techniques effective at forecasting stock prices. In this article, we propose a Stochastic Gradient Descent Weighted Long Short Term Memory (SGD-LSTM) method within a complete framework for predicting stock prices, which offers a novel viewpoint on stock market forecasting. Our system comprises three main parts: data preparation, feature extraction, and model design. To provide the highest quality data for training and assessment, we apply cutting-edge methods for data cleaning, normalization, and feature selection. After the stock price data are collected, the min-max normalization method is used to preprocess them. The study uses the relative strength index to extract features, since the index is representative of a variety of investing strategies considered by both buyers and sellers; this technique is widely used for analyzing financial markets. We compared our system against both conventional forecasting models and other deep learning techniques on a wide variety of historical stock datasets to determine its efficacy. The comparison reveals that the suggested prediction model provides more accurate forecasts.
23

Kumar Reddy, Sama Lenin, C. V. Rao, P. Rajesh Kumar, R. V. G. Anjaneyulu, and B. Gopala Krishna. "An index based road feature extraction from LANDSAT-8 OLI images." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 2 (2021): 1319. http://dx.doi.org/10.11591/ijece.v11i2.pp1319-1336.

Abstract:
Road feature extraction from remote sensing images is an arduous task and has a significant role in various applications such as urban planning, map updating, and traffic management. In this paper, a new band combination (B652) forming a road index (RI) from OLI multispectral bands, based on the spectral reflectance of asphalt, is presented for road feature extraction. B652 is converted to the road index by normalization. Morphological operators (top-hat or bottom-hat) are applied to the RI to enhance the roads. To sharpen the edges and better discriminate features, a shock square filter (SSF) is proposed. Then, an iterative adaptive threshold (IAT) based online search with variational min-max and a Markov random fields (MRF) model are used on the SSF image to segment roads from non-roads. The roads are extracted using rules based on connected component analysis. The IAT and MRF model segmentation methods prove that the proposed index (RI) is able to extract road features productively. The proposed methodology, a combination of saturation-based adaptive thresholding and morphology (SATM) and saturation-based MRF (SMRF), is applied to OLI images of several urban cities of India, producing satisfactory results. The experimental results, with quantitative analysis, are presented in the paper.
25

Malefaki, Sonia, Dionysios Markatos, Angelos Filippatos, and Spiros Pantelakis. "A Comparative Analysis of Multi-Criteria Decision-Making Methods and Normalization Techniques in Holistic Sustainability Assessment for Engineering Applications." Aerospace 12, no. 2 (2025): 100. https://doi.org/10.3390/aerospace12020100.

Abstract:
The sustainability evaluation of engineering processes and structures is a multifaceted challenge requiring the integration of diverse and often conflicting criteria. To address this challenge, Multi-Criteria Decision-Making (MCDM) methods have emerged as effective tools. However, the selection of the most suitable MCDM approach for problems involving multiple criteria is critical to ensuring robust, reliable, and actionable outcomes. Equally significant is the choice of a proper normalization technique, which plays a pivotal role in determining the robustness and reliability of the results. This study investigates the impact of common MCDM tools on the decision-making process concerning diverse aspects of sustainability. It also examines how different normalization methods influence the final outcomes. Sustainability in this context is understood as a trade-off among five key dimensions: performance, environmental impact, economic impact, social impact, and circularity. The outcome of the MCDM process is represented by an aggregated metric, referred to as the Sustainability Index (SI). This index offers a comprehensive and robust framework for evaluating sustainability and facilitating decision-making when conflicting criteria are present. To assess the effects of implementing different MCDM and normalization choices on the sustainability assessment, a dataset from the aviation sector is employed. Specifically, a typical aircraft component is analyzed as a case study for holistic sustainability assessment, utilizing data that represent the various dimensions of sustainability mentioned above, for this component. Additionally, the study investigates the influence of initial data variations and weight variations within the MCDM process on the results. The results indicate that, overall, the different MCDM and normalization methods lead to similar outcomes when applied to the design alternatives. However, a deeper dive into the results reveals that the weighted sum method, when paired with min-max normalization, appears to be more appropriate, based on the use case involved for the present investigation, due to its robustness regarding small variations in the initial data and its sensitivity to large ones. This research underscores the critical importance of selecting appropriate MCDM tools and normalization methods to enhance transparency, robustness, reliability, and consistency of sustainability assessments within a holistic framework.
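The aggregation this study favors (a weighted sum over min-max-normalized criteria) reduces to a few lines; the dimension values and weights below are invented for illustration, and all five dimensions are treated as benefit-type for simplicity:

```python
import numpy as np

# rows: design alternatives; cols: performance, environmental, economic, social, circularity
D = np.array([[0.90, 0.40, 0.55, 0.70, 0.35],
              [0.75, 0.60, 0.70, 0.65, 0.50],
              [0.85, 0.55, 0.60, 0.75, 0.45]])
w = np.array([0.3, 0.2, 0.2, 0.15, 0.15])   # illustrative weights summing to 1

N = (D - D.min(axis=0)) / (D.max(axis=0) - D.min(axis=0))   # min-max per criterion
SI = N @ w                                  # Sustainability Index per alternative
print(np.round(SI, 3), "-> best alternative:", SI.argmax())
```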
26

Jayabalan, Bhuvana, Rakesh Kumar Yadav, Raman Batra, and Ashendra Kumar Saxena. "An innovative neural network-based technique for identifying power quality issues." Multidisciplinary Science Journal 6 (July 12, 2024): 2024ss0301. http://dx.doi.org/10.31893/multiscience.2024ss0301.

Abstract:
Power quality (PQ) is defined as a combination of voltage quality and current quality. PQ disturbances are fast and difficult to predict. The primary issues industrial customers face relate to transient disturbances, including voltage sags, surges, harmonics, and interruptions. To address this problem, we propose an Adaptive Feedforward Bidirectional Gated Recurrent Neural Network (AF-BiGRNN) method to identify power quality issues, making PQ measurement a much more valuable asset. In the study, we use the Reference Energy Disaggregation (REDD) dataset. The collected data is preprocessed using min-max normalization to clean the data, and the Wavelet Packet Transform (WPT) is employed to extract features. The selected data is then used to test the AF-BiGRNN method, with simulations implemented in Python. As a result, our proposed method achieves significant outcomes in accuracy (96%), energy consumption (67.8%), computational time (2.0%), and voltage stability (97%). Identifying power quality issues is crucial for ensuring the reliability and efficiency of electrical systems.
27

Wenda, Akianus, Antonius R. Kopong Notan, Shalwa Azizah Rananda Sudirman, T. Ferdiansyah Sudirman, Tegar Surya Pratama, and Zurnan Alfian. "Application of K-Means on Human Rights, Demographic, Economic, and Crypto Investment Data." Journal of Artificial Intelligence and Engineering Applications (JAIEA) 4, no. 3 (2025): 2539–48. https://doi.org/10.59934/jaiea.v4i3.1215.

Abstract:
This study combines the K-Means Clustering and Decision Tree methods to analyze multidomain data covering economic and social human rights, demographics, poverty, crypto investment, and sustainable financing in Indonesia's financial services sector. Data was obtained from various credible sources such as the National Commission on Human Rights (Komnas HAM), the Central Statistics Agency (BPS), the Financial Services Authority (OJK), and scientific publications (2019–2023), then processed through missing value handling, outlier detection, and normalization using Min-Max Scaling and Z-score. K-Means was used to group regions based on the similarity of socio-economic and financial indicators, while the Decision Tree was used to classify financial entities based on ESG (Environmental, Social, and Governance) scores. Model evaluation was conducted using WCSS, the Silhouette Score, the Davies-Bouldin Index, and classification accuracy. The results show the formation of clusters representing different levels of inequality and sustainability in Indonesia. This approach contributes to understanding the dynamics of multidimensional development and provides a basis for more adaptive and sustainable policies in the socio-economic and financial sectors.
28

Ira Modifa Tarigan, Muhammad Ade Kurnia Harahap, Endang Setyawati, Jimmy Moedjahedy, Ernie C. Avila, and Robbi Rahim. "A Multi-Criteria Decision-Making Approach for Warehouse Location Selection using TOPSIS." JINAV: Journal of Information and Visualization 4, no. 1 (2023): 45–52. http://dx.doi.org/10.35877/454ri.jinav1616.

Abstract:
This research makes use of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to discover the most suitable site for a company's warehouse. After the criteria for warehouse location selection were established, weights were allotted to each criterion. The min-max method was utilized to normalize the data collected for each prospective location. The decision matrix was constructed from the weighted normalized values, and the ideal and non-ideal solutions were determined for each criterion. Following the calculation of the Euclidean distance between each potential location and the ideal and non-ideal solutions, the TOPSIS formula was used to determine the relative closeness of each potential location. The location with the highest relative closeness to the ideal solution was chosen as the optimal warehouse location. By employing this strategy, the company can make an informed decision regarding the location of its warehouse, which will, in the long run, result in improved operational efficiency and cost savings.
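The TOPSIS steps listed here (normalize, weight, locate the ideal and non-ideal points, rank by relative closeness) follow a standard recipe; the decision matrix, weights, and criteria below are illustrative:

```python
import numpy as np

def topsis(D, w, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    D: (alternatives x criteria) matrix, w: criterion weights,
    benefit: True where larger is better. Min-max normalization is
    used, as in the study.
    """
    N = (D - D.min(axis=0)) / (D.max(axis=0) - D.min(axis=0))
    N = np.where(benefit, N, 1.0 - N)                   # flip cost criteria
    V = N * w
    ideal, non_ideal = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))     # Euclidean distances
    d_neg = np.sqrt(((V - non_ideal) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                      # relative closeness in [0, 1]

D = np.array([[120.0, 4.5, 300.0],   # e.g. rent, access score, capacity (illustrative)
              [100.0, 3.8, 260.0],
              [140.0, 4.9, 320.0]])
w = np.array([0.4, 0.3, 0.3])
cc = topsis(D, w, benefit=np.array([False, True, True]))
print("closeness:", np.round(cc, 3), "-> pick site", cc.argmax())
```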
29

Khaled, Khaled. "EfficientDense-ViT: APT Detection via Hybrid Deep Learning Framework with Hybrid Dipper Throated Sine Cosine Optimization Algorithm (HDT-SCO)." Journal of Cybersecurity and Information Management 15, no. 2 (2025): 147–64. https://doi.org/10.54216/jcim.150212.

Abstract:
Advanced Persistent Threats (APTs) are intelligent, sophisticated cyberattacks that frequently evade detection by gradually interfering with vital systems or targeting sensitive data. This paper proposes a new approach, the Hybrid Dipper Throated Sine Cosine Optimization Algorithm (HDT-SCO), for APT detection in association with the EfficientDense-ViT model. It handles class imbalance with the Adaptive Synthetic Minority Oversampling Technique (ADASYN), applies min-max scaling for normalization, and uses median imputation for missing values. For feature engineering, ResNet-152 and Symbolic Aggregate Approximation (SAX) are adopted for statistical, deep, and time series feature extraction. HDT-SCO refines the selection of relevant features by integrating four approaches: PCA, RFE, RF feature importance, and L1 regularization (Lasso). The detection model itself is the hybrid deep learning model EfficientDense-ViT, a combination of EfficientNet, DenseNet, and Vision Transformers (ViT) that can detect APTs reliably. Compared to current detection techniques, this method shows considerable improvement in both accuracy (0.98741 for the 70:30 split and 0.99143 for the 80:20 split) and efficiency in detecting APTs in cybersecurity.
30

Kumar, C. Sandeep, Karan Ram Lal Gupta, Geetika M. Patel, and J. M. Haria. "A hybrid machine learning-powered intelligent system for enhancing dengue patient safety and care." Multidisciplinary Science Journal 6 (August 2, 2024): 2024ss0602. http://dx.doi.org/10.31893/multiscience.2024ss0602.

Abstract:
Dengue fever is a significant global health concern, with millions of cases reported each year, leading to considerable morbidity and mortality. Early diagnosis, patient monitoring, and timely intervention are crucial for managing dengue patients. This study proposes a hybrid machine learning-powered intelligent system designed to enhance dengue patient safety and care. Utilizing data provided at enrollment, including age, gender, and platelet, white cell, hematocrit, and lymphocyte counts, a Weighted K Nearest Neighbor fused Gradient Boosting Decision Tree (WKNN-GBDT) was utilized to forecast the final diagnosis. The study included 50 patients recruited between July and October 2019 and diagnosed with dengue infection. The collected data was preprocessed, and features were extracted, using min-max normalization and independent component analysis (ICA). The WKNN-GBDT model achieved an overall precision of 0.96, an F1-score of 0.90, an accuracy of 0.98, and a recall of 0.99 in predicting the final diagnosis. As a consequence of seasonality and other variables, model results changed over time. These models might enhance medical decision-making in medical care and offer passive surveillance in dengue-affected areas. Considering the unexpected consequences of human-induced climate change and its impact on health, as well as the implications of seasonality and shifting disease prevalence, is crucial.
31

Althobaiti, Maha M., and José Escorcia-Gutierrez. "Weighted salp swarm algorithm with deep learning-powered cyber-threat detection for robust network security." AIMS Mathematics 9, no. 7 (2024): 17676–95. http://dx.doi.org/10.3934/math.2024859.

Abstract:
The fast development of the internet of things has been accompanied by the complex worldwide problem of protecting interconnected devices and networks. Protecting cyber security is becoming increasingly complicated due to the enormous growth in computer connectivity and the number of new computer-related applications. Consequently, emerging intrusion detection systems can serve a potential cyber security function by identifying attacks and variations in computer networks. An efficient data-driven intrusion detection system can be built using artificial intelligence, especially machine learning methods, and deep learning methods offer advanced methodologies for identifying abnormalities in network traffic efficiently. Therefore, this article introduced a weighted salp swarm algorithm with deep learning-powered cyber-threat detection and classification (WSSADL-CTDC) technique for robust network security, with the aim of detecting the presence of cyber threats and keeping networks secure using metaheuristics with deep learning models; to accomplish this, a min-max normalization approach is implemented to scale the data into a uniform format. In addition, the WSSADL-CTDC technique applies the shuffled frog leap algorithm (SFLA) to elect an optimum subset of features and a hybrid convolutional autoencoder (CAE) model for cyber threat detection and classification. A WSSA-based hyperparameter tuning method is employed to enhance the detection performance of the CAE model. The simulation results of the WSSADL-CTDC system were examined on the benchmark dataset. The extensive analysis of the accuracy of the results found that the WSSADL-CTDC technique exhibited a better value of 99.13% than comparable methods on different measures.
32

Qin, Xinjing, Zhisheng Wang, Manqun Zhang, Yue Feng, and Kexian Li. "Lawn Lamp Design Based on Fuzzy Control and Secondary Optical Optimization." Applied Sciences 13, no. 3 (2023): 1631. http://dx.doi.org/10.3390/app13031631.

Abstract:
With the emergence of new technologies, the design of urban infrastructure is constantly being innovated, and the lawn lamp, as urban lighting infrastructure, is an important part of it. Current lawn lamps suffer from single-function designs, large power consumption, low light energy utilization, and other shortcomings. Combining deep learning and optical design, this paper constructs an adaptive lighting control system based on Internet technology. Considering the nonlinear and time-varying characteristics of external factors, a fuzzy control model with ambient light level and pedestrian flow as inputs and a dimming coefficient K (0 < K < 1) as output is proposed to adjust the brightness of the light source and save energy. To improve the light energy utilization of the luminaire and reduce its glare index, a free-form total internal reflection (TIR) lens was designed by finding the optimal curvature of the lens through the polycurved edge light principle. The light source of the lawn lamp was simulated in TracePro, and the results showed that light energy utilization reached 90%. Finally, the ambient illumination and pedestrian flow of Dalian ZT Park were measured on site for different time periods, and the data were normalized using the min-max normalization algorithm. The adaptive dimming capability of the system was verified through simulation and field tests, and the results showed that the lighting energy efficiency under the control system was 38%.
33

Priyadharshini, M., A. Faritha Banu, Bhisham Sharma, Subrata Chowdhury, Khaled Rabie, and Thokozani Shongwe. "Hybrid Multi-Label Classification Model for Medical Applications Based on Adaptive Synthetic Data and Ensemble Learning." Sensors 23, no. 15 (2023): 6836. http://dx.doi.org/10.3390/s23156836.

Abstract:
In recent years, both machine learning and computer vision have seen growth in the use of multi-label categorization. SMOTE is utilized in existing research for data balancing, but SMOTE does not consider that nearby examples may be from different classes when producing synthetic samples, which can increase class overlap and noise. To avoid this problem, this work presents an innovative technique called Adaptive Synthetic Data-Based Multi-label Classification (ASDMLC). Adaptive Synthetic (ADASYN) sampling is a strategy for learning from unbalanced data sets that weights minority-class instances by learning difficulty, so that synthetic data are created for hard-to-learn minority-class cases. The numerical variables are normalized with the Min-Max technique to standardize the magnitude of each variable's impact on the outcomes; the attribute values are mapped to a new range, from 0 to 1. To raise the accuracy of multi-label classification, Velocity-Equalized Particle Swarm Optimization (VPSO) is utilized for feature selection; to overcome the premature convergence problem, standard PSO is improved by equalizing the velocity across each dimension of the problem. To expose the inherent label dependencies, a multi-label classification ensemble of Adaptive Neuro-Fuzzy Inference System (ANFIS), Probabilistic Neural Network (PNN), and clustering-based decision tree methods is combined through an averaging method. Precision, recall, accuracy, and error rate are used to assess performance. The suggested model's multi-label classification accuracy is 90.88%, better than previous techniques: PCT, HOMER, and ML-Forest achieve 65.57%, 70.66%, and 82.29%, respectively.
34

Jayanna, Niranjan Shadaksharappa, and Raviprakash Madenur Lingaraju. "Seasonal auto-regressive integrated moving average with bidirectional long short-term memory for coconut yield prediction." International Journal of Electrical and Computer Engineering (IJECE) 15, no. 1 (2025): 783. http://dx.doi.org/10.11591/ijece.v15i1.pp783-791.

Abstract:
Crop yield prediction helps farmers make informed decisions regarding the optimal timing for crop cultivation, taking environmental factors into account to enhance predictive accuracy and maximize yields. Existing methods require a massive amount of data, which is complex to acquire. To overcome this issue, this paper proposes a seasonal auto-regressive integrated moving average-bidirectional long short-term memory (SARIMA-BiLSTM) model for coconut yield prediction. The collected dataset is preprocessed with a label encoder to change non-numeric features into numerical features, and min-max normalization is employed to enhance model performance. The preprocessed features are selected through an adaptive strategy-based whale optimization algorithm (AS-WOA) to avoid local optima issues. The selected features are then given to the SARIMA-BiLSTM to predict coconut yields. The proposed SARIMA-BiLSTM adapts to a wide spread of seasonal patterns and captures spatial features. Its performance is estimated through the coefficient of determination (R2), mean absolute error (MAE), mean squared error (MSE), and root mean square error (RMSE). SARIMA-BiLSTM attains an R2 of 0.84, MAE of 0.056, MSE of 0.081, and RMSE of 0.907, which is better than existing techniques such as the multilayer stacked ensemble, convolutional neural network with deep neural network (CNN-DNN), and autoregressive integrated moving average (ARIMA).
35

Wu, Pei-Yi, Yuan-Jin Lin, Yu-Jen Chang, et al. "Deep Learning-Assisted Diagnostic System: Apices and Odontogenic Sinus Floor Level Analysis in Dental Panoramic Radiographs." Bioengineering 12, no. 2 (2025): 134. https://doi.org/10.3390/bioengineering12020134.

Abstract:
Odontogenic sinusitis is a type of sinusitis caused by apical lesions of teeth near the maxillary sinus floor. Its clinical symptoms closely resemble those of other types of sinusitis, so dentists often misdiagnose it as general sinusitis in the early stages. This misdiagnosis delays treatment and may be accompanied by toothache. Therefore, using artificial intelligence to assist dentists in accurately diagnosing odontogenic sinusitis is crucial. This study introduces an innovative odontogenic sinusitis image processing technique that fuses contrast limited adaptive histogram equalization, Min-Max normalization, and an RGB mapping method. Moreover, this study combined various deep learning models to enhance diagnostic accuracy. The YOLO 11n model was used to detect the single tooth position associated with odontogenic sinusitis in dental panoramic radiographs and achieved an accuracy of 98.2%. The YOLOv8n-cls model diagnosed odontogenic sinusitis with a final classification accuracy of 96.1%, a 16.9% improvement over non-enhanced methods, outperforming recent studies by at least 4%. Additionally, in clinical applications, the classification accuracy for non-odontogenic sinusitis was 95.8%, while for odontogenic sinusitis it was 97.6%. The detection method developed in this study effectively reduces the radiation dose patients receive during CT imaging and serves as an auxiliary system, providing dentists with reliable support for the precise diagnosis of odontogenic sinusitis.
36

Liang, Zhenyu, Letian Chen, and Wenbin Xiao. "Compression AutoEncoder for High-Resolution Ocean Sound Speed Profile Data." Journal of Physics: Conference Series 2718, no. 1 (2024): 012067. http://dx.doi.org/10.1088/1742-6596/2718/1/012067.

Full text
Abstract:
High-resolution ocean sound speed profile (HROSSP) data is essential for ocean acoustic modeling and sonar performance evaluation. However, the large volume and storage requirements of this data severely restrict its practical application in ocean acoustics. In this paper, we propose a compression autoencoder specifically designed for managing HROSSP data (CAE-HROSSP) and investigate the optimal network structure. Experimental results demonstrate that using the min-max normalization method for input data and the corresponding inverse normalization for output data, along with the LeakyReLU function as the final activation layer, significantly improves the accuracy of decompressed data reconstruction. To tackle the difficulty of fitting the distribution of surface sound speed data, which exhibits significant variations and noise, we propose two loss functions: slice mean square error and elemental mean square error. These loss functions are combined with the mean squared error through a weighted summation to enhance CAE-HROSSP's ability to fit the distribution of surface sound speed values and to minimize the reconstruction errors of compressed data. Performance evaluation experiments reveal that CAE-HROSSP outperforms two existing methods in compressing HROSSP data, achieving smaller data reconstruction errors at higher compression ratios. Furthermore, transfer learning is utilized to enhance the training of CAE-HROSSP, employing HROSSP data from an area containing a mesoscale eddy as well as from the convergence of cold and warm ocean currents. The compression performance on the training and validation sets is comparable in seas where the structure of the sound speed profile varies greatly, indicating that CAE-HROSSP can compress highly variable sound speed profile data in more sea areas via transfer learning and has the potential to be extended globally. The findings and insights obtained from this study provide guidance for future efforts to compress HROSSP data with autoencoders.
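The abstract does not define the two auxiliary losses precisely, so the sketch below is only one plausible reading: a worst-slice MSE term, a worst-element term, and a weighted sum with plain MSE. The definitions and the weights w1-w3 are assumptions, not the paper's formulation.

```python
# One plausible reading of the weighted-sum loss (definitions and weights assumed).
import numpy as np

def combined_loss(pred, target, w1=1.0, w2=0.5, w3=0.5):
    err2 = (pred - target) ** 2            # element-wise squared error
    mse = err2.mean()                      # plain mean squared error
    slice_mse = err2.mean(axis=1).max()    # worst per-slice MSE (assumed reading)
    elem_mse = err2.max()                  # worst single element (assumed reading)
    return w1 * mse + w2 * slice_mse + w3 * elem_mse
```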
APA, Harvard, Vancouver, ISO, and other styles
37

Sheliemina, Nataliia, and Ievgen Rekun. "Economic and mathematical modeling of the mutual influence between a country’s economic well-being and healthcare expenditure." Problems and prospects of economics and management, no. 1(41) (May 16, 2025): 111–22. https://doi.org/10.25140/2411-5215-2025-1(41)-111-122.

Full text
Abstract:
This article investigates the relationship between a country's level of economic well-being and its healthcare expenditure using economic and mathematical modeling tools. The relevance of the topic stems from the increasing importance of human capital in transitional economies, demographic challenges, and the growing financial burden on healthcare systems due to post-crisis and pandemic factors. A composite well-being index (WI-5) is developed, integrating five key indicators: GDP per capita, public expenditure on education, life expectancy at birth, unemployment rate (inverted), and the Corruption Perceptions Index. The methodological approach is based on min-max normalization followed by weighted aggregation. Two regression models are constructed to assess both direct and reverse effects between well-being and healthcare spending. The analysis covers the period from 2020 to 2024. In cases of incomplete data, linear extrapolation was applied based on prior trends. The empirical focus includes Ukraine and three European Union countries: Germany, France, and Italy. The findings confirm the presence of a statistically significant bidirectional relationship, suggesting the cyclical nature of interactions between economic and social factors. Increased investment in healthcare positively contributes to economic well-being by improving population health and labor productivity. Conversely, higher levels of well-being expand fiscal capacity for sustained investment in the social sector. The results offer practical implications for designing evidence-based public policy in healthcare financing and long-term socio-economic planning.
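To make the WI-5 construction concrete, here is a minimal sketch under assumed bounds, equal weights, and illustrative indicator values (none taken from the article): each indicator is min-max normalized, the unemployment rate is inverted, and the results are aggregated by a weighted sum.

```python
# WI-5 sketch: min-max normalize five indicators, invert unemployment, weighted sum.
indicators = {   # hypothetical country-year values
    "gdp_per_capita": 4500.0, "education_spend_pct": 5.2,
    "life_expectancy": 73.4, "unemployment_pct": 9.8, "cpi_score": 33.0,
}
bounds = {       # assumed min/max used for normalization
    "gdp_per_capita": (1000.0, 60000.0), "education_spend_pct": (2.0, 8.0),
    "life_expectancy": (60.0, 85.0), "unemployment_pct": (2.0, 25.0),
    "cpi_score": (20.0, 90.0),
}
weights = {name: 0.2 for name in indicators}   # equal weights as an assumption

wi5 = 0.0
for name, value in indicators.items():
    lo, hi = bounds[name]
    score = (value - lo) / (hi - lo)           # min-max to [0, 1]
    if name == "unemployment_pct":
        score = 1.0 - score                    # inverted indicator
    wi5 += weights[name] * score
print(f"WI-5 = {wi5:.3f}")
```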
APA, Harvard, Vancouver, ISO, and other styles
38

Prihandi, Ifan, Sutarto Wijono, Irwan Sembiring, and Evi Maria. "Implementation of ARIMA with Min-Max Normalization for predicting the Price and Production Quantity of Red Chili Peppers in North Sumatra Province considering Rainfall and Sunlight Duration Factors." Engineering, Technology & Applied Science Research 15, no. 2 (2025): 21876–87. https://doi.org/10.48084/etasr.9875.

Full text
Abstract:
Red chili peppers are a vital agricultural commodity in the North Sumatra province, playing a significant role in Indonesia's economy. Fluctuations in chili prices affect farmers, consumers, and overall economic stability. This study leverages time series forecasting using the ARIMA model to predict red chili pepper prices and production, incorporating weather factors such as rainfall and sunlight duration. The dataset spans March 2021 to December 2023 and includes historical records of chili prices, production levels, and weather conditions. The analysis reveals a strong correlation between price fluctuations and production trends: Prices tend to rise when production declines and fall when yields increase. Additionally, production is influenced by weather conditions, where excessive rainfall damages crops and reduces yields, while balanced rainfall and sunlight duration support optimal growth. The ARIMA model demonstrates its effectiveness in capturing these patterns, providing actionable insights for farmers and policymakers to predict price changes and optimize production strategies. By integrating data-driven forecasting with weather analysis, this research contributes to more adaptive and informed decision-making in the agricultural sector.
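A hedged sketch of this forecasting setup follows: the price series is min-max normalized, an ARIMA model is fitted with statsmodels, and the forecasts are mapped back to the original scale. The series values and the (1, 1, 1) order are illustrative assumptions, not the paper's fitted model.

```python
# ARIMA on a min-max-normalized price series (illustrative data and order).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

prices = np.array([31000, 35500, 42000, 38750, 33200, 30100,
                   34800, 41200, 45900, 39700, 36400, 33900], dtype=float)
lo, hi = prices.min(), prices.max()
scaled = (prices - lo) / (hi - lo)        # min-max normalization

fit = ARIMA(scaled, order=(1, 1, 1)).fit()
ahead = fit.forecast(steps=3)             # next three periods, still in [0, 1]
print(lo + ahead * (hi - lo))             # back to the original price scale
```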
APA, Harvard, Vancouver, ISO, and other styles
39

Abdelaziz, Ahmed, and Alia N. Mahmoud. "Clustered IoT Based Data Fusion model for Smart Healthcare Systems." Journal of Intelligent Systems and Internet of Things 6, no. 2 (2022): 22–31. http://dx.doi.org/10.54216/jisiot.060202.

Full text
Abstract:
Futuristic sustainable computing solutions for e-healthcare applications depend on the Internet of Things (IoT) and cloud computing (CC), which provide several features and realistic services. IoT-related medical devices gather the necessary data, such as recurrent transmissions of health parameters, and improve the accuracy of those parameters within a standard period. These data can be generated by different types of sensors in different formats. As a result, data fusion is a major challenge in handling IoT-based data. Moreover, IoT gadgets and sensor-based medical parameters are deployed to detect diseases in time, before they reach a critical state. Machine learning (ML) methods play a significant role in making decisions and managing large volumes of data. This manuscript offers a new Hyperparameter Tuned Deep learning Enabled Clustered IoT Based Smart Healthcare System (HPTDLEC-SHS) model. The presented HPTDLEC-SHS technique mainly focuses on clustering IoT devices with a weighted clustering scheme and enables the disease diagnosis process. At the first level, the HPTDLEC-SHS technique exploits the min-max data normalization technique to convert the input data into a compatible format. Besides, the gated recurrent unit (GRU) model is utilized to carry out the classification process. Finally, the Jaya optimization algorithm (JOA) is exploited to fine-tune the hyperparameters of the GRU model. To demonstrate the enhanced performance of the HPTDLEC-SHS technique, an extensive comparative analysis was executed, and the outcomes highlighted its supremacy over other models.
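A minimal sketch of the classification stage described here — min-max-scaled sensor windows fed to a GRU classifier — assuming Keras, a 50-step window of 8 channels, and a binary diagnosis; the Jaya-based hyperparameter tuning is not reproduced.

```python
# GRU classifier for min-max-normalized sensor sequences (shapes and classes assumed).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 8)),            # 50 time steps x 8 channels (assumed)
    tf.keras.layers.GRU(64),                         # unit count would be tuned (JOA in the paper)
    tf.keras.layers.Dense(2, activation="softmax"),  # healthy vs. disease (assumed classes)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```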
APA, Harvard, Vancouver, ISO, and other styles
40

Mohammed, Gouse Pasha, Naif Alasmari, Hadeel Alsolai, Saud S. Alotaibi, Najm Alotaibi, and Heba Mohsen. "Autonomous Short-Term Traffic Flow Prediction Using Pelican Optimization with Hybrid Deep Belief Network in Smart Cities." Applied Sciences 12, no. 21 (2022): 10828. http://dx.doi.org/10.3390/app122110828.

Full text
Abstract:
Accurate and timely traffic flow prediction not only allows traffic controllers to avoid traffic congestion and guarantee standard traffic functioning, it also helps travelers plan ahead of schedule and modify travel routes promptly. Therefore, short-term traffic flow prediction utilizing artificial intelligence (AI) techniques has received significant attention in smart cities. This manuscript introduces an autonomous short-term traffic flow prediction model using an optimal hybrid deep belief network (AST2FP-OHDBN). The presented AST2FP-OHDBN model majorly focuses on high-precision traffic prediction when making near-future predictions in smart city environments. The AST2FP-OHDBN model initially normalizes the traffic data using min–max normalization. In addition, the HDBN model is employed for forecasting the traffic flow in the near future and makes use of a DBN with an adaptive learning step approach to enhance the convergence rate. To enhance the predictive accuracy of the DBN model, the pelican optimization algorithm (POA) is exploited as a hyperparameter optimizer, which in turn enhances the overall efficiency of the traffic flow prediction process. To assure the enhanced predictive outcomes of the AST2FP-OHDBN algorithm, a wide-ranging experimental analysis was executed. The experimental values reported the promising performance of the AST2FP-OHDBN method over recent state-of-the-art DL models, with a minimal average mean-square error of 17.19132 and a root-mean-square error of 22.6634.
APA, Harvard, Vancouver, ISO, and other styles
41

Fan, Ke-Jun, Bo-Yuan Liu, and Wen-Hao Su. "Discrimination of Deoxynivalenol Levels of Barley Kernels Using Hyperspectral Imaging in Tandem with Optimized Convolutional Neural Network." Sensors 23, no. 5 (2023): 2668. http://dx.doi.org/10.3390/s23052668.

Full text
Abstract:
Deoxynivalenol (DON) in raw and processed grain poses significant risks to human and animal health. In this study, the feasibility of classifying DON levels in different genetic lines of barley kernels was evaluated using hyperspectral imaging (HSI) (382–1030 nm) in tandem with an optimized convolutional neural network (CNN). Machine learning methods including logistic regression, support vector machine, stochastic gradient descent, K nearest neighbors, random forest, and CNN were used to develop the classification models. Spectral preprocessing methods including the wavelet transform and max-min normalization helped to enhance the performance of the different models. A simplified CNN model showed better performance than the other machine learning models. Competitive adaptive reweighted sampling (CARS) in combination with the successive projections algorithm (SPA) was applied to select the best set of characteristic wavelengths. Based on the seven selected wavelengths, the optimized CARS-SPA-CNN model distinguished barley grains with low levels of DON (<5 mg/kg) from those with higher levels (5 mg/kg < DON ≤ 14 mg/kg) with an accuracy of 89.41%. The lower levels of DON class I (0.19 mg/kg ≤ DON ≤ 1.25 mg/kg) and class II (1.25 mg/kg < DON ≤ 5 mg/kg) were successfully distinguished based on the optimized CNN model, yielding a precision of 89.81%. The results suggest that HSI in tandem with CNN has great potential for discriminating DON levels in barley kernels.
APA, Harvard, Vancouver, ISO, and other styles
42

Alsubayhay, Abraheem Mohammed Sulayman, Mohamed A. E. Abdalla, and Ali A. Salem Buras. "Adaptive HCI Systems with GRU-Based User Emotion Recognition and Response Prediction." International Science and Technology Journal 35, no. 1 (2024): 1–19. http://dx.doi.org/10.62341/amam2098.

Full text
Abstract:
Human-Computer Interaction (HCI) has evolved to incorporate sophisticated systems capable of recognizing and responding to user emotions, enhancing the user experience by making interactions more intuitive and engaging. This paper explores the development of adaptive HCI systems utilizing Gated Recurrent Unit (GRU) neural networks for emotion recognition and response prediction. The core objective is to create interfaces that dynamically adjust based on the user's emotional state, thereby improving usability and satisfaction across various applications, including healthcare, education, and customer service. GRU-based models are particularly effective for this task due to their ability to handle sequential data and capture temporal dependencies inherent in emotional expressions. By processing multimodal inputs such as facial expressions, voice intonations, text, and physiological signals, these systems can accurately detect and predict emotions in real time. The DEAP dataset, which includes EEG and other physiological recordings along with self-assessed emotional ratings, serves as a foundational resource for training and validating these models. The proposed approach involves pre-processing the physiological data using techniques like min-max normalization to ensure consistency and stability during model training. The GRU models then learn to map these inputs to corresponding emotional states, leveraging their memory capabilities to retain and interpret temporal patterns. The adaptive nature of these systems enables them to provide personalized responses, such as adjusting the interface layout or offering support when negative emotions are detected. The implementation of GRU-based emotion recognition and response prediction in HCI systems holds significant potential for enhancing user interactions by making them more responsive and emotionally intelligent. This paper demonstrates the effectiveness of GRU models in real-time emotion monitoring and highlights their applications in creating more adaptive and empathetic technology interfaces. The proposed model is implemented in Python and achieves an accuracy of about 99.12%, which is higher than that of other existing methods. Keywords: Human-Computer Interaction (HCI), Gated Recurrent Unit (GRU), Neural Networks, Emotional Detection.
APA, Harvard, Vancouver, ISO, and other styles
43

Vaijayanthimala, J., and T. Padma. "Synthesis Score Level Fusion Based Multifarious Classifier for Multi-Biometrics Applications." Journal of Medical Imaging and Health Informatics 9, no. 8 (2019): 1673–80. http://dx.doi.org/10.1166/jmihi.2019.2762.

Full text
Abstract:
In this paper, we present a face and signature recognition method for a large dataset with different poses and multiple features. Initially, the face and the corresponding signature are captured from devices for further pre-processing. Face recognition is the first stage of the system, after which signature verification is performed. The proposed Legion feature-based verification method is developed in four important steps: (i) feature extraction from the face and from data glove signals, in which features such as local binary patterns and the shape and geometrical features of the face, together with the global and local features of the signatures, are extracted; (ii) match score normalization, used to enhance recognition accuracy through min–max and median estimations; (iii) evaluation of the match scores using synthesis score level fusion based feature matching through Euclidean distance; and (iv) recognition based on the final score. Finally, based on the feature library, the face image and signature can be recognized. The similarity measurement is performed by a synthesis score level fusion (SSF) based multifarious neural network (MNN) classifier with a weighted summation formula, in which two weights are found optimally using the adapted motion search optimization algorithm. The SSF-MNN matching score fusion then drives the decision classifier that labels biometrics as recognized or non-recognized. Moreover, in a comparative analysis, the proposed technique is compared with existing methods on several performance metrics, and the proposed SSF-MNN technique recognizes face images and corresponding signatures from the input databases more efficiently than the existing techniques.
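The sketch below illustrates the score-level fusion idea under stated assumptions: each matcher's scores are normalized (min-max here, with a median/MAD variant shown for comparison) and fused by a weighted sum. The scores, the 0.6/0.4 weights, and the decision threshold are hypothetical; the paper instead tunes its two weights with the adapted motion search optimization algorithm.

```python
# Score normalization + weighted score-level fusion (all values assumed).
import numpy as np

face_scores = np.array([0.32, 0.78, 0.55])   # hypothetical face matcher outputs
sig_scores = np.array([12.0, 45.0, 30.0])    # hypothetical signature matcher outputs

def min_max(s):
    return (s - s.min()) / (s.max() - s.min())

def median_norm(s):                           # one common median/MAD variant
    mad = np.median(np.abs(s - np.median(s)))
    return (s - np.median(s)) / (mad + 1e-8)

w_face, w_sig = 0.6, 0.4                      # assumed weights (paper optimizes these)
fused = w_face * min_max(face_scores) + w_sig * min_max(sig_scores)
accepted = fused >= 0.5                       # assumed decision threshold
```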
APA, Harvard, Vancouver, ISO, and other styles
44

Guo, Xia. "Research on interactive english classroom teaching based on biosensor technology: Analysis of biological indicators." Molecular & Cellular Biomechanics 22, no. 2 (2025): 935. https://doi.org/10.62617/mcb935.

Full text
Abstract:
With the advancement of educational technology, biosensors are becoming valuable in enhancing classroom interactivity and adapting teaching strategies. In English language classrooms, maintaining student engagement and managing learning anxiety are essential for effective learning, yet traditional methods fail to offer real-time insights into student engagement and emotional states. The objective of the research was to enhance language instruction effectiveness by monitoring learners' cognitive states using biosensor technology. Initially, biosensors were used to collect physiological data such as heart rate variability, eye movement, facial expression, posture, and seating data from students during English language lessons, gathered over four weeks in a controlled classroom setting. The collected data underwent noise reduction based on the signal-to-noise ratio (SNR) to improve signal clarity and min-max normalization to scale the data within a consistent range for accurate analysis. Spiking neural networks (SNNs), which mimic brain-like neural processing, are integrated with the biosensors, enabling dynamic adaptation of teaching content based on physiological signals and enhancing personalized learning by responding to students' cognitive and emotional states. The findings show that biosensor technology combined with SNNs significantly improves student engagement, reduces language anxiety, and increases learning efficiency. Compared to the other weeks, student engagement (30%), cognitive load (10%), task completion efficiency (30%), attention focus (35%), and teacher-student interaction (35%) all showed better outcomes in Week 4. This suggests that biosensor-driven adaptive teaching, powered by SNNs, has the potential to transform interactive language learning.
APA, Harvard, Vancouver, ISO, and other styles
45

Jayanna, Niranjan Shadaksharappa, and Raviprakash Madenur Lingaraju. "Seasonal auto-regressive integrated moving average with bidirectional long short-term memory for coconut yield prediction." International Journal of Electrical and Computer Engineering (IJECE) 15, no. 1 (2025): 783–91. https://doi.org/10.11591/ijece.v15i1.pp783-791.

Full text
Abstract:
Crop yield prediction helps farmers make informed decisions about the optimal timing for crop cultivation, taking environmental factors into account to enhance predictive accuracy and maximize yields. Existing methods require a massive amount of data, which is difficult to acquire. To overcome this issue, this paper proposes a seasonal auto-regressive integrated moving average-bidirectional long short-term memory (SARIMA-BiLSTM) model for coconut yield prediction. The collected dataset is preprocessed with a label encoder and min-max normalization to convert non-numeric features into numerical features and enhance model performance. The preprocessed features are selected through an adaptive strategy-based whale optimization algorithm (AS-WOA) to avoid local optima issues. The selected features are then given to the SARIMA-BiLSTM to predict coconut yields. The proposed SARIMA-BiLSTM adapts to a wide range of seasonal patterns and captures spatial features. Performance is estimated through the coefficient of determination (R2), mean absolute error (MAE), mean squared error (MSE), and root mean square error (RMSE). SARIMA-BiLSTM attains an R2 of 0.84, an MAE of 0.056, an MSE of 0.081, and an RMSE of 0.907, outperforming existing techniques such as the multilayer stacked ensemble, the combined convolutional neural network and deep neural network (CNN-DNN), and the autoregressive moving average (ARIMA) model.
APA, Harvard, Vancouver, ISO, and other styles
46

Rohini. C. "Intelligent Edge Healthcare Using Federated Learning and Clustering with Kepler-Optimized Steerable Graph Neural Networks." Journal of Information Systems Engineering and Management 10, no. 39s (2025): 1–13. https://doi.org/10.52783/jisem.v10i39s.7054.

Full text
Abstract:
Nowadays, people frequently use smart healthcare systems (SHS), relying on a variety of smart devices to monitor their health. The SHS uses the Internet of Things (IoT) and cloud infrastructure for data collection, transmission via smart devices, data processing, storage, and medical advice. It can be difficult to process so much data from so many IoT devices in a short period. Therefore, in SHS, technical frameworks like fog computing or edge computing can be utilized as mediators between the user and the cloud. This shortens response times for lower-level (edge-level) data processing. If anomalous data are generated, the system reacts quickly while storing and retrieving important data securely. This paper presents a smart health monitoring system architecture comprising three core layers: Data Generation, Edge Computing, and Cloud Storage. The Data Generation Layer utilizes IoMT devices, wearables, and sensors connected to an Edge-IoT Gateway for stream data acquisition. The Edge Computing Layer uses Z-score Min-Max normalization-based preprocessing, cascading residual graph convolutional networks to extract features, and the Steerable Graph Neural Network with Kepler Optimization Algorithm (SGNN-KOA) to improve performance. The Cloud Storage Layer enhances the security of the cloud network using lightweight dynamic elliptic curve cryptography with Schoof's algorithm for data storage. The system is designed to support multi-modal learning, adaptive feedback, and secure access to facilitate comprehensive health management. With an accuracy rate of over 99%, convergence in fewer iterations, and strong classification capability (e.g., an AUC of 0.99), the suggested method outperforms the others in accuracy across epochs, with reduced divergence.
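The abstract's "Z-score Min-Max normalization" is read below as z-score standardization followed by min-max rescaling to [0, 1]; this ordering is an assumption, since the exact combination is not spelled out, and the data are placeholders.

```python
# Z-score standardization followed by min-max rescaling (assumed combination).
import numpy as np

x = np.array([72.0, 88.0, 95.0, 60.0, 110.0])   # hypothetical vital-sign stream
z = (x - x.mean()) / x.std()                     # z-score: zero mean, unit variance
zmm = (z - z.min()) / (z.max() - z.min())        # min-max onto [0, 1]
```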
APA, Harvard, Vancouver, ISO, and other styles
47

Gupta, Rajesh, Aditya Sharma, Charu Wadhwa, and Yogananthan S. "Improving supply chain performance administration using a novel deep learning algorithm." Multidisciplinary Science Journal 6 (July 3, 2024): 2024ss0410. http://dx.doi.org/10.31893/multiscience.2024ss0410.

Full text
Abstract:
Businesses and economic growth together depend on effective supply chain management. The threats posed by poor supply chain management are beyond the reach of present management methods. To study the improvement of supply chain efficiency management, we propose a learning and neural-network-based supply chain risk management model, the Integrated Tunicate Swarm Algorithm Adaptive Multilayer Feed Forward Neural Networks (ITSA-AMFNN) method, for determining strategies to manage supply chains. The initial step of our methodology is to gather both historical and current supply chain data, featuring information on demand, stock, shipping, and vendor efficiency. To ensure stability and prevent abnormalities that might affect the accuracy of our model, we apply min–max normalization to the raw data before feeding it into our classification method. Nonnegative Matrix Factorization (NMF), a feature extraction method, is used to extract information from the supply chain data. By revealing overlooked latent elements in the data, NMF helps to enhance forecasting and decision-making. The core of our study is the development of a novel deep learning classifier called ITSA-AMFNN. This structure fuses swarm optimization with the flexibility of multilayer feed-forward neural networks. To keep up with the ever-changing demands of the supply chain, ITSA-AMFNN excels at demand prediction, inventory optimization, anomaly detection, route optimization, and supplier performance assessment. Finally, the research assesses the risk indicator system in light of the actual state of supply chain management. We use stringent performance assessment criteria, including accuracy, precision, sensitivity, and specificity, to determine how our innovative algorithm performs. The method has been shown to optimize supply chain processes and improve performance in real-world case studies across many sectors.
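A hedged sketch of the NMF feature-extraction step follows: a nonnegative samples-by-indicators matrix is factored into latent features W and loadings H, with W then available to the downstream classifier. The matrix shape, component count, and data are assumptions.

```python
# NMF feature extraction on a hypothetical supply chain feature matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 12))                # 100 samples x 12 indicators (assumed, nonnegative)
nmf = NMF(n_components=4, init="nndsvda", max_iter=500)
W = nmf.fit_transform(X)                 # latent features per sample -> classifier input
H = nmf.components_                      # how each latent factor loads on the indicators
```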
APA, Harvard, Vancouver, ISO, and other styles
48

Wei, Lifei, Ziran Yuan, Yanfei Zhong, Lanfang Yang, Xin Hu, and Yangxi Zhang. "An Improved Gradient Boosting Regression Tree Estimation Model for Soil Heavy Metal (Arsenic) Pollution Monitoring Using Hyperspectral Remote Sensing." Applied Sciences 9, no. 9 (2019): 1943. http://dx.doi.org/10.3390/app9091943.

Full text
Abstract:
Hyperspectral remote sensing can be used to effectively identify contaminated elements in soil. However, in the field of monitoring soil heavy metal pollution, hyperspectral remote sensing data are highly dimensional and highly redundant, which seriously affects the accuracy and stability of hyperspectral inversion models. To resolve this problem, a gradient boosting regression tree (GBRT) hyperspectral inversion algorithm for heavy metal (Arsenic (As)) content in soils, based on Spearman's rank correlation analysis (SCA) coupled with competitive adaptive reweighted sampling (CARS), is proposed in this paper. Firstly, the CARS algorithm is used to roughly select the original spectral data. Second derivative (SD), Gaussian filtering (GF), and min-max normalization (MMN) pretreatments are then used to improve the correlation between the spectra and As in the characteristic band enhancement stage. Finally, the low-correlation bands are removed using the SCA method, and a subset with absolute correlation values greater than 0.6 is retained as the optimal band subset after each pretreatment. For the modeling, the five most representative characteristic bands were selected in the Honghu area of China, and the nine most representative characteristic bands were selected in the Daye area of China. In order to verify the generalization ability of the proposed algorithm, 92 soil samples from the Honghu and Daye areas were selected as the research objects. With support vector machine regression (SVMR), linear regression (LR), and random forest (RF) regression methods as comparative methods, all the models obtained good prediction accuracy. However, among the different combinations, CARS-SCA-GBRT obtained the highest precision, which indicates that the proposed algorithm can select fewer characteristic bands to achieve a better inversion effect, and can thus provide accurate data support for the treatment and recovery of heavy metal pollution in soils.
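To illustrate the SCA filtering described here, the sketch below keeps only the bands whose absolute Spearman correlation with the arsenic content exceeds 0.6; the array shapes and random data are placeholders, not the Honghu or Daye samples.

```python
# Band selection by absolute Spearman correlation > 0.6 (placeholder data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
spectra = rng.random((92, 200))          # 92 soil samples x 200 bands (assumed shape)
arsenic = rng.random(92)                 # measured As content per sample

keep = []
for band in range(spectra.shape[1]):
    rho, _ = spearmanr(spectra[:, band], arsenic)
    if abs(rho) > 0.6:
        keep.append(band)
subset = spectra[:, keep]                # candidate inputs for the GBRT model
```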
APA, Harvard, Vancouver, ISO, and other styles
49

Aljuhni, Abdullah, Amer Aljaedi, Adel R. Alharbi, Ahmed Mubaraki, and Moahd K. Alghuson. "Hybrid Dynamic Galois Field with Quantum Resilience for Secure IoT Data Management and Transmission in Smart Cities Using Reed–Solomon (RS) Code." Symmetry 17, no. 2 (2025): 259. https://doi.org/10.3390/sym17020259.

Full text
Abstract:
The Internet of Things (IoT), characteristic of the current industrial revolutions, is the connection of physical devices through different protocols and sensors to share information. Even though the IoT provides revolutionary opportunities, its connection to the current Internet for smart cities opens the door to new security threats, especially with the appearance of emerging threats like quantum computing. Current approaches to protecting IoT data are not immune to quantum attacks and are not designed to offer the best data management for smart city applications. Post-quantum cryptography (PQC), which is still in its research stage, aims to solve these problems. To this end, this research introduces the Dynamic Galois Reed–Solomon with Quantum Resilience (DGRS-QR) system to improve the secure management and communication of data in IoT smart cities. Data preprocessing includes K-Nearest Neighbors (KNN) and min–max normalization, followed by the Galois Field Adaptive Expansion (GFAE). Optimization of the quantum-resistant keys is accomplished by applying the Artificial Bee Colony (ABC) and Moth Flame Optimization (MFO) algorithms. Role-based access control provides strong cloud data security, and quantum resistance is maintained by refreshing keys every five minutes during an active session. For error correction, Reed–Solomon (RS) codes are used, which provide data reliability. Data management is performed using an attention-based Bidirectional Long Short-Term Memory (Att-Bi-LSTM) model with skip connections to provide optimized city management. The proposed approach was evaluated using key performance metrics: a key generation time of 2.34 s, encryption time of 4.56 s, decryption time of 3.56 s, PSNR of 33 dB, and SSIM of 0.99. The results show that the proposed system is capable of protecting IoT data from quantum threats while also ensuring optimal data management and processing.
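As a small illustration of the Reed-Solomon error-correction layer mentioned above, the sketch below uses the third-party reedsolo package as a stand-in (the paper's own RS implementation is not shown). The payload and parity length are assumptions, and the three-value decode return assumes reedsolo >= 1.0.

```python
# Reed-Solomon encode/corrupt/decode round trip via the reedsolo package.
from reedsolo import RSCodec

rs = RSCodec(10)                               # 10 parity symbols -> corrects up to 5 byte errors
packet = bytearray(rs.encode(b"sensor reading 42"))
packet[3] ^= 0xFF                              # simulate corruption on the channel
decoded, _, _ = rs.decode(bytes(packet))       # reedsolo >= 1.0 returns (msg, msg+ecc, errata)
assert decoded == b"sensor reading 42"
```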
APA, Harvard, Vancouver, ISO, and other styles
50

Bashynska, Iryna, and Ihor Bashynskyi. "THE IMPACT OF ARTIFICIAL INTELLIGENCE ON THE SMARTIZATION OF ENTERPRISES." Смарт-економіка, підприємництво та безпека 2, no. 2 (2024): 17–25. https://doi.org/10.60022/sis.2.(02).2.

Full text
Abstract:
The increasing role of Artificial Intelligence (AI) in enterprise transformation necessitates comprehensive assessment models that capture the full spectrum of AI-driven smartization. This study introduces the AI-Enterprise Smartization Index (AIES-Index) — a structured framework designed to evaluate AI adoption across five key dimensions: AI adoption level, decision-making autonomy, explainability and transparency, operational efficiency, and sustainability contributions. Unlike existing AI maturity models, which primarily focus on strategic or financial aspects, AIES-Index integrates quantifiable metrics that measure AI's real-world impact on business operations, decision-making processes, and sustainability efforts. The study employs a multi-criteria weighted index methodology, where each category is assigned a specific weight based on its significance in AI-enabled smartization. A min-max normalization approach ensures comparability across enterprises, allowing for objective benchmarking. To differentiate AIES-Index from existing models, a comparative analysis highlights its advantages in measuring AI adaptability, transparency, and ESG alignment, areas often overlooked in traditional AI capability frameworks. The results demonstrate that AIES-Index provides a more holistic and quantifiable assessment of AI-driven smartization, incorporating both financial and non-financial metrics. The model emphasizes AI's role in enhancing operational efficiency, optimizing business processes, improving explainability, and driving sustainable innovation. A key finding is that AI transparency and decision-making autonomy are critical factors influencing enterprise-wide adoption and regulatory compliance. Future research will focus on empirical validation of AIES-Index using real enterprise data, enabling practical applications across industries. Additionally, a sensitivity analysis will be conducted to assess the model's robustness by evaluating how variations in specific indicators affect the overall AIES-Index score. These steps will ensure the model's adaptability to dynamic business environments and sector-specific AI implementations. The proposed AIES-Index framework serves as a valuable tool for enterprises, policymakers, and researchers, offering a structured methodology to evaluate AI's transformative impact on business processes while aligning with modern governance, ethical, and sustainability standards.
APA, Harvard, Vancouver, ISO, and other styles