Academic literature on the topic 'Weighted adaptive min-max normalization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Weighted adaptive min-max normalization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Weighted adaptive min-max normalization"

1

Kalluri, Venkata Saiteja, Sai Chakravarthy Malineni, Manjula Seenivasan, Jeevitha Sakkarai, Deepak Kumar, and Bhuvanesh Ananthan. "Enhancing manufacturing efficiency: leveraging CRM data with Lean-based DL approach for early failure detection." Bulletin of Electrical Engineering and Informatics 14, no. 3 (2025): 2319–29. https://doi.org/10.11591/eei.v14i3.8757.

Abstract:
In the pursuit of enhancing manufacturing competitiveness in India, companies are exploring innovative strategies to streamline operations and ensure product quality. Embracing Lean principles has become a focal point for many, aiming to optimize profitability while minimizing waste. As part of this endeavour, researchers have introduced various methodologies grounded in Lean principles to track and mitigate operational inefficiencies. This paper introduces a novel approach leveraging deep learning (DL) techniques to detect early failures in manufacturing systems. Initially, real-time data is collected and subjected to a normalization process, employing the weighted adaptive min-max normalization (WAdapt-MMN) technique to enhance data relevance and facilitate the training process. Subsequently, the paper proposes the utilization of a triple streamed attentive recalling recurrent neural network (TSAtt-RRNN) model to effectively identify Lean-based manufacturing failures. Through empirical evaluation, the proposed approach achieves promising results, with an accuracy of 99.23%, precision of 98.79%, recall of 98.92%, and F-measure of 99.2% in detecting early failures. This research underscores the potential of integrating DL methodologies with customer relationship management (CRM) data to bolster early failure detection capabilities in manufacturing, thereby fostering operational efficiency and competitive advantage.
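The abstract does not spell out the WAdapt-MMN formula, but the underlying idea of min-max scaling combined with per-feature weights can be sketched as below. The `weights` argument and the way it is multiplied into the scaled values are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def weighted_minmax_normalize(X, weights=None, eps=1e-12):
    """Scale each feature (column) of X to [0, 1] via min-max
    normalization, then multiply by an optional per-feature weight.

    Illustrative sketch only: the paper's WAdapt-MMN technique may
    choose and combine weights differently.
    """
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    scaled = (X - x_min) / (x_max - x_min + eps)  # classic min-max to [0, 1]
    if weights is not None:
        scaled = scaled * np.asarray(weights, dtype=float)
    return scaled

# Toy sensor readings (hypothetical, not the paper's CRM data)
X = np.array([[1.0, 200.0], [3.0, 400.0], [5.0, 600.0]])
print(weighted_minmax_normalize(X, weights=[1.0, 0.5]))
```

The `eps` term guards against division by zero when a feature is constant across all samples.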
2

Prasetyowati, Sri Arttini Dwi, Munaf Ismail, and Badieah Badieah. "Implementation of Least Mean Square Adaptive Algorithm on Covid-19 Prediction." JUITA: Jurnal Informatika 10, no. 1 (2022): 139. http://dx.doi.org/10.30595/juita.v10i1.11963.

Abstract:
This study used Corona Virus Disease-19 (Covid-19) data in Indonesia from June to August 2021, consisting of data on people who were infected with (positive for) Covid-19, recovered from Covid-19, and passed away from Covid-19. Processing the data with the adaptive LMS algorithm directly, without pre-processing, caused calculation errors because the Covid-19 data were not balanced. Z-score and min-max normalization were chosen as pre-processing methods. After that, the prediction process was carried out using the adaptive LMS method. The analysis was done by observing the prediction error that occurred every month per case. The results showed that pre-processing with min-max normalization was better than with Z-score normalization, because the prediction errors for min-max and Z-score pre-processing were 18% and 47%, respectively.
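The two pre-processing methods compared in this study follow their standard definitions; a minimal sketch (the case counts are made up for illustration, not the study's data):

```python
import numpy as np

def minmax(x):
    # Rescale to [0, 1]: (x - min) / (max - min)
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def zscore(x):
    # Center to mean 0 and unit standard deviation: (x - mean) / std
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical daily case counts with one extreme spike,
# mimicking the kind of imbalance the abstract mentions
daily_cases = np.array([120.0, 95.0, 300.0, 2500.0, 180.0])
print(minmax(daily_cases))
print(zscore(daily_cases))
```

Min-max keeps every value inside a fixed [0, 1] range regardless of outliers, which is one reason it can behave better than z-score on unbalanced data like this.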
3

Rodríguez, Carlos Gervasio, María Isabel Lamas, Juan de Dios Rodríguez, and Claudio Caccia. "ANALYSIS OF THE PRE-INJECTION CONFIGURATION IN A MARINE ENGINE THROUGH SEVERAL MCDM TECHNIQUES." Brodogradnja 72, no. 4 (2021): 1–17. http://dx.doi.org/10.21278/brod72401.

Abstract:
The present manuscript describes a computational model employed to characterize the performance and emissions of a commercial marine diesel engine. This model analyzes several pre-injection parameters, such as starting instant, quantity, and duration. The goal is to reduce nitrogen oxides (NOx), as well as to account for the effect on other emissions and consumption. Since some of the parameters considered have opposite effects on the results, the present work proposes a MCDM (Multiple-Criteria Decision Making) methodology to determine the most adequate pre-injection configuration. An important issue in MCDM models is the data normalization process. This operation is necessary to convert the available data into a non-dimensional common scale, thus allowing alternatives to be ranked and rated. It is important to select a suitable normalization technique, and several methods exist in the literature. This work considers five well-known normalization procedures: linear max, linear max-min, linear sum, vector, and logarithmic normalization. As to the solution technique, the study considers three MCDM models: WSM (Weighted Sum Method), WPM (Weighted Product Method) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). The linear max, linear sum, vector, and logarithmic normalization procedures brought the same result: -22º CA ATDC pre-injection starting instant, 25% pre-injection quantity and 1-2º CA pre-injection duration. Nevertheless, the linear max-min normalization procedure provided a different result from the others and is not recommended.
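Several of the normalization procedures the study compares, plus the WSM ranking step, can be sketched as follows. The decision matrix and criterion weights are hypothetical (the paper's engine criteria and values are not reproduced here), and all criteria are treated as benefit criteria for simplicity:

```python
import numpy as np

def linear_max(col):      # x / max(x)
    return col / col.max()

def linear_max_min(col):  # (x - min) / (max - min)
    return (col - col.min()) / (col.max() - col.min())

def linear_sum(col):      # x / sum(x)
    return col / col.sum()

def vector_norm(col):     # x / ||x||_2
    return col / np.sqrt((col ** 2).sum())

def wsm_scores(matrix, weights, normalize):
    # Weighted Sum Method: normalize each criterion column, then take
    # the weighted sum across criteria for each alternative.
    cols = [normalize(matrix[:, j]) for j in range(matrix.shape[1])]
    return (np.column_stack(cols) * weights).sum(axis=1)

# Hypothetical decision matrix: 3 alternatives x 2 benefit criteria
M = np.array([[0.9, 10.0], [0.7, 30.0], [0.8, 20.0]])
w = np.array([0.6, 0.4])
print(wsm_scores(M, w, linear_max))
```

Swapping `linear_max` for one of the other normalization functions can reorder the alternatives, which is exactly the sensitivity the paper investigates.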
4

Himsar, Himsar. "Payment System Liquidity Index." Talenta Conference Series: Energy and Engineering (EE) 1, no. 2 (2018): 196–210. http://dx.doi.org/10.32734/ee.v1i2.250.

Abstract:
ISSP is an index that demonstrates the payment system's stability, capturing its liquidity (ISLSP) and its operational capability (IOSP). It was formed using two methods: statistical normalization and conversion using empirical Min-Max normalization. This paper intends to evaluate the variables used in forming the ISLSP and to serve as a tool for ensuring the data's sensitivity to important events. To obtain an ISLSP that is sensitive to the RTGS liquidity condition, we use the coefficient of each weighted variable, obtained through simultaneous regression. The resulting parameters are used as weights for each variable. Based on observation of these weighted variables, liquidity variables contribute 60%, PUAB contributes 30%, and interconnectedness contributes 10% in forming the ISLSP.
5

Shantal, Mohammed, Zalinda Othman, and Azuraliza Abu Bakar. "A Novel Approach for Data Feature Weighting Using Correlation Coefficients and Min–Max Normalization." Symmetry 15, no. 12 (2023): 2185. http://dx.doi.org/10.3390/sym15122185.

Abstract:
In the realm of data analysis and machine learning, achieving an optimal balance of feature importance, known as feature weighting, plays a pivotal role, especially when considering the nuanced interplay between the symmetry of data distribution and the need to assign differential weights to individual features. Also, avoiding the dominance of large-scale traits is essential in data preparation. This step makes choosing an effective normalization approach one of the most challenging aspects of machine learning. In addition to normalization, feature weighting is another strategy to deal with the importance of the different features. One of the strategies to measure the dependency of features is the correlation coefficient. The correlation between features shows the relationship strength between the features. The integration of the normalization method with feature weighting in data transformation for classification has not been extensively studied. The goal is to improve the accuracy of classification methods by striking a balance between the normalization step and assigning greater importance to features with a strong relation to the class feature. To achieve this, we combine Min–Max normalization and weight the features by increasing their values based on their correlation coefficients with the class feature. This paper presents a proposed Correlation Coefficient with Min–Max Weighted (CCMMW) approach. The data being normalized depends on their correlation with the class feature. Logistic regression, support vector machine, k-nearest neighbor, neural network, and naive Bayesian classifiers were used to evaluate the proposed method. Twenty UCI Machine Learning Repository and Kaggle datasets with numerical values were also used in this study. The empirical results showed that the proposed CCMMW significantly improves the classification performance through support vector machine, logistic regression, and neural network classifiers in most datasets.
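The abstract describes the core of CCMMW: min-max normalize each feature, then enlarge features in proportion to their correlation with the class feature. A minimal sketch, where the exact boost formula `1 + |r|` is an assumption rather than the paper's published weighting:

```python
import numpy as np

def ccmmw_transform(X, y, eps=1e-12):
    """Sketch of a Correlation Coefficient with Min-Max Weighted
    (CCMMW) transform: min-max normalize each feature, then scale it
    up in proportion to the absolute Pearson correlation between the
    feature and the class label. The `1 + |r|` boost is illustrative;
    the paper's weighting formula may differ.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    out = np.empty_like(X)
    for j in range(X.shape[1]):
        col = X[:, j]
        scaled = (col - col.min()) / (col.max() - col.min() + eps)
        r = abs(np.corrcoef(col, y)[0, 1])  # strength of relation to the class
        out[:, j] = scaled * (1.0 + r)      # boost strongly correlated features
    return out

# Toy data: feature 0 matches the class exactly, feature 1 only weakly
X = np.array([[0.0, 1.0], [1.0, 2.0], [0.0, 3.0], [1.0, 4.0]])
y = np.array([0, 1, 0, 1])
print(ccmmw_transform(X, y))
```

After the transform, a distance-based classifier such as k-nearest neighbor naturally pays more attention to the boosted, class-relevant features.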
6

Hafs, Toufik, Hatem Zehir, and Ali Hafs. "Enhancing Recognition in Multimodal Biometric Systems: Score Normalization and Fusion of Online Signatures and Fingerprints." Romanian Journal of Information Science and Technology 2024, no. 1 (2024): 37–49. http://dx.doi.org/10.59277/romjist.2024.1.03.

Abstract:
Multimodal biometrics employs multiple modalities within a single system to address the limitations of unimodal systems, such as incomplete data acquisition or deliberate fraud, while enhancing recognition accuracy. This study explores score normalization and its impact on system performance. To fuse scores effectively, prior normalization is necessary, followed by a weighted sum fusion technique that aligns impostor and genuine scores within a common range. Experiments conducted on three biometric databases demonstrate the promising efficacy of the proposed approach, particularly when combined with Empirical Modal Decomposition (EMD). The fusion system exhibits strong performance, with the best outcome achieved by merging the online signature and fingerprint modalities, resulting in a normalized Min-Max score-based Equal Error Rate (EER) of 1.69%.
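Min-max score normalization followed by weighted-sum fusion, as described here, can be sketched as below. The fusion weight and the score values are illustrative; in practice the min/max bounds would be estimated from training-set impostor and genuine scores rather than from the batch itself:

```python
import numpy as np

def minmax_scores(scores, lo=None, hi=None):
    """Map raw matcher scores into a common [0, 1] range. `lo`/`hi`
    default to the batch extremes here; a deployed system would fix
    them from training statistics."""
    scores = np.asarray(scores, dtype=float)
    lo = scores.min() if lo is None else lo
    hi = scores.max() if hi is None else hi
    return (scores - lo) / (hi - lo)

def fuse(sig_scores, fp_scores, w_sig=0.5):
    # Weighted-sum fusion of two normalized modality scores; the 0.5
    # weight is an assumption, not the paper's tuned value.
    return w_sig * minmax_scores(sig_scores) + (1 - w_sig) * minmax_scores(fp_scores)

sig = [0.0, 5.0, 10.0]    # raw online-signature scores (toy values)
fp = [0.0, 50.0, 100.0]   # raw fingerprint scores (toy values)
print(fuse(sig, fp))
```

Normalizing first is essential: without it, the modality with the larger raw score range would dominate the weighted sum.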
7

Nayak, Dillip Ranjan, Neelamadhab Padhy, Pradeep Kumar Mallick, Mikhail Zymbler, and Sachin Kumar. "Brain Tumor Classification Using Dense Efficient-Net." Axioms 11, no. 1 (2022): 34. http://dx.doi.org/10.3390/axioms11010034.

Abstract:
Brain tumors are most common in children and the elderly. It is a serious form of cancer caused by uncontrollable brain cell growth inside the skull. Tumor cells are notoriously difficult to classify due to their heterogeneity. Convolutional neural networks (CNNs) are the most widely used machine learning algorithm for visual learning and brain tumor recognition. This study proposed a CNN-based dense EfficientNet using min-max normalization to classify 3260 T1-weighted contrast-enhanced brain magnetic resonance images into four categories (glioma, meningioma, pituitary, and no tumor). The developed network is a variant of EfficientNet with dense and drop-out layers added. Similarly, the authors combined data augmentation with min-max normalization to increase the contrast of tumor cells. The benefit of the dense CNN model is that it can accurately categorize a limited database of pictures. As a result, the proposed approach provides exceptional overall performance. The experimental results indicate that the proposed model was 99.97% accurate during training and 98.78% accurate during testing. With high accuracy and a favorable F1 score, the newly designed EfficientNet CNN architecture can be a useful decision-making tool in the study of brain tumor diagnostic tests.
8

Patanavijit, Vorapoj. "Denoising performance analysis of adaptive decision based inverse distance weighted interpolation (DBIDWI) algorithm for salt and pepper noise." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 2 (2019): 804. http://dx.doi.org/10.11591/ijeecs.v15.i2.pp804-813.

Abstract:
Due to its superior performance in denoising images contaminated by impulsive noise, the adaptive decision based inverse distance weighted interpolation (DBIDWI) algorithm, proposed in 2017, is one of the most dominant and successful denoising algorithms. However, the DBIDWI algorithm is not suited to denoising full dynamic intensity range images, which contain min or max intensities. Consequently, this research article aims to study the performance and limitations of the DBIDWI algorithm when it is applied both to general images and to images comprising min or max intensities. In these simulation experiments, six noisy images (Lena, Mobile, Pepper, Pentagon, Girl and Resolution) under salt-and-pepper noise are used to evaluate the performance and limitations of the DBIDWI algorithm from a denoised image quality (PSNR) perspective.
9

Lu, Kuan, Song Gao, Pang Xiangkun, Zhu Lingkai, Xiangrong Meng, and Wenxue Sun. "Multi-layer Long Short-term Memory based Condenser Vacuum Degree Prediction Model on Power Plant." E3S Web of Conferences 136 (2019): 01012. http://dx.doi.org/10.1051/e3sconf/201913601012.

Abstract:
A multi-layer LSTM (long short-term memory) model is proposed for condenser vacuum degree prediction in power plants. Firstly, min-max normalization is used to pre-process the input data. Then, the model adopts a two-layer LSTM architecture to identify the time-series pattern effectively. The ADAM (adaptive moment) optimizer is selected to find the optimum parameters for the model during training. Under the proposed forecasting framework, experiments illustrate that the two-layer LSTM model gives a more accurate forecast of the condenser vacuum degree than a simple RNN (recurrent neural network) and a one-layer LSTM model.
10

Sharma, Nikhil, Prateek Jeet Singh Sohi, and Bharat Garg. "An Adaptive Weighted Min-Mid-Max Value Based Filter for Eliminating High Density Impulsive Noise." Wireless Personal Communications 119, no. 3 (2021): 1975–92. http://dx.doi.org/10.1007/s11277-021-08314-5.


Conference papers on the topic "Weighted adaptive min-max normalization"

1

Patel, Chetan, Aarsh Pandey, Rajesh Wadhvani, and Deepali Patil. "Forecasting Nonstationary Wind Data Using Adaptive Min-Max Normalization." In 2022 1st International Conference on Sustainable Technology for Power and Energy Systems (STPES). IEEE, 2022. http://dx.doi.org/10.1109/stpes54845.2022.10006473.

