Academic literature on the topic 'Class imbalance'

Generate an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Class imbalance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Class imbalance"

1

Hosen, Md Saikat, and Sai Srujan Gutlapalli. "A Study of Innovative Class Imbalance Dataset Software Defect Prediction Methods." Asian Journal of Applied Science and Engineering 10, no. 1 (December 10, 2021): 52–55. http://dx.doi.org/10.18034/ajase.v10i1.52.

Full text
Abstract:
Data mining for software defect prediction is the best approach for detecting problematic modules. Existing classification methods can speed up knowledge discovery on class-balanced datasets, but real-world data are rarely balanced: one class dominates the other, producing class-imbalanced (skewed) data sources. As class imbalance increases, the fault prediction rate decreases. For class-imbalanced data streams, the suggested algorithms use novel oversampling and under-sampling strategies to remove noisy and weak examples from both the majority and minority classes. We test three techniques on class-imbalanced software defect datasets using four assessment measures. Results indicate that class-imbalanced software defect datasets can be handled effectively.
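The abstract describes oversampling and under-sampling only at a high level; as a rough illustration of the general idea (not the paper's algorithms), here is a minimal sketch of random oversampling and undersampling in plain Python, with hypothetical data:

```python
import random

def oversample(data, labels, minority, seed=0):
    """Randomly duplicate minority-class examples until classes are equal."""
    rng = random.Random(seed)
    majority_n = sum(1 for y in labels if y != minority)
    pairs = list(zip(data, labels))
    minority_pairs = [p for p in pairs if p[1] == minority]
    while sum(1 for _, y in pairs if y == minority) < majority_n:
        pairs.append(rng.choice(minority_pairs))
    return [x for x, _ in pairs], [y for _, y in pairs]

def undersample(data, labels, majority, seed=0):
    """Randomly drop majority-class examples until classes are equal."""
    rng = random.Random(seed)
    minority_n = sum(1 for y in labels if y != majority)
    majority_pairs = [(x, y) for x, y in zip(data, labels) if y == majority]
    keep = rng.sample(majority_pairs, minority_n)
    pairs = keep + [(x, y) for x, y in zip(data, labels) if y != majority]
    return [x for x, _ in pairs], [y for _, y in pairs]
```

Real toolkits add refinements (synthetic interpolation, noise filtering), but the core trade-off is visible even here: oversampling repeats minority information, while undersampling discards majority information.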
APA, Harvard, Vancouver, ISO, and other styles
2

Dube, Lindani, and Tanja Verster. "Enhancing classification performance in imbalanced datasets: A comparative analysis of machine learning models." Data Science in Finance and Economics 3, no. 4 (2023): 354–79. http://dx.doi.org/10.3934/dsfe.2023021.

Full text
Abstract:
In the realm of machine learning, where data-driven insights guide decision-making, addressing the challenges posed by class imbalance in datasets has emerged as a crucial concern. The effectiveness of classification algorithms hinges not only on their intrinsic capabilities but also on their adaptability to uneven class distributions, a common issue encountered across diverse domains. This study delves into the intricate interplay between varying class imbalance levels and the performance of ten distinct classification models, unravelling the critical impact of this imbalance on the landscape of predictive analytics. Results showed that random forest (RF) and decision tree (DT) models outperformed others, exhibiting robustness to class imbalance. Logistic regression (LR), stochastic gradient descent classifier (SGDC) and naïve Bayes (NB) models struggled with imbalanced datasets. Adaptive boosting (ADA), gradient boosting (GB), extreme gradient boosting (XGB), light gradient boosting machine (LGBM), and k-nearest neighbour (kNN) models improved with balanced data. Adaptive synthetic sampling (ADASYN) yielded more reliable predictions than the under-sampling (UNDER) technique. This study provides insights for practitioners and researchers dealing with imbalanced datasets, guiding model selection and data balancing techniques. RF and DT models demonstrate superior performance, while LR, SGDC and NB models have limitations. By leveraging the strengths of RF and DT models and addressing class imbalance, classification performance in imbalanced datasets can be enhanced. This study enriches credit risk modelling literature by revealing how class imbalance impacts default probability estimation. The research deepens our understanding of class imbalance's critical role in predictive analytics. Serving as a roadmap for practitioners and researchers dealing with imbalanced data, the findings guide model selection and data balancing strategies, enhancing classification performance despite class imbalance.
3

Zhang, Linbin, Caiguang Zhang, Sinong Quan, Huaxin Xiao, Gangyao Kuang, and Li Liu. "A Class Imbalance Loss for Imbalanced Object Recognition." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020): 2778–92. http://dx.doi.org/10.1109/jstars.2020.2995703.

Full text
4

Xue, Jie, and Jinwei Ma. "Extreme Sample Imbalance Classification Model Based on Sample Skewness Self-Adaptation." Symmetry 15, no. 5 (May 14, 2023): 1082. http://dx.doi.org/10.3390/sym15051082.

Full text
Abstract:
This paper aims to solve the asymmetric problem of sample classification recognition under extreme class imbalance. Inspired by Krawczyk's (2016) improvement direction for extreme sample imbalance classification, this paper adopts the AdaBoost model framework to optimize the sample weight update function in each iteration. This weight update not only takes into account the sampling weights of misclassified samples, but also pays more attention to the classification of misclassified minority-class samples. It thereby makes the model more adaptable to imbalanced class distributions, including extreme imbalance, makes the weight adjustment for hard-to-classify samples more adaptive, and generates symmetry between the minority and majority samples by adjusting the class distribution of the datasets. On this basis, an imbalance boosting model, Imbalance AdaBoost (ImAdaBoost), is constructed. In the experimental design stage, the ImAdaBoost model is compared with the original model and mainstream imbalance classification models on imbalanced datasets with different ratios, including an extreme imbalanced dataset. The results show that the ImAdaBoost model has good minority-class recall in the weakly extreme and general class imbalance sets. In addition, the average minority-class recall of the mainstream imbalance classification models is 7% lower than that of the ImAdaBoost model on the weakly extreme imbalance set. The ImAdaBoost model keeps the minority-class recall at the middle level of the comparison models, and its F1-score performs well, demonstrating strong stability of minority-class classification on extreme imbalanced datasets.
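The exact ImAdaBoost weight update is not given in the abstract; a plausible sketch of an AdaBoost-style update that additionally inflates the weight of misclassified minority-class samples might look like this (the `boost` factor is an assumption for illustration, not the paper's formulation):

```python
import math

def update_weights(weights, labels, predictions, alpha, minority, boost=1.5):
    """AdaBoost-style weight update that additionally inflates the weights of
    misclassified minority-class samples (`boost` is a hypothetical knob)."""
    new_w = []
    for w, y, p in zip(weights, labels, predictions):
        # standard AdaBoost: up-weight mistakes, down-weight correct samples
        factor = math.exp(alpha) if y != p else math.exp(-alpha)
        if y == minority and y != p:
            factor *= boost  # extra emphasis on minority-class mistakes
        new_w.append(w * factor)
    total = sum(new_w)
    return [w / total for w in new_w]  # renormalise to a distribution
```

After the update, a misclassified minority sample carries more weight than a misclassified majority sample, steering the next weak learner toward the rare class.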
5

Munguía Mondragón, Julio Cesar, Eréndira Rendón Lara, Roberto Alejo Eleuterio, Everardo Efrén Granda Gutiérrez, and Federico Del Razo López. "Density-Based Clustering to Deal with Highly Imbalanced Data in Multi-Class Problems." Mathematics 11, no. 18 (September 21, 2023): 4008. http://dx.doi.org/10.3390/math11184008.

Full text
Abstract:
In machine learning and data mining applications, an imbalanced distribution of classes in the training dataset can drastically affect the performance of learning models. The class imbalance problem is frequently observed during classification tasks in real-world scenarios when the available instances of one class are much fewer than the amount of data available in other classes. Machine learning algorithms that do not consider the class imbalance can introduce a strong bias towards the majority class, while the minority class is usually disregarded. Thus, sampling techniques have been extensively used in various studies to overcome class imbalance, mainly based on random undersampling and oversampling methods. However, there is still no definitive solution, especially in the domain of multi-class problems. A strategy that combines density-based clustering algorithms with random undersampling and oversampling techniques is studied in this work. To analyze the performance of the studied method, an experimental validation was carried out on a collection of hyperspectral remote sensing images, with a deep learning neural network utilized as the classifier. This data bank contains six datasets with different imbalance ratios, from slight to severe. In the experiments, the studied method outperforms other state-of-the-art methods in terms of the geometric mean of the precision, mainly for highly imbalanced datasets.
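As a rough sketch of the general idea of combining clustering with undersampling (not the paper's actual method), one could summarise the majority class with a tiny k-means and draw samples evenly from the clusters, so dense regions are not over-represented; all names and data here are hypothetical:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Tiny 1-D k-means used to summarise the majority class."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

def cluster_undersample(majority, minority, seed=0):
    """Keep about as many majority points as minority points, drawn one
    per k-means cluster so that each dense region is represented once."""
    k = min(len(minority), len(majority))
    _, clusters = kmeans_1d(majority, k, seed=seed)
    rng = random.Random(seed)
    kept = [rng.choice(c) for c in clusters if c]
    return kept[:len(minority)]
```

Density-based clustering (as in the paper) would replace the k-means step, but the sampling logic is analogous.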
6

Lango, Mateusz. "Tackling the Problem of Class Imbalance in Multi-class Sentiment Classification: An Experimental Study." Foundations of Computing and Decision Sciences 44, no. 2 (June 1, 2019): 151–78. http://dx.doi.org/10.2478/fcds-2019-0009.

Full text
Abstract:
Sentiment classification is an important task which has gained extensive attention both in academia and in industry. Many issues related to this task, such as the handling of negation or of sarcastic utterances, were analyzed and accordingly addressed in previous works. However, the issue of class imbalance, which often compromises the prediction capabilities of learning algorithms, was scarcely studied. In this work, we aim to bridge the gap between imbalanced learning and sentiment analysis. An experimental study including twelve imbalanced learning preprocessing methods, four feature representations, and a dozen datasets is carried out in order to analyze the usefulness of imbalanced learning methods for sentiment classification. Moreover, the data difficulty factors commonly studied in imbalanced learning are investigated on sentiment corpora to evaluate the impact of class imbalance.
7

Juba, Brendan, and Hai S. Le. "Precision-Recall versus Accuracy and the Role of Large Data Sets." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4039–48. http://dx.doi.org/10.1609/aaai.v33i01.33014039.

Full text
Abstract:
Practitioners of data mining and machine learning have long observed that the imbalance of classes in a data set negatively impacts the quality of classifiers trained on that data. Numerous techniques for coping with such imbalances have been proposed, but nearly all lack any theoretical grounding. By contrast, the standard theoretical analysis of machine learning admits no dependence on the imbalance of classes at all. The basic theorems of statistical learning establish the number of examples needed to estimate the accuracy of a classifier as a function of its complexity (VC-dimension) and the confidence desired; the class imbalance does not enter these formulas anywhere. In this work, we consider classifier performance in terms of precision and recall, measures widely suggested as more appropriate for the classification of imbalanced data. We observe that whenever the precision is moderately large, the worse of the precision and recall is within a small constant factor of the accuracy weighted by the class imbalance. A corollary of this observation is that a larger number of examples is necessary and sufficient to address class imbalance, a finding we also illustrate empirically.
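The paper's motivation can be illustrated with a toy computation: on an imbalanced set, a classifier that always predicts the majority class achieves high accuracy yet zero precision and recall on the minority class. A minimal sketch:

```python
def prf(labels, preds, positive=1):
    """Precision and recall for the positive (minority) class."""
    tp = sum(1 for y, p in zip(labels, preds) if y == positive and p == positive)
    fp = sum(1 for y, p in zip(labels, preds) if y != positive and p == positive)
    fn = sum(1 for y, p in zip(labels, preds) if y == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def accuracy(labels, preds):
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)
```

With 95 negatives and 5 positives, the always-negative classifier scores 95% accuracy while both precision and recall for the positive class are zero, which is exactly why precision-recall analysis is preferred under imbalance.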
8

Hartono, Hartono, Erianto Ongko, and Yeni Risyani. "Combining feature selection and hybrid approach redefinition in handling class imbalance and overlapping for multi-class imbalanced." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 3 (March 10, 2021): 1513. http://dx.doi.org/10.11591/ijeecs.v21.i3.pp1513-1522.

Full text
Abstract:
The classification process can involve class imbalance problems, where the uneven distribution of instances causes poor performance; overlapping problems cause further performance degradation. This paper proposes a method that combines feature selection and the hybrid approach redefinition (HAR) method to handle class imbalance and overlapping in multi-class imbalanced problems. HAR is a hybrid ensemble method for handling the class imbalance problem. The main contribution of this work is a new method that can overcome the problems of class imbalance and overlapping in the multi-class imbalance setting, giving better results in terms of classifier performance and overlap degrees in multi-class problems. This is achieved by improving an ensemble learning algorithm and a preprocessing technique in HAR using minimizing overlapping selection under SMOTE (MOSS), a popular feature selection method for handling overlapping. To validate the accuracy of the proposed method, this research uses augmented R-value, mean AUC, mean F-measure, mean G-mean, and mean precision. The performance of the model is evaluated against the hybrid method (MBP+CGE), a popular method for handling class imbalance and overlapping in multi-class imbalanced problems. The proposed method is found to be superior in classifier performance, as indicated by better mean AUC, F-measure, G-mean, and precision.
9

Dube, Lindani, and Tanja Verster. "Interpretability of the random forest model under class imbalance." Data Science in Finance and Economics 4, no. 3 (2024): 446–68. http://dx.doi.org/10.3934/dsfe.2024019.

Full text
Abstract:
In predictive modeling, addressing class imbalance is a critical concern, particularly in applications where certain classes are disproportionately represented. This study delved into the implications of class imbalance on the interpretability of the random forest models. Class imbalance is a common challenge in machine learning, particularly in domains where certain classes are under-represented. This study investigated the impact of class imbalance on random forest model performance in churn and fraud detection scenarios. We trained and evaluated random forest models on churn datasets with class imbalances ranging from 20% to 50% and fraud datasets with imbalances from 1% to 15%. The results revealed consistent improvements in the precision, recall, F1-score, and accuracy as class imbalance decreases, indicating that models become more precise and accurate in identifying rare events with balanced datasets. Additionally, we employed interpretability techniques such as Shapley values, partial dependence plots (PDPs), and breakdown plots to elucidate the effect of class imbalance on model interpretability. Shapley values showed varying feature importance across different class distributions, with a general decrease as datasets became more balanced. PDPs illustrated a consistent upward trend in estimated values as datasets approached balance, indicating consistent relationships between input variables and predicted outcomes. Breakdown plots highlighted significant changes in individual predictions as class imbalance varied, underscoring the importance of considering class distribution in interpreting model outputs. These findings contribute to our understanding of the complex interplay between class balance, model performance, and interpretability, offering insights for developing more robust and reliable predictive models in real-world applications.
10

Lin, Hsien-I., and Mihn Cong Nguyen. "Boosting Minority Class Prediction on Imbalanced Point Cloud Data." Applied Sciences 10, no. 3 (February 2, 2020): 973. http://dx.doi.org/10.3390/app10030973.

Full text
Abstract:
Data imbalance during the training of deep networks can cause the network to neglect minority classes. This paper presents a novel framework by which to train segmentation networks using imbalanced point cloud data. PointNet, an early deep network used for the segmentation of point cloud data, proved effective in the point-wise classification of balanced data; however, performance degraded when imbalanced data was used. The proposed approach involves removing between-class data point imbalances and guiding the network to pay more attention to minority classes. Data imbalance is alleviated using a hybrid-sampling method involving undersampling and oversampling, respectively, to decrease the amount of data in majority classes and increase the amount of data in minority classes. A balanced focus loss function is also used to emphasize the minority classes through the automated assignment of costs to the various classes based on their density in the point cloud. Experiments demonstrate the effectiveness of the proposed training framework on a point cloud dataset pertaining to six objects. The mean intersection over union (mIoU) test accuracy results obtained using standard PointNet training were 91% on XYZRGB data and 86% on XYZ data; with the proposed scheme, they were 98% on XYZRGB data and 93% on XYZ data.
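The "balanced focus loss" is not specified in the abstract; a class-weighted focal loss in the spirit of Lin et al.'s focal loss is one plausible reading (the class weights and `gamma` value here are assumptions for illustration):

```python
import math

def focal_loss(prob, target, class_weight, gamma=2.0):
    """Class-weighted focal loss for a single binary prediction.
    prob: predicted probability of class 1; target: 0 or 1.
    class_weight[c] should be larger for the sparser class."""
    p_t = prob if target == 1 else 1.0 - prob
    p_t = min(max(p_t, 1e-12), 1.0 - 1e-12)  # clamp for numerical safety
    # (1 - p_t)**gamma shrinks the loss on easy, confident examples,
    # so training focuses on hard (often minority-class) points
    return -class_weight[target] * (1.0 - p_t) ** gamma * math.log(p_t)
```

Doubling a class's weight doubles its loss, and confidently correct examples contribute almost nothing, which is the mechanism by which such losses emphasize rare classes.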
More sources

Dissertations / Theses on the topic "Class imbalance"

1

Wang, Shuo. "Ensemble diversity for class imbalance learning." Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/1793/.

Full text
Abstract:
This thesis studies the diversity issue of classification ensembles for class imbalance learning problems. Class imbalance learning refers to learning from imbalanced data sets, in which some classes of examples (minority) are highly under-represented compared to other classes (majority). The very skewed class distribution degrades the learning ability of many traditional machine learning methods, especially in the recognition of examples from the minority classes, which are often deemed to be more important and interesting. Although quite a few ensemble learning approaches have been proposed to handle the problem, no in-depth research exists to explain why and when they can be helpful. Our objectives are to understand how ensemble diversity affects the classification performance for a class imbalance problem according to single-class and overall performance measures, and to make best use of diversity to improve the performance. As the first stage, we study the relationship between ensemble diversity and generalization performance for class imbalance problems. We investigate mathematical links between single-class performance and ensemble diversity. It is found that how the single-class measures change along with diversity falls into six different situations. These findings are then verified in class imbalance scenarios through empirical studies. The impact of diversity on overall performance is also investigated empirically. Strong correlations between diversity and the performance measures are found. Diversity shows a positive impact on the recognition of the minority class and benefits the overall performance of ensembles in class imbalance learning. Our results help to understand if and why ensemble diversity can help to deal with class imbalance problems. 
Encouraged by the positive role of diversity in class imbalance learning, we then focus on a specific ensemble learning technique, the negative correlation learning (NCL) algorithm, which considers diversity explicitly when creating ensembles and has achieved great empirical success. We propose a new learning algorithm based on the idea of NCL, named AdaBoost.NC, for classification problems. An "ambiguity" term decomposed from the 0-1 error function is introduced into the training framework of AdaBoost. It demonstrates superiority in both effectiveness and efficiency. Its good generalization performance is explained by theoretical and empirical evidence. It can be viewed as the first NCL algorithm specializing in classification problems. Most existing ensemble methods for class imbalance problems suffer from the problems of overfitting and over-generalization. To improve this situation, we address the class imbalance issue by making use of ensemble diversity. We investigate the generalization ability of NCL algorithms, including AdaBoost.NC, to tackle two-class imbalance problems. We find that NCL methods integrated with random oversampling are effective in recognizing minority class examples without losing the overall performance, especially the AdaBoost.NC tree ensemble. This is achieved by providing smoother and less overfitting classification boundaries for the minority class. The results here show the usefulness of diversity and open up a novel way to deal with class imbalance problems. Since the two-class imbalance is not the only scenario in real-world applications, multi-class imbalance problems deserve equal attention. To understand what problems multi-class settings can cause and how they affect the classification performance, we study the multi-class difficulty by analyzing the multi-minority and multi-majority cases respectively. Both lead to a significant performance reduction. The multi-majority case appears to be more harmful. 
The results reveal possible issues that a class imbalance learning technique could have when dealing with multi-class tasks. Following this part of analysis and the promising results of AdaBoost.NC on two-class imbalance problems, we apply AdaBoost.NC to a set of multi-class imbalance domains with the aim of solving them effectively and directly. Our method shows good generalization in minority classes and balances the performance across different classes well without using any class decomposition schemes. Finally, we conclude this thesis with how the study has contributed to class imbalance learning and ensemble learning, and propose several possible directions for future research that may improve and extend this work.
2

Nataraj, Vismitha, and Sushmitha Narayanan. "Resolving Class Imbalance using Generative Adversarial Networks." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-41405.

Full text
3

Tran, Quang Duc. "One-class classification : an approach to handle class imbalance in multimodal biometric authentication." Thesis, City, University of London, 2014. http://openaccess.city.ac.uk/19662/.

Full text
Abstract:
Biometric verification is the process of authenticating a person's identity using his/her physiological and behavioural characteristics. It is well-known that multimodal biometric systems can further improve the authentication accuracy by combining information from multiple biometric traits at various levels, namely sensor, feature, match score and decision levels. Fusion at match score level is generally preferred due to the trade-off between information availability and fusion complexity. However, combining match scores poses a number of challenges, when treated as a two-class classification problem due to the highly imbalanced class distributions. Most conventional classifiers assume equally balanced classes. They do not work well when samples of one class vastly outnumber the samples of the other class. These challenges become even more significant, when the fusion is based on user-specific processing due to the limited availability of the genuine samples per user. This thesis aims at exploring the paradigm of one-class classification to advance the classification performance of imbalanced biometric data sets. The contributions of the research can be enumerated as follows. Firstly, a thorough investigation of the various one-class classifiers, including Gaussian Mixture Model, k-Nearest Neighbour, K-means clustering and Support Vector Data Description, has been provided. These classifiers are applied in learning the user-specific and user-independent descriptions for the biometric decision inference. It is demonstrated that the one-class classifiers are particularly useful in handling the imbalanced learning problem in multimodal biometric authentication. User-specific approach is a better alternative with respect to user-independent counterpart because it is able to overcome the so-called within-class sub-concepts problem, which arises very often in multimodal biometric systems due to the existence of user variation. 
Secondly, a novel adapted score fusion scheme that consists of one-class classifiers and is trained using both the genuine user and impostor samples has been proposed. This method also replaces user-independent by user-specific description to learn the characteristics of the impostor class, and thus, reducing the degree of imbalanced proportion of data for different classes. Extensive experiments are conducted on the BioSecure DS2 and XM2VTS databases to illustrate the potential of the proposed adapted score fusion scheme, which provides a relative improvement in terms of Equal Error Rate of 32% and 20% as compared to the standard sum of scores and likelihood ratio based score fusion, respectively. Thirdly, a hybrid boosting algorithm, called r-ABOC has been developed, which is capable of exploiting the natural capabilities of both the well-known Real AdaBoost and one-class classification to further improve the system performance without causing overfitting. However, unlike the conventional Real AdaBoost, the individual classifiers in the proposed schema are trained on the same data set, but with different parameter choices. This does not only generate a high diversity, which is vital to the success of r-ABOC, but also reduces the number of user-specified parameters. A comprehensive empirical study using the BioSecure DS2 and XM2VTS databases demonstrates that r-ABOC may achieve a performance gain in terms of Half Total Error Rate of up to 28% with respect to other state-of-the-art biometric score fusion techniques. Finally, a Robust Imputation based on Group Method of Data Handling (RIBG) has been proposed to handle the missing data problem in the BioSecure DS2 database. RIBG is able to provide accurate predictions of incomplete score vectors. It is observed to achieve a better performance with respect to the state-of-the-art imputation techniques, including mean, median and k-NN imputations. 
An important feature of RIBG is that it does not require any parameter fine-tuning, and hence, is amenable to immediate application.
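As an illustration of the one-class idea (a simplified stand-in for the thesis's GMM, k-NN, K-means, and SVDD learners, not its actual models), a centroid-distance one-class classifier over 1-D match scores might look like this, with hypothetical data and a hypothetical acceptance quantile:

```python
def fit_one_class(train, quantile=0.95):
    """Learn the target (e.g. genuine-user) class only: a centroid plus a
    distance threshold covering `quantile` of the training distances."""
    centroid = sum(train) / len(train)
    dists = sorted(abs(x - centroid) for x in train)
    threshold = dists[min(len(dists) - 1, int(quantile * len(dists)))]
    return centroid, threshold

def predict_one_class(model, x):
    """True if x is accepted as belonging to the learned class."""
    centroid, threshold = model
    return abs(x - centroid) <= threshold
```

No impostor samples are needed to fit the model, which is precisely why one-class methods sidestep the imbalance between abundant impostor scores and scarce genuine scores.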
4

SENG, Kruy. "Cost-sensitive deep neural network ensemble for class imbalance problem." Digital Commons @ Lingnan University, 2018. https://commons.ln.edu.hk/otd/32.

Full text
Abstract:
In data mining, classification is a task to build a model which classifies data into a given set of categories. Most classification algorithms assume the class distribution of data to be roughly balanced. In real-life applications such as direct marketing, fraud detection and churn prediction, the class imbalance problem usually occurs. The class imbalance problem refers to the situation in which the number of examples belonging to one class is significantly greater than the numbers in the others. When training a standard classifier with class-imbalanced data, the classifier is usually biased toward the majority class. However, the minority class is the class of interest and more significant than the majority class. In the literature, existing methods such as data-level, algorithmic-level and cost-sensitive learning have been proposed to address this problem. The experiments discussed in these studies were usually conducted on relatively small data sets or even on artificial data. The performance of the methods on modern real-life data sets, which are more complicated, is unclear. In this research, we study the background and some of the state-of-the-art approaches which handle the class imbalance problem. We also propose two cost-sensitive methods to address the class imbalance problem, namely Cost-Sensitive Deep Neural Network (CSDNN) and Cost-Sensitive Deep Neural Network Ensemble (CSDE). CSDNN is a deep neural network based on Stacked Denoising Autoencoders (SDAE). We propose CSDNN by incorporating cost information of the majority and minority classes into the cost function of SDAE to make it cost-sensitive. Another proposed method, CSDE, is an ensemble learning version of CSDNN which is proposed to improve the generalization performance on the class imbalance problem. In the first step, a deep neural network based on SDAE is created for layer-wise feature extraction. Next, we perform Bagging's resampling procedure with undersampling to split training data into a number of bootstrap samples. 
In the third step, we apply a layer-wise feature extraction method to extract new feature samples from each of the hidden layer(s) of the SDAE. Lastly, the ensemble learning is performed by using each of the new feature samples to train a CSDNN classifier with random cost vector. Experiments are conducted to compare the proposed methods with the existing methods. We examine their performance on real-life data sets in business domains. The results show that the proposed methods obtain promising results in handling class imbalance problem and also outperform all the other compared methods. There are three major contributions to this work. First, we proposed CSDNN method in which misclassification costs are considered in training process. Second, we incorporate random undersampling with layer-wise feature extraction to perform ensemble learning. Third, this is the first work that conducts experiments on class imbalance problem using large real-life data sets in different business domains ranging from direct marketing, churn prediction, credit scoring, fraud detection to fake review detection.
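As a sketch of the cost-sensitive idea (not the thesis's SDAE-based cost function), a cross-entropy in which a missed minority example (false negative) costs more than a false alarm could look like this; the cost values are hypothetical inputs a practitioner would choose:

```python
import math

def cost_sensitive_ce(prob, target, cost_fn, cost_fp):
    """Cross-entropy with asymmetric misclassification costs.
    prob: predicted probability of the minority class (target 1).
    cost_fn penalises missing a minority example; cost_fp a false alarm."""
    prob = min(max(prob, 1e-12), 1.0 - 1e-12)  # clamp for numerical safety
    if target == 1:                 # minority class: weight by cost_fn
        return -cost_fn * math.log(prob)
    return -cost_fp * math.log(1.0 - prob)     # majority class: cost_fp
```

Scaling the minority term by `cost_fn > cost_fp` shifts the loss-minimising decision boundary toward catching minority examples, which is the essence of cost-sensitive training.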
5

Barnabé-Lortie, Vincent. "Active Learning for One-class Classification." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/33001.

Full text
Abstract:
Active learning is a common solution for reducing labeling costs and maximizing the impact of human labeling efforts in binary and multi-class classification settings. However, when we are faced with extreme levels of class imbalance, a situation in which it is not safe to assume that we have a representative sample of the minority class, it has been shown effective to replace the binary classifiers with one-class classifiers. In such a setting, traditional active learning methods, and many previously proposed in the literature for one-class classifiers, prove to be inappropriate, as they rely on assumptions about the data that no longer stand. In this thesis, we propose a novel approach to active learning designed for one-class classification. The proposed method does not rely on many of the inappropriate assumptions of its predecessors and leads to more robust classification performance. The gist of this method consists of labeling, in priority, the instances considered to fit the learned class the least by previous iterations of a one-class classification model. Throughout the thesis, we provide evidence for the merits of our method, then deepen our understanding of these merits by exploring the properties of the method that allow it to outperform the alternatives.
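The selection rule described in the abstract (label, in priority, the instances the one-class model considers to fit the learned class the least) can be sketched in a few lines; the score mapping here is a hypothetical stand-in for a one-class model's fit scores:

```python
def select_queries(scores, batch_size):
    """Pick the unlabelled instances with the lowest one-class fit score,
    i.e. those the current model considers least like the learned class.
    scores: dict mapping instance id -> fit score from a one-class model."""
    ranked = sorted(scores, key=scores.get)  # ascending: worst fit first
    return ranked[:batch_size]
```

The queried labels then feed the next iteration of the one-class model, so each round probes the region the model currently explains worst.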
6

Dutta, Ila. "Data Mining Techniques to Identify Financial Restatements." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37342.

Full text
Abstract:
Data mining is a multi-disciplinary field of science and technology widely used in developing predictive models and data visualization in various domains. Although there are numerous data mining algorithms and techniques across multiple fields, it appears that there is no consensus on the suitability of a particular model, or the ways to address data preprocessing issues. Moreover, the effectiveness of data mining techniques depends on the evolving nature of data. In this study, we focus on the suitability and robustness of various data mining models for analyzing real financial data to identify financial restatements. From data mining perspective, it is quite interesting to study financial restatements for the following reasons: (i) the restatement data is highly imbalanced that requires adequate attention in model building, (ii) there are many financial and non-financial attributes that may affect financial restatement predictive models. This requires careful implementation of data mining techniques to develop parsimonious models, and (iii) the class imbalance issue becomes more complex in a dataset that includes both intentional and unintentional restatement instances. Most of the previous studies focus on fraudulent (or intentional) restatements and the literature has largely ignored unintentional restatements. Intentional (i.e. fraudulent) restatements instances are rare and likely to have more distinct features compared to non-restatement cases. However, unintentional cases are comparatively more prevalent and likely to have fewer distinct features that separate them from non-restatement cases. A dataset containing unintentional restatement cases is likely to have more class overlapping issues that may impact the effectiveness of predictive models. 
In this study, we developed predictive models based on all restatement cases (both intentional and unintentional restatements) using a real, comprehensive and novel dataset which includes 116 attributes and approximately 1,000 restatement and 19,517 non-restatement instances over the period 2009 to 2014. To the best of our knowledge, no other study has developed predictive models for financial restatements using post-financial-crisis events. In order to avoid redundant attributes, we use three feature selection techniques: correlation-based feature subset selection (CfsSubsetEval), information gain attribute evaluation (InfoGainEval), and stepwise forward selection (FwSelect), and generate three datasets with reduced attributes. Our restatement dataset is highly skewed and highly biased towards the non-restatement (majority) class. We applied various algorithms (e.g. random undersampling (RUS), cluster-based undersampling (CUS) (Sobhani et al., 2014), random oversampling (ROS), the synthetic minority oversampling technique (SMOTE) (Chawla et al., 2002), adaptive synthetic sampling (ADASYN) (He et al., 2008), and Tomek links with SMOTE) to address class imbalance in the financial restatement dataset. We perform classification employing six different classifiers: decision tree (DT), artificial neural network (ANN), naïve Bayes (NB), random forest (RF), Bayesian belief network (BBN) and support vector machine (SVM), using 10-fold cross validation, and test the efficiency of the various predictive models using minority-class recall, minority-class F-measure and G-mean. We also experiment with different ensemble methods (bagging and boosting) with the base classifiers and employ other meta-learning algorithms (stacking and cost-sensitive learning) to improve model performance. While applying the cluster-based undersampling technique, we find that various classifiers (e.g. SVM, BBN) show a high success rate in terms of minority-class recall.
For example, the SVM classifier shows a minority recall value of 96%, which is quite encouraging. However, the ability of these classifiers to detect majority class instances is dismal. We find that some variations of synthetic oversampling such as ‘Tomek Link + SMOTE’ and ‘ADASYN’ show promising results in terms of both minority recall value and G-mean. Using the InfoGainEval feature selection method, the RF classifier shows minority recall values of 92.6% for ‘Tomek Link + SMOTE’ and 88.9% for ‘ADASYN’, respectively. The corresponding G-mean values are 95.2% and 94.2% for these two oversampling techniques, which shows that the RF classifier is quite effective in predicting both minority and majority classes. We find further improvement in results for the RF classifier with a cost-sensitive learning algorithm using the ‘Tomek Link + SMOTE’ oversampling technique. Subsequently, we develop some decision rules to detect restatement firms based on a subset of important attributes. To the best of our knowledge, only Kim et al. (2016) perform a data mining study using only pre-financial-crisis restatement data. Kim et al. (2016) employed a matching-sample-based undersampling technique and used logistic regression, SVM and BBN classifiers to develop financial restatement predictive models. The study’s highest reported G-mean is 70%. Our results with clustering-based undersampling are similar to the performance measures reported by Kim et al. (2016). However, our synthetic-oversampling-based results show a better predictive ability. The RF classifier shows a very high degree of predictive capability for minority class instances (97.4%) and a very high G-mean value (95.3%) with cost-sensitive learning. Yet, we recognize that Kim et al. (2016) use a different restatement dataset (with pre-crisis restatement cases) and hence a direct comparison of results may not be fully justified.
Our study makes contributions to the data mining literature by (i) presenting predictive models for financial restatements with a comprehensive dataset, (ii) focusing on various data mining techniques and presenting a comparative analysis, and (iii) addressing the class imbalance issue by identifying the most effective technique. To the best of our knowledge, we used the most comprehensive dataset to develop our predictive models for identifying financial restatements.
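The minority-class recall, minority-class F-measure and G-mean used to evaluate these models can be computed directly from a binary confusion matrix. A minimal stdlib-only sketch (illustrative only, not code from the thesis; label conventions are assumed):

```python
import math

def binary_metrics(y_true, y_pred, minority=1):
    """Minority recall, minority F-measure, and G-mean for a binary
    problem; by assumption, 1 marks the minority class and 0 the majority."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == minority and p == minority)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == minority and p != minority)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != minority and p == minority)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != minority and p != minority)
    recall_min = tp / (tp + fn) if tp + fn else 0.0   # minority-class recall
    recall_maj = tn / (tn + fp) if tn + fp else 0.0   # majority-class recall
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_min = (2 * precision * recall_min / (precision + recall_min)
             if precision + recall_min else 0.0)      # minority F-measure
    g_mean = math.sqrt(recall_min * recall_maj)       # geometric mean of recalls
    return recall_min, f_min, g_mean
```

Because G-mean is the geometric mean of the per-class recalls, a classifier that ignores the minority class scores 0 regardless of overall accuracy, which is why the thesis reports it alongside minority recall.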
APA, Harvard, Vancouver, ISO, and other styles
7

Batuwitage, Manohara Rukshan Kannangara. "Enhanced class imbalance learning methods for support vector machines application to human miRNA gene classification." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mathur, Tanmay. "Improving Classification Results Using Class Imbalance Solutions & Evaluating the Generalizability of Rationale Extraction Techniques." Miami University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=miami1420335486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Iosifidis, Vasileios [Verfasser], and Eirini [Akademischer Betreuer] Ntoutsi. "Semi-supervised learning and fairness-aware learning under class imbalance / Vasileios Iosifidis ; Betreuer: Eirini Ntoutsi." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/1217782168/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bellinger, Colin. "Beyond the Boundaries of SMOTE: A Framework for Manifold-based Synthetic Oversampling." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34643.

Full text
Abstract:
Within machine learning, the problem of class imbalance refers to the scenario in which one or more classes are significantly outnumbered by the others. In the most extreme case, the minority class is not only significantly outnumbered by the majority class, but is also considered to be rare, or absolutely imbalanced. Class imbalance appears in a wide variety of important domains, ranging from oil spill and fraud detection to text classification and medical diagnosis. Given this, it has been deemed one of the ten most important research areas in data mining, and for more than a decade now the machine learning community has been coming together in an attempt to unequivocally solve the problem. The fundamental challenge in the induction of a classifier from imbalanced training data is in managing the prediction bias. The current state-of-the-art methods deal with this by readjusting misclassification costs or by applying resampling methods. In cases of absolute imbalance, these methods are insufficient; rather, it has been observed that we need more training examples. The nature of class imbalance, however, dictates that additional examples cannot be acquired, and thus synthetic oversampling becomes the natural choice. We recognize the importance of selecting algorithms with assumptions and biases that are appropriate for the properties of the target data, and argue that this is of absolute importance when it comes to developing synthetic oversampling methods, because a large generative leap must be made from a relatively small training set. In particular, our research into gamma-ray spectral classification has demonstrated the benefits of incorporating prior knowledge of conformance to the manifold assumption into the synthetic oversampling algorithms. We empirically demonstrate the negative impact of the manifold property on the state-of-the-art methods, and propose a framework for manifold-based synthetic oversampling.
We algorithmically present the generic form of the framework and demonstrate formalizations of it with PCA and the denoising autoencoder. Through use of the helix and swiss roll datasets, which are standards in the manifold learning community, we visualize and qualitatively analyze the benefits of our proposed framework. Moreover, we unequivocally show the framework to be superior on three real-world gamma-ray spectral datasets and on sixteen benchmark UCI datasets in general. Specifically, our results demonstrate that the framework for manifold-based synthetic oversampling produces higher area under the ROC results than the current state-of-the-art and degrades less on data that conforms to the manifold assumption.
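The SMOTE baseline this framework moves beyond generates each synthetic example by interpolating between a minority point and one of its nearest minority neighbours. A stdlib-only sketch of that core step (an assumption-laden illustration, not the thesis's manifold-based method):

```python
import random

def smote_sample(minority, k=2, seed=0):
    """Generate one SMOTE-style synthetic point: pick a minority point,
    find its k nearest minority neighbours (squared Euclidean distance),
    and interpolate toward a randomly chosen neighbour."""
    rng = random.Random(seed)
    base = rng.choice(minority)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted((p for p in minority if p is not base),
                        key=lambda p: dist(base, p))[:k]
    nb = rng.choice(neighbours)
    gap = rng.random()  # interpolation factor in [0, 1)
    return tuple(b + gap * (n - b) for b, n in zip(base, nb))
```

Because every synthetic point lies on a straight segment between two observed points, SMOTE implicitly assumes the class occupies a convex, locally linear region; data concentrated on a curved manifold (such as the helix or swiss roll) violates this, which is the failure mode the framework targets.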
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Class imbalance"

1

Chakraborty, Sanjay, and Lopamudra Dey. Multi-objective, Multi-class and Multi-label Data Classification with Class Imbalance. Singapore: Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-97-9622-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Salick, Roydon. The Novels of Samuel Selvon. Greenwood Publishing Group, Inc., 2001. http://dx.doi.org/10.5040/9798400692314.

Full text
Abstract:
The author of such works as A Brighter Sun (1952), The Lonely Londoners (1956), and The Plains of Caroni (1970), West Indian novelist Samuel Selvon is attracting growing amounts of scholarly attention. Nonetheless, criticism of his works has largely been imbalanced, with most scholarship focusing primarily on his language. This book corrects that imbalance by placing Selvon's novels within historical, sociological, and ideological contexts. A new interpretation of Selvon's achievement as a novelist, the volume looks, for the first time, at his works in terms of categories of novels--peasant, middle-class, and immigrant. The book demonstrates that each category is different from the others, and that novels within categories are similar. Thus it provides a coherent vision of Selvon's canon. It illustrates, as well, the development of Selvon's philosophy of West Indians as peasant, bourgeois, and immigrant. In doing so, it explores the significance of ethnicity in his works and discusses Selvon's imaginative apotheosis of the Indo-Trinidadian peasant and the diminution of the Afro-Trinidadian immigrant. The volume also studies Selvon's fictional and rhetorical techniques and argues that his works range from Bildungsroman to picaresque to epic to satire.
APA, Harvard, Vancouver, ISO, and other styles
3

Watt, Paul. Estate Regeneration and its Discontents. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781447329183.001.0001.

Full text
Abstract:
This book provides a theoretically informed, empirically rich account of the development, causes and consequences of public housing (council/local authority/social) estate regeneration within the context of London’s housing crisis and widening social inequality. It focuses on regeneration schemes involving comprehensive redevelopment – the demolition of council estates and their rebuilding as mixed-tenure neighbourhoods with large numbers of market properties which fuels socio-spatial inequalities via state-led gentrification. The book deploys an interdisciplinary perspective drawn from sociology, geography, urban policy and housing studies. By foregrounding estate residents’ lived experiences – mainly working-class tenants but also working- and middle-class homeowners – it highlights their multiple discontents with the seemingly never-ending regeneration process. As such, the book critiques the imbalances and silences within the official policy discourse in which there are only regeneration winners while the losers are airbrushed out of history. The book contains many illustrations and is based on over a decade of research undertaken at several London council-built estates. The book is divided into three parts. Part One (Chapters 2-4) examines housing policy and urban policy in relation to the expansion and contraction of public housing in London, and the development of estate regeneration. Part Two (Chapters 5-7) analyses residents’ experiences of living at London estates before regeneration begins. It argues that residents positively valued their homes and neighbourhoods, even though such valuation was neither unqualified nor universal. Part Three (Chapters 8-12) examines residents’ experiences of living through regeneration, and argues that comprehensive redevelopment results in degeneration, displacement, and fragmented rather than mixed communities.
APA, Harvard, Vancouver, ISO, and other styles
4

Kalinowski, Thomas. Why International Cooperation is Failing. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198714729.001.0001.

Full text
Abstract:
Ten years after the global financial crisis of 2008/9 there is widespread scepticism about the ability to curb volatile financial markets and about international cooperation in general. Changes in the global rules of finance discussed in the G20 during the last ten years remain limited, and it is doubtful whether they are suitable to help mitigate and manage future crises. This book argues that this failure is not simply the result of bad leadership and a clash of national egoisms but rather the result of a much more fundamental competition of capitalisms. US finance-led, EU integration-led, and East Asian state-led capitalism complement each other globally, but at the same time they have conflicting preferences on how to complement their distinct domestic regulations at the international level. This interdependence of capitalist models is both relatively stable but also prone to crises caused by volatile financial flows, global economic imbalances, and ‘currency wars’. This book shows that regulating international finance is not a technocratic exercise of finetuning the machinery of international institutions but a political process depending on the dynamic of domestic institutions and power relations. If we want to understand international economic cooperation, we need to understand the diversity of domestic dynamics of the different models of capitalism, not just concerning financial markets but also in connected areas such as corporate structure, labour markets, and welfare regimes. Ultimately, international cooperation is both desirable and possible, but needs to go hand in hand with fundamental changes at the domestic level.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Class imbalance"

1

Shultz, Thomas R., Scott E. Fahlman, Susan Craw, Periklis Andritsos, Panayiotis Tsaparas, Ricardo Silva, Chris Drummond, et al. "Class Imbalance Problem." In Encyclopedia of Machine Learning, 171. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ling, Charles X., and Victor S. Sheng. "Class Imbalance Problem." In Encyclopedia of Machine Learning and Data Mining, 204–5. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sotiropoulos, Dionisios N., and George A. Tsihrintzis. "The Class Imbalance Problem." In Machine Learning Paradigms, 51–78. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47194-5_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Attenberg, Josh, and Şeyda Ertekin. "Class Imbalance and Active Learning." In Imbalanced Learning, 101–49. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118646106.ch6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Xu-Ying, and Zhi-Hua Zhou. "Ensemble Methods for Class Imbalance Learning." In Imbalanced Learning, 61–82. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118646106.ch4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kuhn, Max, and Kjell Johnson. "Remedies for Severe Class Imbalance." In Applied Predictive Modeling, 419–43. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-6849-3_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sotiropoulos, Dionisios N., and George A. Tsihrintzis. "Addressing the Class Imbalance Problem." In Machine Learning Paradigms, 79–106. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47194-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bellinger, Colin, Paula Branco, and Luis Torgo. "The CURE for Class Imbalance." In Discovery Science, 3–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33778-0_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Roy, Arjun, Vasileios Iosifidis, and Eirini Ntoutsi. "Multi-fairness Under Class-Imbalance." In Discovery Science, 286–301. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18840-4_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cruz, Ricardo, Kelwin Fernandes, Joaquim F. Pinto Costa, María Pérez Ortiz, and Jaime S. Cardoso. "Ordinal Class Imbalance with Ranking." In Pattern Recognition and Image Analysis, 3–12. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58838-4_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Class imbalance"

1

Sun, Shu, Jinchuan Qian, Xinmin Zhang, and Zhihuan Song. "Class-Imbalance and Client-Imbalance Federated Learning for Fault Diagnosis." In 2024 IEEE 13th Data Driven Control and Learning Systems Conference (DDCLS), 860–65. IEEE, 2024. http://dx.doi.org/10.1109/ddcls61622.2024.10606790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

El Yamani, Yassine, Yousra Fadili, Jihad Kilani, Najib El Kamoun, Youssef Baddi, and Faycal Bensalah. "Hybrid Models for IoT Security: Tackling Class Imbalance." In 2024 11th International Conference on Wireless Networks and Mobile Communications (WINCOM), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/wincom62286.2024.10656654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Shihong, Ruixun Liu, Kaiyu Li, Jiawei Jiang, and Xiangyong Cao. "Class Similarity Transition: Decoupling Class Similarities and Imbalance from Generalized Few-shot Segmentation." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2762–70. IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wallace, Byron C., Kevin Small, Carla E. Brodley, and Thomas A. Trikalinos. "Class Imbalance, Redux." In 2011 IEEE 11th International Conference on Data Mining (ICDM). IEEE, 2011. http://dx.doi.org/10.1109/icdm.2011.33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shrinidhi, M., T. K. Kaushik Jegannathan, and R. Jeya. "Classification of Imbalanced Datasets Using Various Techniques along with Variants of SMOTE Oversampling and ANN." In International Research Conference on IOT, Cloud and Data Science. Switzerland: Trans Tech Publications Ltd, 2023. http://dx.doi.org/10.4028/p-338i7w.

Full text
Abstract:
Using machine learning and/or deep learning for early detection of diseases can help save people’s lives. AI has already been making progress in healthcare, as there is newer and improved software to maintain patient records and produce better imaging for error-free diagnosis and treatment. One drawback of working with real-life datasets is that they are predominantly imbalanced in nature. Most ML and DL algorithms are designed with the assumption that the dataset is equally distributed. Working on such imbalanced datasets causes the models to end up with high type-1 and type-2 error rates, which is not ideal in the medical field, where a misdiagnosis can be fatal. Handling class imbalance thus becomes a necessity, lest the ML/DL model fail to learn and start memorizing the features and noise belonging to the majority class. The PIMA dataset is one such dataset with class imbalance, as it contains 500 instances of one type and 268 instances of another type. Similarly, the Wisconsin Breast Cancer (Original) dataset contains imbalanced data related to breast cancer, with a total of 699 instances, of which 458 instances are of one class (benign tumor images) while 241 instances belong to the other class (malignant tumor images). Prediction/detection of the onset of diabetes or breast cancer with these datasets would be grossly erroneous, and hence the need for handling class imbalance increases. We aim to handle the class imbalance problem in this study using various available techniques, such as the weighted-class approach and SMOTE (and its variants), with a simple artificial neural network model as the classifier.
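The weighted-class approach mentioned here typically assigns each class a weight inversely proportional to its frequency, so minority mistakes cost more during training. A stdlib-only sketch of one common normalisation (illustrative; the paper does not specify its exact weighting scheme):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights, normalised so that a perfectly
    balanced dataset gives every class a weight of 1.0
    (weight_c = n_samples / (n_classes * count_c))."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}
```

For the PIMA class counts above (500 vs. 268), the minority class receives a weight roughly 1.87 times that of the majority class, which is exactly the imbalance ratio between the two.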
APA, Harvard, Vancouver, ISO, and other styles
6

Cruz, Ricardo, Kelwin Fernandes, Jaime S. Cardoso, and Joaquim F. Pinto Costa. "Tackling class imbalance with ranking." In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016. http://dx.doi.org/10.1109/ijcnn.2016.7727469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dong, Yadong, Huaping Guo, Weimei Zhi, and Ming Fan. "Class Imbalance Oriented Logistic Regression." In 2014 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). IEEE, 2014. http://dx.doi.org/10.1109/cyberc.2014.42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Guo, Xinjian, Yilong Yin, Cailing Dong, Gongping Yang, and Guangtong Zhou. "On the Class Imbalance Problem." In 2008 Fourth International Conference on Natural Computation. IEEE, 2008. http://dx.doi.org/10.1109/icnc.2008.871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ansari, Faizanuddin, Swagatam Das, and Pourya Shamsolmoali. "Handling Class Imbalance by Estimating Minority Class Statistics." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lusa, Lara, and Rok Blagus. "The Class-Imbalance Problem for High-Dimensional Class Prediction." In 2012 Eleventh International Conference on Machine Learning and Applications (ICMLA). IEEE, 2012. http://dx.doi.org/10.1109/icmla.2012.223.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Class imbalance"

1

Baskota, Mohit. Classification Of Ad Tone in Political Video Advertisements Under Class Imbalance and Low Data Samples. Ames (Iowa): Iowa State University, January 2019. http://dx.doi.org/10.31274/cc-20240624-361.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, December 2023. http://dx.doi.org/10.31979/mti.2023.2320.

Full text
Abstract:
Nearly 5,000 people are killed and more than 418,000 are injured in weather-related traffic incidents each year. Assessments of the effectiveness of statistical models applied to crash severity prediction compared to machine learning (ML) and deep learning (DL) techniques help researchers and practitioners know which models are most effective under specific conditions. Given the class imbalance in crash data, the synthetic minority over-sampling technique for nominal data (SMOTE-N) was employed to generate synthetic samples for the minority class. The ordered logit model (OLM) and the ordered probit model (OPM) were evaluated as statistical models, while random forest (RF) and XGBoost were evaluated as ML models. For DL, multi-layer perceptron (MLP) and TabNet were evaluated. The performance of these models varied across severity levels, with property damage only (PDO) predictions performing the best and severe injury predictions performing the worst. The TabNet model performed best in predicting severe injury and PDO crashes, while RF was the most effective in predicting moderate injury crashes. However, all models struggled with severe injury classification, indicating the potential need for model refinement and exploration of other techniques. Hence, the choice of model depends on the specific application and the relative costs of false negatives and false positives. This conclusion underscores the need for further research in this area to improve the prediction accuracy of severe and moderate injury incidents, ultimately improving available data that can be used to increase road safety.
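SMOTE-N handles nominal features, where interpolation is meaningless, by building each synthetic record from a per-feature majority vote over a minority point's nearest neighbours. A stdlib-only sketch (illustrative; plain Hamming distance stands in here for the value-difference metric the original SMOTE-N formulation uses):

```python
import random
from collections import Counter

def smoten_sample(minority, k=3, seed=0):
    """Sketch of SMOTE-N for nominal features: choose a minority record,
    take its k nearest minority neighbours (Hamming distance as a
    simplification), and build a synthetic record by a per-feature
    majority vote over those neighbours."""
    rng = random.Random(seed)
    base = rng.choice(minority)
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    neighbours = sorted((r for r in minority if r is not base),
                        key=lambda r: hamming(base, r))[:k]
    # zip(*neighbours) yields one column per nominal feature
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*neighbours))
```

Every feature value in the synthetic record is one actually observed in the minority class, so the oversampled data stays within the original category vocabulary.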
APA, Harvard, Vancouver, ISO, and other styles
3

Baral, Aniruddha, Jeffery Roesler, and Junryu Fu. Early-age Properties of High-volume Fly Ash Concrete Mixes for Pavement: Volume 2. Illinois Center for Transportation, September 2021. http://dx.doi.org/10.36501/0197-9191/21-031.

Full text
Abstract:
High-volume fly ash concrete (HVFAC) is more cost-efficient, sustainable, and durable than conventional concrete. This report presents a state-of-the-art review of HVFAC properties and different fly ash characterization methods. The main challenges identified for HVFAC for pavements are its early-age properties such as air entrainment, setting time, and strength gain, which are the focus of this research. Five fly ash sources in Illinois have been repeatedly characterized through x-ray diffraction, x-ray fluorescence, and laser diffraction over time. The fly ash oxide compositions from the same source but different quarterly samples were overall consistent with most variations observed in SO3 and MgO content. The minerals present in various fly ash sources were similar over multiple quarters, with the mineral content varying. The types of carbon present in the fly ash were also characterized through x-ray photoelectron spectroscopy, loss on ignition, and foam index tests. A new computer vision–based digital foam index test was developed to automatically capture and quantify a video of the foam layer for better operator and laboratory reliability. The heat of hydration and setting times of HVFAC mixes for different cement and fly ash sources as well as chemical admixtures were investigated using an isothermal calorimeter. Class C HVFAC mixes had a higher sulfate imbalance than Class F mixes. The addition of chemical admixtures (both PCE- and lignosulfonate-based) delayed the hydration, with the delay higher for the PCE-based admixture. Both micro- and nano-limestone replacement were successful in accelerating the setting times, with nano-limestone being more effective than micro-limestone. A field test section constructed of HVFAC showed the feasibility and importance of using the noncontact ultrasound device to measure the final setting time as well as determine the saw-cutting time. 
Moreover, field implementation of the maturity method based on wireless thermal sensors demonstrated its viability for early opening strength, and only a few sensors with pavement depth are needed to estimate the field maturity.
APA, Harvard, Vancouver, ISO, and other styles