Journal articles on the topic 'Keypoints detection, machine learning, random forest, convolutional neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 46 journal articles for your research on the topic 'Keypoints detection, machine learning, random forest, convolutional neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Yingying, Yibin Li, Yong Song, and Xuewen Rong. "Facial Expression Recognition Based on Random Forest and Convolutional Neural Network." Information 10, no. 12 (November 28, 2019): 375. http://dx.doi.org/10.3390/info10120375.

Abstract:
As an important part of emotion research, facial expression recognition is a necessary requirement in human–machine interfaces. Generally, a facial expression recognition system includes face detection, feature extraction, and feature classification. Although traditional machine learning methods have achieved great success, most of them are computationally complex and lack the ability to extract comprehensive and abstract features. Deep learning-based methods can achieve a higher recognition rate for facial expressions, but they need a large number of training samples and tuning parameters, and their hardware requirements are very high. To address these problems, this paper proposes a method that combines features extracted by a convolutional neural network (CNN) with the C4.5 classifier to recognize facial expressions, which not only addresses the incompleteness of handcrafted features but also avoids the high hardware requirements of deep learning models. To counter the overfitting and weak generalization ability of a single classifier, a random forest is applied. The paper also improves the C4.5 classifier and the traditional random forest in the course of the experiments. A large number of experiments prove the effectiveness and feasibility of the proposed method.
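
The combination described in this abstract (deep features feeding a tree-ensemble classifier) can be illustrated with a minimal sketch. This is not the authors' implementation: the 48x48 input size, the tiny Keras network, the random stand-in data, and the plain scikit-learn RandomForestClassifier (rather than their improved C4.5-based forest) are all assumptions made for the example.

```python
# Minimal sketch (not the paper's pipeline): a small CNN acts as a feature
# extractor and a random forest classifies the extracted feature vectors.
import numpy as np
from tensorflow import keras
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 48, 48, 1).astype("float32")   # stand-in face crops
y = np.random.randint(0, 7, size=200)                   # 7 expression classes (toy labels)

cnn = keras.Sequential([
    keras.Input(shape=(48, 48, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),              # 32-dimensional feature vector
])

features = cnn.predict(X, verbose=0)                    # untrained features, for illustration only
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, y)
print(forest.score(features, y))
```
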
2

Yan, Guobing, Qiang Sun, Jianying Huang, and Yonghong Chen. "Helmet Detection Based on Deep Learning and Random Forest on UAV for Power Construction Safety." Journal of Advanced Computational Intelligence and Intelligent Informatics 25, no. 1 (January 20, 2021): 40–49. http://dx.doi.org/10.20965/jaciii.2021.p0040.

Abstract:
Image recognition is one of the key technologies for worker’s helmet detection using an unmanned aerial vehicle (UAV). By analyzing the image feature extraction method for workers’ helmet detection based on convolutional neural network (CNN), a double-channel convolutional neural network (DCNN) model is proposed to improve the traditional image processing methods. On the basis of AlexNet model, the image features of the worker can be extracted using two independent CNNs, and the essential image features can be better reflected considering the abstraction degree of the features. Combining a traditional machine learning method and random forest (RF), an intelligent recognition algorithm based on DCNN and RF is proposed for workers’ helmet detection. The experimental results show that deep learning (DL) is closely related to the traditional machine learning methods. Moreover, adding a DL module to the traditional machine learning framework can improve the recognition accuracy.
3

Mehbodniya, Abolfazl, Izhar Alam, Sagar Pande, Rahul Neware, Kantilal Pitambar Rane, Mohammad Shabaz, and Mangena Venu Madhavan. "Financial Fraud Detection in Healthcare Using Machine Learning and Deep Learning Techniques." Security and Communication Networks 2021 (September 9, 2021): 1–8. http://dx.doi.org/10.1155/2021/9293877.

Abstract:
The healthcare sector is one of the prominent sectors in which a lot of data can be collected, not only in terms of health but also in terms of finances. Major frauds happen in the healthcare sector due to the use of credit cards alongside the continuous growth of electronic payments, and credit card fraud monitoring has been a financial challenge for the different service providers. Hence, continuous improvement of fraud detection systems is necessary. Various fraud scenarios occur continuously and lead to massive financial losses. Techniques such as phishing or Trojan viruses are often used to collect sensitive information about credit cards and their owners. Therefore, efficient technology is needed to identify the different types of fraudulent credit card conduct. In this paper, various machine learning and deep learning approaches are used for detecting credit card fraud: algorithms such as Naive Bayes, Logistic Regression, K-Nearest Neighbor (KNN), Random Forest, and a Sequential Convolutional Neural Network are trained on the standard and abnormal features of transactions to detect fraud. For evaluating the accuracy of the model, publicly available data are used. The algorithms achieved accuracies of 96.1%, 94.8%, 95.89%, 97.58%, and 92.3% for Naive Bayes, Logistic Regression, K-Nearest Neighbor (KNN), Random Forest, and the Sequential Convolutional Neural Network, respectively. The comparative analysis showed that the KNN algorithm generates better results than the other approaches.
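
A comparison of this kind is straightforward to reproduce in scikit-learn. The sketch below is only illustrative: the synthetic imbalanced dataset stands in for the public credit card data, and the scores it prints are not the figures reported in the article.

```python
# Hedged sketch: train several classifiers on a synthetic, imbalanced
# "transactions" dataset and compare their test accuracies.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=30, weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```
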
4

Gaifilina, Diana, and Igor Kotenko. "Analysis of deep learning models for network anomaly detection in Internet of Things." Information and Control Systems, no. 1 (March 3, 2021): 28–37. http://dx.doi.org/10.31799/1684-8853-2021-1-28-37.

Abstract:
Introduction: The article discusses the problem of choosing deep learning models for detecting anomalies in Internet of Things (IoT) network traffic. This problem is associated with the necessity to analyze a large number of security events in order to identify the abnormal behavior of smart devices. A powerful technology for analyzing such data is machine learning and, in particular, deep learning. Purpose: Development of recommendations for the selection of deep learning models for anomaly detection in IoT network traffic. Results: The main results of the research are comparative analysis of deep learning models, and recommendations on the use of deep learning models for anomaly detection in IoT network traffic. Multilayer perceptron, convolutional neural network, recurrent neural network, long short-term memory, gated recurrent units, and combined convolutional-recurrent neural network were considered the basic deep learning models. Additionally, the authors analyzed the following traditional machine learning models: naive Bayesian classifier, support vector machines, logistic regression, k-nearest neighbors, boosting, and random forest. The following metrics were used as indicators of anomaly detection efficiency: accuracy, precision, recall, and F-measure, as well as the time spent on training the model. The constructed models demonstrated a higher accuracy rate for anomaly detection in large heterogeneous traffic typical for IoT, as compared to conventional machine learning methods. The authors found that with an increase in the number of neural network layers, the completeness of detecting anomalous connections rises. This has a positive effect on the recognition of unknown anomalies, but increases the number of false positives. In some cases, preparing traditional machine learning models takes less time. This is due to the fact that the application of deep learning methods requires more resources and computing power. Practical relevance: The results obtained can be used to build systems for network anomaly detection in Internet of Things traffic.
5

Akhtar, Shamila, Fawad Hussain, Fawad Riasat Raja, Muhammad Ehatisham-ul-haq, Naveed Khan Baloch, Farruh Ishmanov, and Yousaf Bin Zikria. "Improving Mispronunciation Detection of Arabic Words for Non-Native Learners Using Deep Convolutional Neural Network Features." Electronics 9, no. 6 (June 9, 2020): 963. http://dx.doi.org/10.3390/electronics9060963.

Abstract:
Computer-Aided Language Learning (CALL) is growing nowadays because learning new languages is essential for communication with people of different linguistic backgrounds. Mispronunciation detection is an integral part of CALL and is used to automatically point out errors for the non-native speaker. In this paper, we investigated the mispronunciation detection of Arabic words using a deep Convolutional Neural Network (CNN). For automated pronunciation error detection, we proposed a CNN feature-based model and extracted features from different layers of AlexNet (layers 6, 7, and 8) to train three machine learning classifiers: K-nearest neighbor (KNN), Support Vector Machine (SVM), and Random Forest (RF). We also used a transfer learning-based model in which feature extraction and classification are performed automatically. To evaluate the performance of the proposed method, a comprehensive comparison is provided against a traditional machine learning-based method using Mel Frequency Cepstral Coefficient (MFCC) features, with the same three classifiers (KNN, SVM, and RF) used in the baseline for mispronunciation detection. Experimental results show that the handcrafted features, the transfer learning-based method, and classification based on deep features extracted from AlexNet achieved average accuracies of 73.67%, 85%, and 93.20% on Arabic words, respectively. Moreover, these results reveal that the proposed method with feature selection achieved the best average accuracy of 93.20% among all methods.
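
The core idea, taking activations from an intermediate AlexNet layer and handing them to a classic classifier, can be sketched as below. It is an assumption-laden illustration rather than the authors' code: it uses torchvision's AlexNet with weights=None (recent torchvision API, nothing downloaded), random stand-in inputs instead of speech-derived images, and an SVM standing in for the three classifiers they compare.

```python
# Sketch: cut AlexNet after its second fully connected layer ("layer 7")
# and train a classic classifier on the resulting 4096-dim features.
import torch
import torchvision.models as models
from sklearn.svm import SVC

alexnet = models.alexnet(weights=None).eval()            # untrained here; pretrained in practice
feature_extractor = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:5],            # stop after the second FC layer
)

x = torch.rand(8, 3, 224, 224)                           # stand-in inputs (8 samples)
with torch.no_grad():
    feats = feature_extractor(x).numpy()                 # 8 x 4096 feature matrix
labels = [0, 1, 0, 1, 0, 1, 0, 1]                        # toy correct/mispronounced labels
print(SVC().fit(feats, labels).score(feats, labels))
```
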
6

Huang, Shiyao, and Hao Wu. "Texture Recognition Based on Perception Data from a Bionic Tactile Sensor." Sensors 21, no. 15 (August 2, 2021): 5224. http://dx.doi.org/10.3390/s21155224.

Abstract:
Texture recognition is important for robots to discern the characteristics of the object surface and adjust grasping and manipulation strategies accordingly. It is still challenging to develop texture classification approaches that are accurate and do not require high computational costs. In this work, we adopt a bionic tactile sensor to collect vibration data while sliding against materials of interest. Under a fixed contact pressure and speed, a total of 1000 sets of vibration data from ten different materials were collected. With the tactile perception data, four types of texture recognition algorithms are proposed. Three machine learning algorithms, including support vector machine, random forest, and K-nearest neighbor, are established for texture recognition. The test accuracies of those three methods are 95%, 94%, and 94%, respectively. In the detection process of the machine learning algorithms, asamoto and polyester are easily confused with each other. A convolutional neural network is established to further increase the test accuracy to 98.5%. The three machine learning models and the convolutional neural network demonstrate high accuracy and excellent robustness.
7

Ghorbanzadeh, Omid, Thomas Blaschke, Khalil Gholamnia, Sansar Meena, Dirk Tiede, and Jagannath Aryal. "Evaluation of Different Machine Learning Methods and Deep-Learning Convolutional Neural Networks for Landslide Detection." Remote Sensing 11, no. 2 (January 20, 2019): 196. http://dx.doi.org/10.3390/rs11020196.

Abstract:
There is a growing demand for detailed and accurate landslide maps and inventories around the globe, but particularly in hazard-prone regions such as the Himalayas. Most standard mapping methods require expert knowledge, supervision and fieldwork. In this study, we use optical data from the Rapid Eye satellite and topographic factors to analyze the potential of machine learning methods, i.e., artificial neural network (ANN), support vector machines (SVM) and random forest (RF), and different deep-learning convolution neural networks (CNNs) for landslide detection. We use two training zones and one test zone to independently evaluate the performance of different methods in the highly landslide-prone Rasuwa district in Nepal. Twenty different maps are created using ANN, SVM and RF and different CNN instantiations and are compared against the results of extensive fieldwork through a mean intersection-over-union (mIOU) and other common metrics. This accuracy assessment yields the best result of 78.26% mIOU for a small window size CNN, which uses spectral information only. The additional information from a 5 m digital elevation model helps to discriminate between human settlements and landslides but does not improve the overall classification accuracy. CNNs do not automatically outperform ANN, SVM and RF, although this is sometimes claimed. Rather, the performance of CNNs strongly depends on their design, i.e., layer depth, input window sizes and training strategies. Here, we conclude that the CNN method is still in its infancy as most researchers will either use predefined parameters in solutions like Google TensorFlow or will apply different settings in a trial-and-error manner. Nevertheless, deep-learning can improve landslide mapping in the future if the effects of the different designs are better understood, enough training samples exist, and the effects of augmentation strategies to artificially increase the number of existing samples are better understood.
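
The mean intersection-over-union (mIOU) score used in this study to compare the twenty maps against fieldwork is easy to state concretely. The tiny masks below are invented for the example; only the metric itself reflects the abstract.

```python
# Worked example of mean intersection-over-union for a binary landslide mask.
import numpy as np

def mean_iou(y_true, y_pred, n_classes=2):
    ious = []
    for c in range(n_classes):
        intersection = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union:
            ious.append(intersection / union)
    return float(np.mean(ious))

truth      = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 1]])   # 1 = landslide pixel
prediction = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
print(mean_iou(truth, prediction))
```
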
8

Vishwanath, Manoj, Salar Jafarlou, Ikhwan Shin, Miranda M. Lim, Nikil Dutt, Amir M. Rahmani, and Hung Cao. "Investigation of Machine Learning Approaches for Traumatic Brain Injury Classification via EEG Assessment in Mice." Sensors 20, no. 7 (April 4, 2020): 2027. http://dx.doi.org/10.3390/s20072027.

Abstract:
Due to the difficulties and complications in the quantitative assessment of traumatic brain injury (TBI) and its increasing relevance in today’s world, robust detection of TBI has become more significant than ever. In this work, we investigate several machine learning approaches to assess their performance in classifying electroencephalogram (EEG) data of TBI in a mouse model. Algorithms such as decision trees (DT), random forest (RF), neural network (NN), support vector machine (SVM), K-nearest neighbors (KNN) and convolutional neural network (CNN) were analyzed based on their performance to classify mild TBI (mTBI) data from those of the control group in wake stages for different epoch lengths. Average power in different frequency sub-bands and alpha:theta power ratio in EEG were used as input features for machine learning approaches. Results in this mouse model were promising, suggesting similar approaches may be applicable to detect TBI in humans in practical scenarios.
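
The input features named in this abstract (average power in frequency sub-bands plus an alpha:theta ratio) can be computed, for instance, with Welch's method. The sampling rate, band edges, and random signal below are assumptions for illustration, not the study's parameters.

```python
# Sketch: per-band EEG power features and an alpha:theta ratio via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sampling rate in Hz
eeg = np.random.randn(fs * 4)              # 4-second stand-in EEG epoch

freqs, psd = welch(eeg, fs=fs, nperseg=fs)
df = freqs[1] - freqs[0]

def band_power(low, high):
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum() * df            # approximate integral of the PSD over the band

theta, alpha = band_power(4, 8), band_power(8, 13)
features = [band_power(0.5, 4), theta, alpha, band_power(13, 30), alpha / theta]
print(features)
```
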
9

Nagarajan, G., A. Mahabub Basha, and R. Poornima. "Autism Spectrum Disorder Identification Using Polynomial Distribution based Convolutional Neural Network." NeuroQuantology 19, no. 2 (March 20, 2021): 19–30. http://dx.doi.org/10.14704/nq.2021.19.2.nq21013.

Abstract:
One main psychiatric disorder found in humans is ASD (Autistic Spectrum Disorder). The disease manifests in a mental disorder that restricts humans from communications, language, speech in terms of their individual abilities. Even though its cure is complex and literally impossible, its early detection is required for mitigating its intensity. ASD does not have a pre-defined age for affecting humans. A system for effectively predicting ASD based on MLTs (Machine Learning Techniques) is proposed in this work. Hybrid APMs (Autism Prediction Models) combining multiple techniques like RF (Random Forest), CART (Classification and Regression Trees), RF-ID3 (RF-Iterative Dichotomiser 3) perform well, but face issues in memory usage, execution times and inadequate feature selections. Taking these issues into account, this work overcomes these hurdles in this proposed work with a hybrid technique that combines MCSO (Modified Chicken Swarm Optimization) and PDCNN (Polynomial Distribution based Convolution Neural Network) algorithms for its objective. The proposed scheme’s experimental results prove its higher levels of accuracy, precision, sensitivity, specificity, FPRs (False Positive Rates) and lowered time complexity when compared to other methods.
10

Jamil, Ramish, Imran Ashraf, Furqan Rustam, Eysha Saad, Arif Mehmood, and Gyu Sang Choi. "Detecting sarcasm in multi-domain datasets using convolutional neural networks and long short term memory network model." PeerJ Computer Science 7 (August 25, 2021): e645. http://dx.doi.org/10.7717/peerj-cs.645.

Abstract:
Sarcasm emerges as a common phenomenon across social networking sites because people express their negative thoughts, hatred and opinions using positive vocabulary which makes it a challenging task to detect sarcasm. Although various studies have investigated the sarcasm detection on baseline datasets, this work is the first to detect sarcasm from a multi-domain dataset that is constructed by combining Twitter and News Headlines datasets. This study proposes a hybrid approach where the convolutional neural networks (CNN) are used for feature extraction while the long short-term memory (LSTM) is trained and tested on those features. For performance analysis, several machine learning algorithms such as random forest, support vector classifier, extra tree classifier and decision tree are used. The performance of both the proposed model and machine learning algorithms is analyzed using the term frequency-inverse document frequency, bag of words approach, and global vectors for word representations. Experimental results indicate that the proposed model surpasses the performance of the traditional machine learning algorithms with an accuracy of 91.60%. Several state-of-the-art approaches for sarcasm detection are compared with the proposed model and results suggest that the proposed model outperforms these approaches concerning the precision, recall and F1 scores. The proposed model is accurate, robust, and performs sarcasm detection on a multi-domain dataset.
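
A rough Keras sketch of the hybrid idea, convolutional layers extracting local features from token sequences followed by an LSTM, is shown below. The vocabulary size, sequence length, layer widths, and random data are placeholders and not the configuration used in the paper.

```python
# Hedged sketch of a CNN-LSTM text classifier for binary sarcasm detection.
import numpy as np
from tensorflow import keras

X = np.random.randint(1, 5000, size=(64, 40))    # 64 tokenised headlines/tweets (toy data)
y = np.random.randint(0, 2, size=64)             # 1 = sarcastic

model = keras.Sequential([
    keras.Input(shape=(40,), dtype="int32"),
    keras.layers.Embedding(5000, 64),
    keras.layers.Conv1D(64, 5, activation="relu"),   # CNN feature extraction
    keras.layers.MaxPooling1D(2),
    keras.layers.LSTM(32),                           # LSTM trained on the CNN features
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)
```
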
11

Zhang, Zhao, Paulo Flores, C. Igathinathane, Dayakar L. Naik, Ravi Kiran, and Joel K. Ransom. "Wheat Lodging Detection from UAS Imagery Using Machine Learning Algorithms." Remote Sensing 12, no. 11 (June 5, 2020): 1838. http://dx.doi.org/10.3390/rs12111838.

Abstract:
The current mainstream approach of using manual measurements and visual inspections for crop lodging detection is inefficient, time-consuming, and subjective. An innovative method for wheat lodging detection that can overcome or alleviate these shortcomings would be welcomed. This study proposed a systematic approach for wheat lodging detection in research plots (372 experimental plots), which consisted of using unmanned aerial systems (UAS) for aerial imagery acquisition, manual field evaluation, and machine learning algorithms to detect the occurrence or not of lodging. UAS imagery was collected on three different dates (23 and 30 July 2019, and 8 August 2019) after lodging occurred. Traditional machine learning and deep learning were evaluated and compared in this study in terms of classification accuracy and standard deviation. For traditional machine learning, five types of features (i.e. gray level co-occurrence matrix, local binary pattern, Gabor, intensity, and Hu-moment) were extracted and fed into three traditional machine learning algorithms (i.e., random forest (RF), neural network, and support vector machine) for detecting lodged plots. For the datasets on each imagery collection date, the accuracies of the three algorithms were not significantly different from each other. For any of the three algorithms, accuracies on the first and last date datasets had the lowest and highest values, respectively. Incorporating standard deviation as a measurement of performance robustness, RF was determined as the most satisfactory. Regarding deep learning, three different convolutional neural networks (simple convolutional neural network, VGG-16, and GoogLeNet) were tested. For any of the single date datasets, GoogLeNet consistently had superior performance over the other two methods. Further comparisons between RF and GoogLeNet demonstrated that the detection accuracies of the two methods were not significantly different from each other (p > 0.05); hence, the choice of any of the two would not affect the final detection accuracies. However, considering the fact that the average accuracy of GoogLeNet (93%) was larger than RF (91%), it was recommended to use GoogLeNet for wheat lodging detection. This research demonstrated that UAS RGB imagery, coupled with the GoogLeNet machine learning algorithm, can be a novel, reliable, objective, simple, low-cost, and effective (accuracy > 90%) tool for wheat lodging detection.
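
One of the handcrafted feature types listed in this abstract, grey-level co-occurrence matrix (GLCM) statistics, can be extracted with scikit-image and fed to a random forest as sketched below. The random patches, labels, and GLCM parameters are illustrative assumptions, not the study's settings.

```python
# Sketch: GLCM texture features per image patch, classified with a random forest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

patches = np.random.randint(0, 256, size=(40, 32, 32), dtype=np.uint8)   # stand-in plot images
labels = np.random.randint(0, 2, size=40)                                # lodged / not lodged (toy)
X = np.array([glcm_features(p) for p in patches])
print(RandomForestClassifier(random_state=0).fit(X, labels).score(X, labels))
```
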
12

Ashik, Mathew, A. Jyothish, S. Anandaram, P. Vinod, Francesco Mercaldo, Fabio Martinelli, and Antonella Santone. "Detection of Malicious Software by Analyzing Distinct Artifacts Using Machine Learning and Deep Learning Algorithms." Electronics 10, no. 14 (July 15, 2021): 1694. http://dx.doi.org/10.3390/electronics10141694.

Abstract:
Malware is one of the most significant threats in today’s computing world since the number of websites distributing malware is increasing at a rapid rate. Malware analysis and prevention methods are increasingly becoming necessary for computer systems connected to the Internet. This software exploits the system’s vulnerabilities to steal valuable information without the user’s knowledge, and stealthily send it to remote servers controlled by attackers. Traditionally, anti-malware products use signatures for detecting known malware. However, the signature-based method does not scale in detecting obfuscated and packed malware. Considering that the cause of a problem is often best understood by studying the structural aspects of a program like the mnemonics, instruction opcode, API Call, etc. In this paper, we investigate the relevance of the features of unpacked malicious and benign executables like mnemonics, instruction opcodes, and API to identify a feature that classifies the executable. Prominent features are extracted using Minimum Redundancy and Maximum Relevance (mRMR) and Analysis of Variance (ANOVA). Experiments were conducted on four datasets using machine learning and deep learning approaches such as Support Vector Machine (SVM), Naïve Bayes, J48, Random Forest (RF), and XGBoost. In addition, we also evaluate the performance of the collection of deep neural networks like Deep Dense network, One-Dimensional Convolutional Neural Network (1D-CNN), and CNN-LSTM in classifying unknown samples, and we observed promising results using APIs and system calls. On combining APIs/system calls with static features, a marginal performance improvement was attained comparing models trained only on dynamic features. Moreover, to improve accuracy, we implemented our solution using distinct deep learning methods and demonstrated a fine-tuned deep neural network that resulted in an F1-score of 99.1% and 98.48% on Dataset-2 and Dataset-3, respectively.
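
As an illustration of the ANOVA-based feature ranking mentioned above, scikit-learn's SelectKBest with the f_classif score can be used as sketched below; the synthetic data and the choice of k are placeholders rather than the paper's configuration, and the mRMR step is omitted.

```python
# Sketch: keep the k features with the highest ANOVA F-scores.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=500, n_features=100, n_informative=10, random_state=0)
selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
print(sorted(selector.get_support(indices=True)))   # indices of the 10 top-ranked features
```
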
13

Yang, Hao, Qin He, Zhenyan Liu, and Qian Zhang. "Malicious Encryption Traffic Detection Based on NLP." Security and Communication Networks 2021 (August 3, 2021): 1–10. http://dx.doi.org/10.1155/2021/9960822.

Abstract:
The development of Internet and network applications has brought the development of encrypted communication technology. But on this basis, malicious traffic also uses encryption to avoid traditional security protection and detection. Traditional security protection and detection methods cannot accurately detect encrypted malicious traffic. In recent years, the rise of artificial intelligence allows us to use machine learning and deep learning methods to detect encrypted malicious traffic without decryption, and the detection results are very accurate. At present, the research on malicious encrypted traffic detection mainly focuses on the characteristics’ analysis of encrypted traffic and the selection of machine learning algorithms. In this paper, a method combining natural language processing and machine learning is proposed; that is, a detection method based on TF-IDF is proposed to build a detection model. In the process of data preprocessing, this method introduces the natural language processing method, namely, the TF-IDF model, to extract data information, obtain the importance of keywords, and then reconstruct the characteristics of data. The detection method based on the TF-IDF model does not need to analyze each field of the data set. Compared with the general machine learning data preprocessing method, that is, data encoding processing, the experimental results show that using natural language processing technology to preprocess data can effectively improve the accuracy of detection. Gradient boosting classifier, random forest classifier, AdaBoost classifier, and the ensemble model based on these three classifiers are, respectively, used in the construction of the later models. At the same time, CNN neural network in deep learning is also used for training, and CNN can effectively extract data information. Under the condition that the input data of the classifier and neural network are consistent, through the comparison and analysis of various methods, the accuracy of the one-dimensional convolutional network based on CNN is slightly higher than that of the classifier based on machine learning.
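
The TF-IDF-then-classifier idea can be sketched as follows: treat each record as a string of tokens, vectorise it with TF-IDF, and train a gradient boosting classifier. The toy "traffic" strings and labels are invented for the example and do not come from the paper's dataset.

```python
# Hedged sketch: TF-IDF features from token strings, classified with gradient boosting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

records = [
    "tls12 cipher_a sni_present cert_short",
    "tls10 cipher_b sni_absent cert_selfsigned",
    "tls12 cipher_a sni_present cert_long",
    "tls10 cipher_b sni_absent cert_selfsigned",
]
labels = [0, 1, 0, 1]                       # 0 = benign, 1 = malicious (toy labels)

X = TfidfVectorizer().fit_transform(records).toarray()
clf = GradientBoostingClassifier(random_state=0).fit(X, labels)
print(clf.predict(X))
```
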
14

Praticò, Filippo Giammaria, Rosario Fedele, Vitalii Naumov, and Tomas Sauer. "Detection and Monitoring of Bottom-Up Cracks in Road Pavement Using a Machine-Learning Approach." Algorithms 13, no. 4 (March 31, 2020): 81. http://dx.doi.org/10.3390/a13040081.

Abstract:
The current methods that aim at monitoring the structural health status (SHS) of road pavements allow detecting surface defects and failures. This notwithstanding, there is a lack of methods and systems that are able to identify concealed cracks (particularly, bottom-up cracks) and monitor their growth over time. For this reason, the objective of this study is to set up a supervised machine learning (ML)-based method for the identification and classification of the SHS of a differently cracked road pavement based on its vibro-acoustic signature. The method aims at collecting these signatures (using acoustic-sensors, located at the roadside) and classifying the pavement’s SHS through ML models. Different ML classifiers (i.e., multilayer perceptron, MLP, convolutional neural network, CNN, random forest classifier, RFC, and support vector classifier, SVC) were used and compared. Results show the possibility of associating with great accuracy (i.e., MLP = 91.8%, CNN = 95.6%, RFC = 91.0%, and SVC = 99.1%) a specific vibro-acoustic signature to a differently cracked road pavement. These results are encouraging and represent the bases for the application of the proposed method in real contexts, such as monitoring roads and bridges using wireless sensor networks, which is the target of future studies.
15

Bansal, Priti, Sumit Kumar, Ritesh Srivastava, and Saksham Agarwal. "Using Transfer Learning and Hierarchical Classifier to Diagnose Melanoma From Dermoscopic Images." International Journal of Healthcare Information Systems and Informatics 16, no. 2 (April 2021): 73–86. http://dx.doi.org/10.4018/ijhisi.20210401.oa4.

Abstract:
The deadliest form of skin cancer is melanoma, and if detected in time, it is curable. Detection of melanoma using biopsy is a painful and time-consuming task. Alternate means are being used by medical experts to diagnose melanoma by extracting features from skin lesion images. Medical image diagnosis requires intelligent systems. Many intelligent systems based on image processing and machine learning have been proposed by researchers in the past to detect different kinds of diseases that are successfully used by healthcare organisations worldwide. Intelligent systems to detect melanoma from skin lesion images are also evolving with the aim of improving the accuracy of melanoma detection. Feature extraction plays a critical role. In this paper, a model is proposed in which features are extracted using convolutional neural network (CNN) with transfer learning and a hierarchical classifier consisting of random forest (RF), k-nearest neighbor (KNN), and adaboost is used to detect melanoma using the extracted features. Experimental results show the effectiveness of the proposed model.
16

Arnold, M., M. Hoyer, and S. Keller. "Convolutional Neural Networks for Detecting Bridge Crossing Events with Ground-Based Interferometric Radar Data." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2021 (June 17, 2021): 31–38. http://dx.doi.org/10.5194/isprs-annals-v-1-2021-31-2021.

Abstract:
This study focuses on detecting vehicle crossings (events) with ground-based interferometric radar (GBR) time series data recorded at bridges in the course of critical infrastructure monitoring. To address the challenging event detection and time series classification task, we rely on a deep learning (DL) architecture. The GBR-displacement data originates from real-world measurements at two German bridges under normal traffic conditions. As preprocessing, we only apply a low-pass filter. We develop and evaluate a one-dimensional convolutional neural network (CNN) to achieve a solely data-driven event detection. As a baseline machine learning approach, we use a Random Forest (RF) with a selected feature-based input. Both models’ performance is evaluated on two datasets by focusing on identifying events and pure bridge oscillations. Generally, the event classification results are promising, and the CNN outperforms the RF with an overall accuracy of 94.7% on the test subset. By relying on an entirely unknown second dataset, we focus on the models’ performances regarding the distinction between events and decays. On this dataset, the CNN meets this challenge successfully, while the feature-based RF classifies the majority of non-event decays as events. To sum up, the presented results reveal the potential of a data-driven DL approach concerning the detection of bridge crossing events in GBR-based displacement time series data. Based on such an event detection, a prospective assessment of bridge conditions seems feasible as an extension to previous structural health monitoring approaches.
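
A minimal one-dimensional CNN for this kind of binary time-series classification (crossing event vs. pure oscillation) might look like the sketch below. The window length, filter counts, and random displacement windows are assumptions for illustration, not the architecture evaluated in the study.

```python
# Hedged sketch: a small 1D CNN classifying fixed-length displacement windows.
import numpy as np
from tensorflow import keras

X = np.random.randn(100, 512, 1).astype("float32")   # 100 stand-in displacement windows
y = np.random.randint(0, 2, size=100)                 # 1 = crossing event (toy labels)

model = keras.Sequential([
    keras.Input(shape=(512, 1)),
    keras.layers.Conv1D(16, 7, activation="relu"),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(32, 5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
```
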
17

Contla Hernández, Brenda, Nicolas Lopez-Villalobos, and Matthieu Vignes. "Identifying Health Status in Grazing Dairy Cows from Milk Mid-Infrared Spectroscopy by Using Machine Learning Methods." Animals 11, no. 8 (July 21, 2021): 2154. http://dx.doi.org/10.3390/ani11082154.

Abstract:
The early detection of health problems in dairy cattle is crucial to reduce economic losses. Mid-infrared (MIR) spectrometry has been used for identifying the composition of cow milk in routine tests. As such, it is a potential tool to detect diseases at an early stage. Partial least squares discriminant analysis (PLS-DA) has been widely applied to identify illness such as lameness by using MIR spectrometry data. However, this method suffers some limitations. In this study, a series of machine learning techniques—random forest, support vector machine, neural network (NN), convolutional neural network and ensemble models—were used to test the feasibility of identifying cow sickness from 1909 milk sample MIR spectra from Holstein-Friesian, Jersey and crossbreed cows under grazing conditions. PLS-DA was also performed to compare the results. The sick cow records had a time window of 21 days before and 7 days after the milk sample was analysed. NN showed a sensitivity of 61.74%, specificity of 97% and positive predicted value (PPV) of nearly 60%. Although the sensitivity of the PLS-DA was slightly higher than NN (65.6%), the specificity and PPV were lower (79.59% and 15.25%, respectively). This indicates that by using NN, it is possible to identify a health problem with a reasonable level of accuracy.
18

Sriram, K. V., and R. H. Havaldar. "Analytical review and study on object detection techniques in the image." International Journal of Modeling, Simulation, and Scientific Computing 12, no. 05 (May 21, 2021): 2150031. http://dx.doi.org/10.1142/s1793962321500318.

Abstract:
Object detection is one of the most fundamental but challenging issues in the field of computer vision. Object detection identifies the presence of various individual objects in an image. Great success has been attained for object detection/recognition problems in controlled environments, but the problem remains unsolved in uncontrolled settings, particularly when the objects are placed in arbitrary poses in occluded and cluttered environments. In the last few years, a lot of effort has been made by researchers to resolve this issue because of its wide range of applications in computer vision tasks, like content-enabled image retrieval, event or activity recognition, scene understanding, and so on. This review provides a detailed survey of 50 research papers presenting object detection techniques, like machine learning-based techniques, gradient-based techniques, the Fast Region-based Convolutional Neural Network (Fast R-CNN) detector, and foreground-based techniques. Here, the machine learning-based approaches are classified into deep learning-based approaches, random forest, Support Vector Machine (SVM), and so on. Moreover, the challenges faced by the existing techniques are explained in the gaps and issues section. An analysis based on the classification, toolset, datasets utilized, year of publication, and performance metrics is discussed. The future dimension of the research is based on the gaps and issues identified from the existing research works.
19

Ahsan, Mostofa, Rahul Gomes, Md Minhaz Chowdhury, and Kendall E. Nygard. "Enhancing Machine Learning Prediction in Cybersecurity Using Dynamic Feature Selector." Journal of Cybersecurity and Privacy 1, no. 1 (March 21, 2021): 199–218. http://dx.doi.org/10.3390/jcp1010011.

Abstract:
Machine learning algorithms are becoming very efficient in intrusion detection systems with their real time response and adaptive learning process. A robust machine learning model can be deployed for anomaly detection by using a comprehensive dataset with multiple attack types. Nowadays datasets contain many attributes. Such high dimensionality of datasets poses a significant challenge to information extraction in terms of time and space complexity. Moreover, having so many attributes may be a hindrance towards creation of a decision boundary due to noise in the dataset. Large scale data with redundant or insignificant features increases the computational time and often decreases goodness of fit which is a critical issue in cybersecurity. In this research, we have proposed and implemented an efficient feature selection algorithm to filter insignificant variables. Our proposed Dynamic Feature Selector (DFS) uses statistical analysis and feature importance tests to reduce model complexity and improve prediction accuracy. To evaluate DFS, we conducted experiments on two datasets used for cybersecurity research namely Network Security Laboratory (NSL-KDD) and University of New South Wales (UNSW-NB15). In the meta-learning stage, four algorithms were compared namely Bidirectional Long Short-Term Memory (Bi-LSTM), Gated Recurrent Units, Random Forest and a proposed Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) for accuracy estimation. For NSL-KDD, experiments revealed an increment in accuracy from 99.54% to 99.64% while reducing feature size of one-hot encoded features from 123 to 50. In UNSW-NB15 we observed an increase in accuracy from 90.98% to 92.46% while reducing feature size from 196 to 47. The proposed approach is thus able to achieve higher accuracy while significantly lowering number of features required for processing.
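
One ingredient of such a feature selector, ranking features by importance and keeping only the strongest, can be sketched with scikit-learn's SelectFromModel. The synthetic data, the random-forest importance score, and the cap of 50 features are illustrative assumptions, not the DFS algorithm itself.

```python
# Sketch: keep the 50 most important features according to a random forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=1000, n_features=123, n_informative=15, random_state=0)
selector = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0),
                           max_features=50, threshold=-np.inf).fit(X, y)
print(selector.transform(X).shape)   # (1000, 50): reduced feature matrix
```
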
20

Gul, Hira, Nadeem Javaid, Ibrar Ullah, Ali Mustafa Qamar, Muhammad Khalil Afzal, and Gyanendra Prasad Joshi. "Detection of Non-Technical Losses Using SOSTLink and Bidirectional Gated Recurrent Unit to Secure Smart Meters." Applied Sciences 10, no. 9 (April 30, 2020): 3151. http://dx.doi.org/10.3390/app10093151.

Abstract:
Energy consumption is increasing exponentially with the increase in electronic gadgets. Losses occur during generation, transmission, and distribution. The energy demand leads to increase in electricity theft (ET) in distribution side. Data analysis is the process of assessing the data using different analytical and statistical tools to extract useful information. Fluctuation in energy consumption patterns indicates electricity theft. Utilities bear losses of millions of dollar every year. Hardware-based solutions are considered to be the best; however, the deployment cost of these solutions is high. Software-based solutions are data-driven and cost-effective. We need big data for analysis and artificial intelligence and machine learning techniques. Several solutions have been proposed in existing studies; however, low detection performance and high false positive rate are the major issues. In this paper, we first time employ bidirectional Gated Recurrent Unit for ET detection for classification using real time-series data. We also propose a new scheme, which is a combination of oversampling technique Synthetic Minority Oversampling TEchnique (SMOTE) and undersampling technique Tomek Link: “Smote Over Sampling Tomik Link (SOSTLink) sampling technique”. The Kernel Principal Component Analysis is used for feature extraction. In order to evaluate the proposed model’s performance, five performance metrics are used, including precision, recall, F1-score, Root Mean Square Error (RMSE), and receiver operating characteristic curve. Experiments show that our proposed model outperforms the state-of-the-art techniques: logistic regression, decision tree, random forest, support vector machine, convolutional neural network, long short-term memory, hybrid of multilayer perceptron and convolutional neural network.
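
The combined over- and under-sampling idea (SMOTE plus Tomek links) is available, for example, in the imbalanced-learn package, as sketched below; the synthetic imbalanced dataset stands in for the real consumption records, and SMOTETomek is a generic implementation rather than the paper's SOSTLink scheme.

```python
# Sketch: rebalance an imbalanced dataset with SMOTE oversampling + Tomek link removal.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.combine import SMOTETomek

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))   # class counts before and after resampling
```
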
21

Kilimci, Zeynep Hilal, Aykut Güven, Mitat Uysal, and Selim Akyokus. "Mood Detection from Physical and Neurophysical Data Using Deep Learning Models." Complexity 2019 (December 14, 2019): 1–15. http://dx.doi.org/10.1155/2019/6434578.

Abstract:
Nowadays, smart devices as a part of daily life collect data about their users with the help of sensors placed on them. Sensor data are usually physical data but mobile applications collect more than physical data like device usage habits and personal interests. Collected data are usually classified as personal, but they contain valuable information about their users when it is analyzed and interpreted. One of the main purposes of personal data analysis is to make predictions about users. Collected data can be divided into two major categories: physical and behavioral data. Behavioral data are also named as neurophysical data. Physical and neurophysical parameters are collected as a part of this study. Physical data contains measurements of the users like heartbeats, sleep quality, energy, movement/mobility parameters. Neurophysical data contain keystroke patterns like typing speed and typing errors. Users’ emotional/mood statuses are also investigated by asking daily questions. Six questions are asked to the users daily in order to determine the mood of them. These questions are emotion-attached questions, and depending on the answers, users’ emotional states are graded. Our aim is to show that there is a connection between users’ physical/neurophysical parameters and mood/emotional conditions. To prove our hypothesis, we collect and measure physical and neurophysical parameters of 15 users for 1 year. The novelty of this work to the literature is the usage of both combinations of physical and neurophysical parameters. Another novelty is that the emotion classification task is performed by both conventional machine learning algorithms and deep learning models. For this purpose, Feedforward Neural Network (FFNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) neural network are employed as deep learning methodologies. Multinomial Naïve Bayes (MNB), Support Vector Regression (SVR), Decision Tree (DT), Random Forest (RF), and Decision Integration Strategy (DIS) are evaluated as conventional machine learning algorithms. To the best of our knowledge, this is the very first attempt to analyze the neurophysical conditions of the users by evaluating deep learning models for mood analysis and enriching physical characteristics with neurophysical parameters. Experiment results demonstrate that the utilization of deep learning methodologies and the combination of both physical and neurophysical parameters enhances the classification success of the system to interpret the mood of the users. A wide range of comparative and extensive experiments shows that the proposed model exhibits noteworthy results compared to the state-of-art studies.
22

T, Anitha, Charlyn Pushpa Latha G, and Surendra Prasad M. "A Proficient Adaptive K-means based Brain Tumor Segmentation and Detection Using Deep Learning Scheme with PSO." Journal of Computational Science and Intelligent Technologies 1, no. 3 (2020): 9–14. http://dx.doi.org/10.53409/mnaa.jcsit20201302.

Abstract:
Determining the size of the tumour is a significant obstacle in brain tumour preparation and objective assessment. Magnetic Resonance Imaging (MRI) is a non-invasive method that has emerged, without ionizing radiation, as a front-line diagnostic method for brain tumours. Several approaches have been applied in recent years to segment MRI brain tumours automatically. These methods can be divided into two groups based on conventional learning; in methods such as support vector machine (SVM) and random forest, hand-crafted features are fed to a classifier. However, once the hand-crafted features are decided, the manually extracted features are given to the classifiers as input. This is a time-consuming activity, and the output is heavily dependent upon the experience of the operator. To avoid this problem, this research proposes fully automated detection of brain tumours using a Convolutional Neural Network (CNN). It also uses brain images of high-grade gliomas from the BRATS 2015 database. The suggested method performs brain tumour segmentation using k-means clustering, and patient survival rates are increased by the proposed early diagnosis of brain tumours using the CNN.
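
The k-means segmentation step mentioned above can be illustrated on pixel intensities as in the sketch below; the random "slice", the number of clusters, and the brightest-cluster heuristic are assumptions made for the example, not the adaptive scheme proposed in the paper.

```python
# Toy sketch: cluster MRI pixel intensities with k-means and take the brightest
# cluster as a candidate tumour region.
import numpy as np
from sklearn.cluster import KMeans

slice_img = np.random.rand(64, 64)                      # stand-in MRI slice
pixels = slice_img.reshape(-1, 1)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
segmented = km.labels_.reshape(slice_img.shape)
tumour_cluster = int(np.argmax(km.cluster_centers_))    # brightest intensity cluster
mask = segmented == tumour_cluster
print(mask.sum(), "candidate tumour pixels")
```
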
23

Lu, Yijie, Zhen Zhang, Donghui Shangguan, and Junhua Yang. "Novel Machine Learning Method Integrating Ensemble Learning and Deep Learning for Mapping Debris-Covered Glaciers." Remote Sensing 13, no. 13 (July 2, 2021): 2595. http://dx.doi.org/10.3390/rs13132595.

Abstract:
Glaciers in High Mountain Asia (HMA) have a significant impact on human activity. Thus, a detailed and up-to-date inventory of glaciers is crucial, along with monitoring them regularly. The identification of debris-covered glaciers is a fundamental and yet challenging component of research into glacier change and water resources, but it is limited by spectral similarities with surrounding bedrock, snow-affected areas, and mountain-shadowed areas, along with issues related to manual discrimination. Therefore, to use fewer human, material, and financial resources, it is necessary to develop better methods to determine the boundaries of debris-covered glaciers. This study focused on debris-covered glacier mapping using a combination of related technologies such as random forest (RF) and convolutional neural network (CNN) models. The models were tested on Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) data and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), selecting Eastern Pamir and Nyainqentanglha as typical glacier areas on the Tibetan Plateau to construct a glacier classification system. The performances of different classifiers were compared, the different classifier construction strategies were optimized, and multiple single-classifier outputs were obtained with slight differences. Using the relationship between the surface area covered by debris and the machine learning model parameters, it was found that the debris coverage directly determined the performance of the machine learning model and mitigated the issues affecting the detection of active and inactive debris-covered glaciers. Various classification models were integrated to ascertain the best model for the classification of glaciers.
24

Taheri, Rahim, Reza Javidan, Mohammad Shojafar, Zahra Pooranian, Ali Miri, and Mauro Conti. "On defending against label flipping attacks on malware detection systems." Neural Computing and Applications 32, no. 18 (July 28, 2020): 14781–800. http://dx.doi.org/10.1007/s00521-020-04831-9.

Abstract:
Label manipulation attacks are a subclass of data poisoning attacks in adversarial machine learning used against different applications, such as malware detection. These types of attacks represent a serious threat to detection systems in environments having high noise rate or uncertainty, such as complex networks and the Internet of Things (IoT). Recent work in the literature has suggested using the K-nearest neighboring algorithm to defend against such attacks. However, such an approach can suffer from low to miss-classification rate accuracy. In this paper, we design an architecture to tackle the Android malware detection problem in IoT systems. We develop an attack mechanism based on the silhouette clustering method, modified for mobile Android platforms. We proposed two convolutional neural network-type deep learning algorithms against this Silhouette Clustering-based Label Flipping Attack. We show the effectiveness of these two defense algorithms—label-based semi-supervised defense and clustering-based semi-supervised defense—in correcting labels being attacked. We evaluate the performance of the proposed algorithms by varying the various machine learning parameters on three Android datasets (Drebin, Contagio, and Genome) and three types of features (API, intent, and permission). Our evaluation shows that using random forest feature selection and varying ratios of features can result in an improvement of up to 19% accuracy when compared with the state-of-the-art method in the literature.
25

Lee, Jangho, Yingxi Rona Shi, Changjie Cai, Pubu Ciren, Jianwu Wang, Aryya Gangopadhyay, and Zhibo Zhang. "Machine Learning Based Algorithms for Global Dust Aerosol Detection from Satellite Images: Inter-Comparisons and Evaluation." Remote Sensing 13, no. 3 (January 28, 2021): 456. http://dx.doi.org/10.3390/rs13030456.

Abstract:
Identifying dust aerosols from passive satellite images is of great interest for many applications. In this study, we developed five different machine-learning (ML) based algorithms, including Logistic Regression, K Nearest Neighbor, Random Forest (RF), Feed Forward Neural Network (FFNN), and Convolutional Neural Network (CNN), to identify dust aerosols in the daytime satellite images from the Visible Infrared Imaging Radiometer Suite (VIIRS) under cloud-free conditions on a global scale. In order to train the ML algorithms, we collocated the state-of-the-art dust detection product from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) with the VIIRS observations along the CALIOP track. The 16 VIIRS M-band observations with the center wavelength ranging from deep blue to thermal infrared, together with solar-viewing geometries and pixel time and locations, are used as the predictor variables. Four different sets of training input data are constructed based on different combinations of VIIRS pixel and predictor variables. The validation and comparison results based on the collocated CALIOP data indicate that the FFNN method based on all available predictor variables is the best performing one among all methods. It has an averaged dust detection accuracy of about 81%, 89%, and 85% over land, ocean and whole globe, respectively, compared with collocated CALIOP. When applied to off-track VIIRS pixels, the FFNN method retrieves geographical distributions of dust that are in good agreement with on-track results as well as CALIOP statistics. For further evaluation, we compared our results based on the ML algorithms to NOAA’s Aerosol Detection Product (ADP), which is a product that classifies dust, smoke, and ash using physical-based methods. The comparison reveals both similarity and differences. Overall, this study demonstrates the great potential of ML methods for dust detection and proves that these methods can be trained on the CALIOP track and then applied to the whole granule of VIIRS granule.
26

Ma, Nan, Lin Sun, Chenghu Zhou, and Yawen He. "Cloud Detection Algorithm for Multi-Satellite Remote Sensing Imagery Based on a Spectral Library and 1D Convolutional Neural Network." Remote Sensing 13, no. 16 (August 22, 2021): 3319. http://dx.doi.org/10.3390/rs13163319.

Abstract:
Automatic cloud detection in remote sensing images is of great significance. Deep-learning-based methods can achieve cloud detection with high accuracy; however, network training heavily relies on a large number of labels. Manually labelling pixel-wise level cloud and non-cloud annotations for many remote sensing images is laborious and requires expert-level knowledge. Different types of satellite images cannot share a set of training data, due to the difference in spectral range and spatial resolution between them. Hence, labelled samples in each upcoming satellite image are required to train a new deep-learning-based model. In order to overcome such a limitation, a novel cloud detection algorithm based on a spectral library and convolutional neural network (CD-SLCNN) was proposed in this paper. In this method, the residual learning and one-dimensional CNN (Res-1D-CNN) was used to accurately capture the spectral information of the pixels based on the prior spectral library, effectively preventing errors due to the uncertainties in thin clouds, broken clouds, and clear-sky pixels during remote sensing interpretation. Benefiting from data simulation, the method is suitable for the cloud detection of different types of multispectral data. A total of 62 Landsat-8 Operational Land Imagers (OLI), 25 Moderate Resolution Imaging Spectroradiometers (MODIS), and 20 Sentinel-2 satellite images acquired at different times and over different types of underlying surfaces, such as a high vegetation coverage, urban area, bare soil, water, and mountains, were used for cloud detection validation and quantitative analysis, and the cloud detection results were compared with the results from the function of the mask, MODIS cloud mask, support vector machine, and random forest. The comparison revealed that the CD-SLCNN method achieved the best performance, with a higher overall accuracy (95.6%, 95.36%, 94.27%) and mean intersection over union (77.82%, 77.94%, 77.23%) on the Landsat-8 OLI, MODIS, and Sentinel-2 data, respectively. The CD-SLCNN algorithm produced consistent results with a more accurate cloud contour on thick, thin, and broken clouds over a diverse underlying surface, and had a stable performance regarding bright surfaces, such as buildings, ice, and snow.
27

Almalki, Sultan, Nasser Assery, and Kaushik Roy. "An Empirical Evaluation of Online Continuous Authentication and Anomaly Detection Using Mouse Clickstream Data Analysis." Applied Sciences 11, no. 13 (June 30, 2021): 6083. http://dx.doi.org/10.3390/app11136083.

Abstract:
While the password-based authentication used in social networks, e-mail, e-commerce, and online banking is vulnerable to hackings, biometric-based continuous authentication systems have been used successfully to handle the rise in unauthorized accesses. In this study, an empirical evaluation of online continuous authentication (CA) and anomaly detection (AD) based on mouse clickstream data analysis is presented. This research started by gathering a set of online mouse-dynamics information from 20 participants by using software developed for collecting mouse information, extracting approximately 87 features from the raw dataset. In contrast to previous work, the efficiency of CA and AD was studied using different machine learning (ML) and deep learning (DL) algorithms, namely, decision tree classifier (DT), k-nearest neighbor classifier (KNN), random forest classifier (RF), and convolutional neural network classifier (CNN). User identification was determined by using three scenarios: Scenario A, a single mouse movement action; Scenario B, a single point-and-click action; and Scenario C, a set of mouse movement and point-and-click actions. The results show that each classifier is capable of distinguishing between an authentic user and a fraudulent user with a comparatively high degree of accuracy.
28

Anitha, M., V. Karpagam, and P. Tamije Selvy. "Diagnostic Framework for Automatic Classification and Visualization of Alzheimer’s Disease with Feature Extraction Using Wavelet Transform." NeuroQuantology 19, no. 7 (August 11, 2021): 84–95. http://dx.doi.org/10.14704/nq.2021.19.7.nq21088.

Abstract:
Alzheimer’s Disease (AD) is a serious disease that destroys brain and is classified as the most widespread type of dementia. Manual evaluation of image scans relies on visual reading and semi-quantitative investigation of various human brain sections, leading to wrong diagnoses. Neuroimaging plays a significant part in AD detection, using image processing approaches that succeed the drawback of traditional diagnosis methods. Feature extraction is done through Wavelet Transform (WT). Feature selection is an important step in machine learning, where best features set from all possible features is determined. Mutual Information based feature selection (MI) and Correlation-based Feature Selection (CFS) captures the ‘correlation’ between random variables. Machine Learning techniques are broadly used in a classification problem, as it is simple, effective mechanisms and capability to train to contribute intelligence to the arrangement. Classifiers used in this proposed work are Artificial Neural Network (ANN), Random Forest, Convolutional Neural Network (CNN), and Wavelet-based CNN. The superior ability of ANN is high-speed processing achieved through extensive parallel implementation, and this has emphasized necessity of research in this field. CNN has encouraged tackling this issue. This work proves that wavelet-based CNN performs better with a classification accuracy of 91.87%, the sensitivity of 0.94 for normal brain and 0.88 for AD affected brain, the positive predictive value of 0.91 for normal brain and 0.92 for AD affected brain, and F measure of 0.92 for normal brain and 0.90 for AD affected brain on ADNI MRI dataset of the human brain in detecting AD.
APA, Harvard, Vancouver, ISO, and other styles
29

Masad, Ihssan S., Amin Alqudah, Ali Mohammad Alqudah, and Sami Almashaqbeh. "A hybrid deep learning approach towards building an intelligent system for pneumonia detection in chest X-ray images." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5530. http://dx.doi.org/10.11591/ijece.v11i6.pp5530-5540.

Full text
Abstract:
Pneumonia is a major cause of death in children. In order to overcome the subjectivity and time consumption of the traditional detection of pneumonia from chest X-ray images, this work hypothesized that a hybrid deep learning system that combines a convolutional neural network (CNN) model with another type of classifier will improve the performance of the detection system. Three types of classifiers (support vector machine (SVM), k-nearest neighbor (KNN), and random forest (RF)) were used along with the traditional CNN classification system (Softmax) to automatically detect pneumonia from chest X-ray images. The performance of the hybrid systems was comparable to that of the traditional CNN model with Softmax in terms of accuracy, precision, and specificity, except for the RF hybrid system, which performed worse than the others. On the other hand, the KNN hybrid system had the best consumption time, followed by the SVM, Softmax, and lastly the RF system. However, this improvement in consumption time (up to 4-fold) came at the expense of sensitivity. A new hybrid artificial intelligence methodology for pneumonia detection has been implemented using small-sized chest X-ray images. The novel system achieved a very efficient performance with a short classification consumption time.
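The hybrid idea summarized above (a CNN as feature extractor feeding a classical classifier) can be sketched as follows. A pretrained MobileNetV2 backbone stands in for the paper's own CNN, and the array shapes, file names, and two-class labels are assumptions rather than details from the study.

```python
# Sketch of a hybrid CNN + SVM pipeline: CNN features, classical classifier.
# "chest_xrays.npy" (shape (N, 224, 224, 3)) and "labels.npy"
# (0 = normal, 1 = pneumonia) are hypothetical inputs.
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

images = np.load("chest_xrays.npy").astype("float32")
labels = np.load("labels.npy")

backbone = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")
features = backbone.predict(preprocess_input(images))   # one feature vector per image

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)
svm = SVC(kernel="rbf")
svm.fit(X_tr, y_tr)
print("hybrid CNN+SVM accuracy:", svm.score(X_te, y_te))
```

Replacing `SVC` with `KNeighborsClassifier` or `RandomForestClassifier` reproduces the other two hybrid variants compared in the paper.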
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, Pengfei, Fenghua Li, Rongjian Zhao, Ruishi Zhou, Lidong Du, Zhan Zhao, Xianxiang Chen, and Zhen Fang. "Real-Time Psychological Stress Detection According to ECG Using Deep Learning." Applied Sciences 11, no. 9 (April 23, 2021): 3838. http://dx.doi.org/10.3390/app11093838.

Full text
Abstract:
Today, excessive psychological stress has become a universal threat to humans. Stress can heavily affect work and study when a person is repeatedly exposed to high stress, and if that exposure lasts long enough, it can even cause cardiovascular disease and cancer. Therefore, both monitoring and managing stress are imperative to reduce the harmful outcomes of excessive psychological stress. Conventional monitoring methods first extract characteristics of the RR interval of an electrocardiogram (ECG) from the time domain and the frequency domain, then use machine learning models, such as SVM, random forest, and decision tree, to distinguish the level of stress. The biggest limitation of these methods is that at least one minute of ECG data and other signals are indispensable to ensure high accuracy of the results, which greatly affects the real-time application of the models. To achieve real-time detection of stress with high accuracy, we proposed a framework based on deep learning technology. The proposed monitoring framework is based on convolutional neural networks (CNN) and bidirectional long short-term memory (BiLSTM). To evaluate the performance of this network, we conducted experiments applying conventional methods. The data for the 34 subjects were collected on the server platform created by the group at the Institute of Psychology of the Chinese Academy of Sciences and our group. The accuracy of the proposed framework was up to 0.865 on three levels of stress using a 10 s ECG signal, a 0.228 improvement compared with conventional methods. Therefore, our proposed framework is more suitable for real-time applications.
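A minimal sketch of the kind of CNN + BiLSTM classifier described above is given below, assuming a 10 s single-lead ECG window sampled at 250 Hz (2500 samples) and three stress levels; the layer sizes are illustrative and not taken from the paper.

```python
# Illustrative CNN + bidirectional LSTM classifier for short ECG windows.
# Sampling rate, window length, and layer sizes are assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(2500, 1)),                  # 10 s of ECG at 250 Hz
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Bidirectional(layers.LSTM(64)),          # temporal context in both directions
    layers.Dense(3, activation="softmax"),          # three stress levels
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```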
APA, Harvard, Vancouver, ISO, and other styles
31

Lu, Yuzhen, and Renfu Lu. "Detection of Surface and Subsurface Defects of Apples Using Structured-Illumination Reflectance Imaging with Machine Learning Algorithms." Transactions of the ASABE 61, no. 6 (2018): 1831–42. http://dx.doi.org/10.13031/trans.12930.

Full text
Abstract:
Machine vision technology coupled with uniform illumination is now widely used for automatic sorting and grading of apples and other fruits, but it still does not have satisfactory performance for defect detection because of the large variety of defects, some of which are difficult to detect under uniform illumination. Structured-illumination reflectance imaging (SIRI) offers a new modality for imaging by using sinusoidally modulated structured illumination to obtain two sets of independent images: direct component (DC), which corresponds to conventional uniform illumination, and amplitude component (AC), which is unique for structured illumination. The objective of this study was to develop machine learning classification algorithms using DC and AC images and their combinations for enhanced detection of surface and subsurface defects of apples. A multispectral SIRI system with two phase-shifted sinusoidal illumination patterns was used to acquire images of ‘Delicious’ and ‘Golden Delicious’ apples with various types of surface and subsurface defects. DC and AC images were extracted through demodulation of the acquired images and were then enhanced using fast bi-dimensional empirical mode decomposition and subsequent image reconstruction. Defect detection algorithms were developed using random forest (RF), support vector machine (SVM), and convolutional neural network (CNN), for DC, AC, and ratio (AC divided by DC) images and their combinations. Results showed that AC images were superior to DC images for detecting subsurface defects, DC images were overall better than AC images for detecting surface defects, and ratio images were comparable to, or better than, DC and AC images for defect detection. The ensemble of DC, AC, and ratio images resulted in significantly better detection accuracies over using them individually. Among the three classifiers, CNN performed the best, with 98% detection accuracies for both varieties of apples, followed by SVM and RF. This research demonstrated that SIRI, coupled with a machine learning algorithm, can be a new, versatile, and effective modality for fruit defect detection. Keywords: Apple, Defect, Bi-dimensional empirical mode decomposition, Machine learning, Structured illumination.
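To illustrate the demodulation step that produces the DC, AC, and ratio images discussed above, the sketch below uses the standard three-phase (120 degree shift) demodulation formulas; note that the paper itself uses two phase-shifted patterns, so this is a simplified stand-in rather than the authors' exact procedure.

```python
# Illustrative demodulation of structured-illumination images into direct (DC)
# and amplitude (AC) components plus the ratio image (AC divided by DC).
# Assumes three images under sinusoidal patterns shifted by 2*pi/3.
import numpy as np

def demodulate(i1, i2, i3, eps=1e-6):
    dc = (i1 + i2 + i3) / 3.0
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    ratio = ac / (dc + eps)
    return dc, ac, ratio

# usage with synthetic data standing in for acquired apple images
rng = np.random.default_rng(0)
i1, i2, i3 = (rng.random((128, 128)) for _ in range(3))
dc, ac, ratio = demodulate(i1, i2, i3)
```

The resulting DC, AC, and ratio images would then be passed, individually or in combination, to the RF, SVM, or CNN classifiers compared in the study.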
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Yueting, Minzan Li, Ronghua Ji, Minjuan Wang, and Lihua Zheng. "Comparison of Soil Total Nitrogen Content Prediction Models Based on Vis-NIR Spectroscopy." Sensors 20, no. 24 (December 10, 2020): 7078. http://dx.doi.org/10.3390/s20247078.

Full text
Abstract:
Visible-near-infrared spectrum (Vis-NIR) spectroscopy technology is one of the most important methods for non-destructive and rapid detection of soil total nitrogen (STN) content. In order to find a practical way to build an STN content prediction model, three conventional machine learning methods and one deep learning approach are investigated and their predictive performances are compared and analyzed by using a public dataset called LUCAS Soil (19,019 samples). The three conventional machine learning methods include ordinary least square estimation (OLSE), random forest (RF), and extreme learning machine (ELM), while for the deep learning method, three different structures of convolutional neural network (CNN) incorporating the Inception module are constructed and investigated. In order to clarify the effectiveness of different pre-treatments on predicting STN content, the three conventional machine learning methods, combined with four pre-processing approaches (baseline correction, smoothing, dimensional reduction, and feature selection), are investigated, compared, and analyzed. The results indicate that the baseline-corrected and smoothed ELM model reaches practical precision (coefficient of determination (R2) = 0.89, root mean square error of prediction (RMSEP) = 1.60 g/kg, and residual prediction deviation (RPD) = 2.34), while among the three differently structured CNN models, the one with more 1 × 1 convolutions performs better (R2 = 0.93; RMSEP = 0.95 g/kg; and RPD = 3.85 in the optimal case). In addition, in order to evaluate the influence of dataset characteristics on the model, the LUCAS dataset was divided into different data subsets according to dataset size, organic carbon (OC) content, and country. The results show that the deep learning method is more effective and practical than conventional machine learning methods and, given enough data samples, can be used to build a robust STN content prediction model with high accuracy for the same type of soil with similar agricultural treatment.
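The pre-processing plus conventional regression route described above can be sketched as follows; the crude baseline removal, the Savitzky-Golay smoothing parameters, the random forest regressor (standing in for the paper's ELM), and the file names are all illustrative assumptions.

```python
# Illustrative sketch: baseline correction and smoothing of Vis-NIR spectra,
# then a random forest regression of soil total nitrogen (STN) content.
# "vis_nir_spectra.npy" (shape (N, n_bands)) and "stn_gkg.npy" are hypothetical.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

spectra = np.load("vis_nir_spectra.npy")
stn = np.load("stn_gkg.npy")                               # STN content in g/kg

corrected = spectra - spectra.min(axis=1, keepdims=True)   # crude baseline correction
smoothed = savgol_filter(corrected, window_length=11, polyorder=2, axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(smoothed, stn, test_size=0.25, random_state=1)
rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmsep = np.sqrt(mean_squared_error(y_te, pred))
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSEP = {rmsep:.2f} g/kg, "
      f"RPD = {y_te.std() / rmsep:.2f}")
```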
APA, Harvard, Vancouver, ISO, and other styles
33

Si, Xiuhua April, and Jinxiang Xi. "Deciphering Exhaled Aerosol Fingerprints for Early Diagnosis and Personalized Therapeutics of Obstructive Respiratory Diseases in Small Airways." Journal of Nanotheranostics 2, no. 3 (June 22, 2021): 94–117. http://dx.doi.org/10.3390/jnt2030007.

Full text
Abstract:
Respiratory diseases often show no apparent symptoms at their early stages and are usually diagnosed when permanent damage has been made to the lungs. A major site of lung pathogenesis is the small airways, which are highly challenging to examine with current techniques due to their location (inaccessible to biopsy) and size (below normal CT/MRI resolution). In this review, we present a new method for lung disease detection and treatment in small airways based on exhaled aerosols, whose patterns are uniquely related to the health of the lungs. Proof-of-concept studies are first presented in idealized lung geometries. We subsequently describe the recent developments in feature extraction and classification of the exhaled aerosol images to establish the relationship between the images and the underlying airway remodeling. Different feature extraction algorithms (aerosol density, fractal dimension, principal mode analysis, and dynamic mode decomposition) and machine learning approaches (support vector machine, random forest, and convolutional neural network) are elaborated upon. Finally, future studies and frequent questions related to clinical applications of the proposed aerosol breath testing are discussed from the authors’ perspective. The proposed breath testing has clinical advantages over conventional approaches: it is easy to perform, non-invasive, provides real-time feedback, and is promising for detecting symptomless lung diseases at early stages.
APA, Harvard, Vancouver, ISO, and other styles
34

Díaz-San Martín, Guillermo, Luis Reyes-González, Sergio Sainz-Ruiz, Luis Rodríguez-Cobo, and José M. López-Higuera. "Automatic Ankle Angle Detection by Integrated RGB and Depth Camera System." Sensors 21, no. 5 (March 9, 2021): 1909. http://dx.doi.org/10.3390/s21051909.

Full text
Abstract:
Depth cameras are developing widely. One of their main virtues is that, based on their data and by applying machine learning algorithms and techniques, it is possible to perform body tracking and make an accurate three-dimensional representation of body movement. Specifically, this paper will use the Kinect v2 device, which incorporates a random forest algorithm for 25 joints detection in the human body. However, although Kinect v2 is a powerful tool, there are circumstances in which the device’s design does not allow the extraction of such data or the accuracy of the data is low, as is usually the case with foot position. We propose a method of acquiring this data in circumstances where the Kinect v2 device does not recognize the body when only the lower limbs are visible, improving the ankle angle’s precision employing projection lines. Using a region-based convolutional neural network (Mask RCNN) for body recognition, raw data extraction for automatic ankle angle measurement has been achieved. All angles have been evaluated by inertial measurement units (IMUs) as gold standard. For the six tests carried out at different fixed distances between 0.5 and 4 m to the Kinect, we have obtained (mean ± SD) a Pearson’s coefficient, r = 0.89 ± 0.04, a Spearman’s coefficient, ρ = 0.83 ± 0.09, a root mean square error, RMSE = 10.7 ± 2.6 deg and a mean absolute error, MAE = 7.5 ± 1.8 deg. For the walking test, or variable distance test, we have obtained a Pearson’s coefficient, r = 0.74, a Spearman’s coefficient, ρ = 0.72, an RMSE = 6.4 deg and an MAE = 4.7 deg.
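The agreement statistics reported above (Pearson r, Spearman rho, RMSE, and MAE between camera-derived and IMU-derived ankle angles) can be computed as in the sketch below; the two short input series are placeholder values, not data from the study.

```python
# Agreement metrics between two time-aligned ankle-angle series (degrees).
import numpy as np
from scipy.stats import pearsonr, spearmanr

def agreement(camera_deg, imu_deg):
    camera_deg = np.asarray(camera_deg, dtype=float)
    imu_deg = np.asarray(imu_deg, dtype=float)
    r, _ = pearsonr(camera_deg, imu_deg)
    rho, _ = spearmanr(camera_deg, imu_deg)
    err = camera_deg - imu_deg
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    return r, rho, rmse, mae

r, rho, rmse, mae = agreement([10.2, 12.5, 15.1, 18.0], [9.8, 13.0, 14.6, 17.1])
print(f"r = {r:.2f}, rho = {rho:.2f}, RMSE = {rmse:.1f} deg, MAE = {mae:.1f} deg")
```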
APA, Harvard, Vancouver, ISO, and other styles
35

Alibabaei, Khadijeh, Pedro D. Gaspar, and Tânia M. Lima. "Modeling Soil Water Content and Reference Evapotranspiration from Climate Data Using Deep Learning Method." Applied Sciences 11, no. 11 (May 29, 2021): 5029. http://dx.doi.org/10.3390/app11115029.

Full text
Abstract:
In recent years, deep learning algorithms have been successfully applied in the development of decision support systems in various aspects of agriculture, such as yield estimation, crop diseases, weed detection, etc. Agriculture is the largest consumer of freshwater. Due to challenges such as lack of natural resources and climate change, an efficient decision support system for irrigation is crucial. Evapotranspiration and soil water content are the most critical factors in irrigation scheduling. In this paper, the ability of Long Short-Term Memory (LSTM) and Bidirectional LSTM (BLSTM) to model daily reference evapotranspiration and soil water content is investigated. The application of these techniques to predict these parameters was tested for three sites in Portugal. A single-layer BLSTM with 512 nodes was selected. Bayesian optimization was used to determine the hyperparameters, such as learning rate, decay, batch size, and dropout size. The model achieved mean square error values within the range of 0.014 to 0.056 and R2 values ranging from 0.96 to 0.98. A Convolutional Neural Network (CNN) model was added to the LSTM to investigate potential performance improvement. Performance dropped in all datasets due to the complexity of the model. The performance of the models was also compared with a CNN and the traditional machine learning algorithms Support Vector Regression and Random Forest. LSTM achieved the best performance. Finally, the impact of the loss function on the performance of the proposed models was investigated. The model with the mean square error as loss function performed better than the models with other loss functions.
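The selected single-layer BLSTM with 512 nodes can be sketched as below; the input window length, the number of climate features, and the dropout rate are assumptions, since the paper tunes such hyperparameters with Bayesian optimization.

```python
# Minimal sketch of a single-layer bidirectional LSTM regressor (512 units)
# for daily reference evapotranspiration or soil water content.
# Window length (30 days) and feature count (5 climate variables) are assumed.
from tensorflow.keras import layers, models

timesteps, n_features = 30, 5
model = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.Bidirectional(layers.LSTM(512)),
    layers.Dropout(0.2),        # dropout size would be tuned, e.g. by Bayesian optimization
    layers.Dense(1),            # predicted daily value
])
model.compile(optimizer="adam", loss="mse")   # MSE loss performed best in the study
model.summary()
```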
APA, Harvard, Vancouver, ISO, and other styles
36

Luo, Hongyu, Pierre-Alexandre Lee, Ieuan Clay, Martin Jaggi, and Valeria De Luca. "Assessment of Fatigue Using Wearable Sensors: A Pilot Study." Digital Biomarkers 4, no. 1 (November 26, 2020): 59–72. http://dx.doi.org/10.1159/000512166.

Full text
Abstract:
Background: Fatigue is a broad, multifactorial concept encompassing feelings of reduced physical and mental energy levels. Fatigue strongly impacts patient health-related quality of life across a huge range of conditions, yet, to date, tools available to understand fatigue are severely limited. Methods: After using a recurrent neural network-based algorithm to impute missing time series data from a multisensor wearable device, we compared supervised and unsupervised machine learning approaches to gain insights on the relationship between self-reported non-pathological fatigue and multimodal sensor data. Results: A total of 27 healthy subjects and 405 recording days were analyzed. Recorded data included continuous multimodal wearable sensor time series on physical activity, vital signs, and other physiological parameters, and daily questionnaires on fatigue. The best results were obtained when using the causal convolutional neural network model for unsupervised representation learning of multivariate sensor data, and random forest as a classifier trained on subject-reported physical fatigue labels (weighted precision of 0.70 ± 0.03 and recall of 0.73 ± 0.03). When using manually engineered features on sensor data to train our random forest (weighted precision of 0.70 ± 0.05 and recall of 0.72 ± 0.01), both physical activity (energy expenditure, activity counts, and steps) and vital signs (heart rate, heart rate variability, and respiratory rate) were important parameters to measure. Furthermore, vital signs contributed the most as top features for predicting mental fatigue compared to physical ones. These results support the idea that fatigue is a highly multimodal concept. Analysis of clusters from sensor data highlighted a digital phenotype indicating the presence of fatigue (95% of observations) characterized by a high intensity of physical activity. Mental fatigue followed similar trends but was less predictable. Potential future directions could focus on anomaly detection assuming longer individual monitoring periods. Conclusion: Taken together, these results are the first demonstration that multimodal digital data can be used to inform, quantify, and augment subjectively captured non-pathological fatigue measures.
APA, Harvard, Vancouver, ISO, and other styles
37

Mahum, Rabbia, Saeed Ur Rehman, Talha Meraj, Hafiz Tayyab Rauf, Aun Irtaza, Ahmed M. El-Sherbeeny, and Mohammed A. El-Meligy. "A Novel Hybrid Approach Based on Deep CNN Features to Detect Knee Osteoarthritis." Sensors 21, no. 18 (September 15, 2021): 6189. http://dx.doi.org/10.3390/s21186189.

Full text
Abstract:
In the recent era, various diseases have severely affected the lifestyle of individuals, especially adults. Among these, bone diseases, including Knee Osteoarthritis (KOA), have a great impact on quality of life. KOA is a knee joint problem mainly caused by decreased articular cartilage between the femur and tibia bones, producing severe joint pain, effusion, joint movement constraints and gait anomalies. To address these issues, this study presents a novel method for early-stage KOA detection using deep learning-based feature extraction and classification. Firstly, the input X-ray images are preprocessed, and then the Region of Interest (ROI) is extracted through segmentation. Secondly, features are extracted from preprocessed X-ray images containing knee joint space width using hybrid feature descriptors such as Convolutional Neural Network (CNN) through Local Binary Patterns (LBP) and CNN using Histogram of Oriented Gradients (HOG). Low-level features are computed by HOG, while texture features are computed employing the LBP descriptor. Lastly, multi-class classifiers, that is, Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbour (KNN), are used for the classification of KOA according to the Kellgren–Lawrence (KL) system. The Kellgren–Lawrence system consists of Grade I, Grade II, Grade III, and Grade IV. Experimental evaluation is performed on various combinations of the proposed framework. The experimental results show that the HOG feature descriptor provides approximately 97% accuracy for the early detection and classification of KOA for all four grades of KL.
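The handcrafted-descriptor branch of this pipeline can be sketched as below: HOG features extracted from knee X-ray regions of interest and classified into the four KL grades with a multi-class SVM. Image sizes, HOG parameters, and file names are assumptions, and the other classifiers (RF, KNN) could be swapped in the same way.

```python
# Illustrative HOG + SVM classification of knee X-ray ROIs into KL grades.
# "knee_rois.npy" (shape (N, 128, 128), grayscale) and "kl_grades.npy"
# (grades I-IV encoded as 0..3) are hypothetical inputs.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rois = np.load("knee_rois.npy")
grades = np.load("kl_grades.npy")

X = np.array([hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2)) for img in rois])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, grades, test_size=0.2, stratify=grades, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("KL-grade accuracy:", clf.score(X_te, y_te))
```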
APA, Harvard, Vancouver, ISO, and other styles
38

Mujahid, Muhammad, Ernesto Lee, Furqan Rustam, Patrick Bernard Washington, Saleem Ullah, Aijaz Ahmad Reshi, and Imran Ashraf. "Sentiment Analysis and Topic Modeling on Tweets about Online Education during COVID-19." Applied Sciences 11, no. 18 (September 12, 2021): 8438. http://dx.doi.org/10.3390/app11188438.

Full text
Abstract:
Amid the worldwide COVID-19 pandemic lockdowns, the closure of educational institutes led to an unprecedented rise in online learning. To limit the impact of COVID-19 and obstruct its spread, educational institutions closed their campuses immediately and academic activities were moved to e-learning platforms. The effectiveness of e-learning is a critical concern for both students and parents, specifically in terms of its suitability to students and teachers and its technical feasibility with respect to different social scenarios. Such concerns must be reviewed from several aspects before e-learning can be adopted at such a large scale. This study endeavors to investigate the effectiveness of e-learning by analyzing the sentiments of people about e-learning. Due to the rise of social media as an important mode of communication recently, people’s views can be found on platforms such as Twitter, Instagram, Facebook, etc. This study uses a Twitter dataset containing 17,155 tweets about e-learning. Machine learning and deep learning approaches have shown their suitability, capability, and potential for image processing, object detection, and natural language processing tasks, and text analysis is no exception. Machine learning approaches have been largely used both for annotation and for text and sentiment analysis. Keeping in view the adequacy and efficacy of machine learning models, this study adopts TextBlob, VADER (Valence Aware Dictionary for Sentiment Reasoning), and SentiWordNet to analyze the polarity and subjectivity score of tweets’ text. Furthermore, bearing in mind the fact that machine learning models display high classification accuracy, various machine learning models have been used for sentiment classification. Two feature extraction techniques, TF-IDF (Term Frequency-Inverse Document Frequency) and BoW (Bag of Words), have been used to effectively build and evaluate the models. All the models have been evaluated in terms of various important performance metrics such as accuracy, precision, recall, and F1 score. The results reveal that the random forest and support vector machine classifiers achieve the highest accuracy of 0.95 when used with BoW features. Performance comparison is carried out for the results of TextBlob, VADER, and SentiWordNet, as well as the classification results of machine learning models and deep learning models such as CNN (Convolutional Neural Network), LSTM (Long Short Term Memory), CNN-LSTM, and Bi-LSTM (Bidirectional-LSTM). Additionally, topic modeling is performed to find the problems associated with e-learning, which indicates that uncertainty of campus opening dates, children’s difficulty in grasping online education, and the lack of efficient networks for online education are the top three problems.
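A compact sketch of the annotation-plus-classification pipeline described above is given below: VADER assigns a sentiment label to each tweet and a random forest is then trained on Bag of Words features. The toy tweets, threshold, and model settings are illustrative assumptions (and the VADER lexicon must be downloaded once via nltk).

```python
# Illustrative VADER labeling + BoW features + random forest sentiment classifier.
# Requires: nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

tweets = [
    "online classes are exhausting and stressful",
    "loving the flexibility of e-learning",
    "the network keeps dropping during lectures",
    "great lecture today, the online format works well",
]

sia = SentimentIntensityAnalyzer()
labels = ["positive" if sia.polarity_scores(t)["compound"] >= 0 else "negative"
          for t in tweets]

bow = CountVectorizer()                  # Bag of Words features
X = bow.fit_transform(tweets)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```

Swapping `CountVectorizer` for `TfidfVectorizer` gives the TF-IDF variant compared in the paper.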
APA, Harvard, Vancouver, ISO, and other styles
39

Kerle, N., F. Nex, D. Duarte, and A. Vetrivel. "UAV-BASED STRUCTURAL DAMAGE MAPPING – RESULTS FROM 6 YEARS OF RESEARCH IN TWO EUROPEAN PROJECTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W8 (August 21, 2019): 187–94. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w8-187-2019.

Full text
Abstract:
Structural disaster damage detection and characterisation is one of the oldest remote sensing challenges, and the utility of virtually every type of active and passive sensor deployed on various air- and spaceborne platforms has been assessed. The proliferation and growing sophistication of UAV in recent years has opened up many new opportunities for damage mapping, due to the high spatial resolution, the resulting stereo images and derivatives, and the flexibility of the platform. We have addressed the problem in the context of two European research projects, RECONASS and INACHUS. In this paper we synthesize and evaluate the progress of 6 years of research focused on advanced image analysis that was driven by progress in computer vision, photogrammetry and machine learning, but also by constraints imposed by the needs of first responder and other civil protection end users. The projects focused on damage to individual buildings caused by seismic activity but also explosions, and our work centred on the processing of 3D point cloud information acquired from stereo imagery. Initially focusing on the development of both supervised and unsupervised damage detection methods built on advanced texture features and basic classifiers such as Support Vector Machine and Random Forest, the work moved on to the use of deep learning. In particular the coupling of image-derived features and 3D point cloud information in a Convolutional Neural Network (CNN) proved successful in detecting also subtle damage features. In addition to the detection of standard rubble and debris, CNN-based methods were developed to detect typical façade damage indicators, such as cracks and spalling, including with a focus on multi-temporal and multi-scale feature fusion. We further developed a processing pipeline and mobile app to facilitate near-real time damage mapping. The solutions were tested in a number of pilot experiments and evaluated by a variety of stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
40

Matrone, Francesca, and Massimo Martini. "Transfer learning and performance enhancement techniques for deep semantic segmentation of built heritage point clouds." Virtual Archaeology Review 12, no. 25 (July 14, 2021): 73. http://dx.doi.org/10.4995/var.2021.15318.

Full text
Abstract:
The growing availability of three-dimensional (3D) data, such as point clouds, coming from Light Detection and Ranging (LiDAR), Mobile Mapping Systems (MMSs) or Unmanned Aerial Vehicles (UAVs), provides the opportunity to rapidly generate 3D models to support the restoration, conservation, and safeguarding activities of cultural heritage (CH). The so-called scan-to-BIM process can, in fact, benefit from such data, and they can themselves be a source for further analyses or activities on the archaeological and built heritage. There are several ways to exploit this type of data, such as Historic Building Information Modelling (HBIM), mesh creation, rasterisation, classification, and semantic segmentation. The latter, referring to point clouds, is a trending topic not only in the CH domain but also in other fields like autonomous navigation, medicine or retail. Precisely in these sectors, the task of semantic segmentation has been mainly exploited and developed with artificial intelligence techniques. In particular, machine learning (ML) algorithms, and their deep learning (DL) subset, are increasingly applied and have established a solid state-of-the-art in the last half-decade. However, applications of DL techniques on heritage point clouds are still scarce; therefore, we propose to tackle this framework within the built heritage field. Starting from some previous tests with the Dynamic Graph Convolutional Neural Network (DGCNN), in this contribution close attention is paid to: i) the investigation of fine-tuned models, used as a transfer learning technique, ii) the combination of external classifiers, such as Random Forest (RF), with the artificial neural network, and iii) the evaluation of the data augmentation results for the domain-specific ArCH dataset. Finally, after taking into account the main advantages and criticalities, considerations are made on the possibility to profit by this methodology also for non-programming or domain experts. Highlights: (i) Semantic segmentation of built heritage point clouds through deep neural networks can provide performances comparable to those of more consolidated state-of-the-art ML classifiers. (ii) Transfer learning approaches, such as fine-tuning, can considerably reduce computational time also for CH domain-specific datasets, as well as improve metrics for some challenging categories (i.e. windows or mouldings). (iii) Data augmentation techniques do not significantly improve overall performances.
APA, Harvard, Vancouver, ISO, and other styles
41

Tontini, Gian Eugenio, Alessandro Rimondi, Marta Vernero, Helmut Neumann, Maurizio Vecchi, Cristina Bezzio, and Flaminia Cavallaro. "Artificial intelligence in gastrointestinal endoscopy for inflammatory bowel disease: a systematic review and new horizons." Therapeutic Advances in Gastroenterology 14 (January 2021): 175628482110177. http://dx.doi.org/10.1177/17562848211017730.

Full text
Abstract:
Introduction: Since the advent of artificial intelligence (AI) in clinical studies, luminal gastrointestinal endoscopy has made great progress, especially in the detection and characterization of neoplastic and preneoplastic lesions. Several studies have recently shown the potential of AI-driven endoscopy for the investigation of inflammatory bowel disease (IBD). This systematic review provides an overview of the current position and future potential of AI in IBD endoscopy. Methods: A systematic search was carried out in PubMed and Scopus up to 2 December 2020 using the following search terms: artificial intelligence, machine learning, computer-aided, inflammatory bowel disease, ulcerative colitis (UC), Crohn’s disease (CD). All studies on human digestive endoscopy were included. A qualitative analysis and a narrative description were performed for each selected record according to the Joanna Briggs Institute methodologies and the PRISMA statement. Results: Of 398 identified records, 18 were ultimately included. Two-thirds of these (12/18) were published in 2020 and most were cross-sectional studies (15/18). No relevant bias at the study level was reported, although the risk of publication bias across studies cannot be ruled out at this early stage. Eleven records dealt with UC, five with CD and two with both. Most of the AI systems involved convolutional neural network, random forest and deep neural network architecture. Most studies focused on capsule endoscopy readings in CD (n = 5) and on the AI-assisted assessment of mucosal activity in UC (n = 10) for automated endoscopic scoring or real-time prediction of histological disease. Discussion: AI-assisted endoscopy in IBD is a rapidly evolving research field with promising technical results and additional benefits when tested in an experimental clinical scenario. External validation studies being conducted in large and prospective cohorts in real-life clinical scenarios will help confirm the added value of AI in assessing UC mucosal activity and in CD capsule reading. Plain language summary: Artificial intelligence for inflammatory bowel disease endoscopy. Artificial intelligence (AI) is a promising technology in many areas of medicine. In recent years, AI-assisted endoscopy has been introduced into several research fields, including inflammatory bowel disease (IBD) endoscopy, with promising applications that have the potential to revolutionize clinical practice and gastrointestinal endoscopy. We have performed the first systematic review of AI and its application in the field of IBD and endoscopy. A formal process of paper selection and analysis resulted in the assessment of 18 records. Most of these (12/18) were published in 2020 and were cross-sectional studies (15/18). No relevant biases were reported. All studies showed positive results concerning the novel technology evaluated, so the risk of publication bias cannot be ruled out at this early stage. Eleven records dealt with UC, five with CD and two with both. Most studies focused on capsule endoscopy reading in CD patients (n = 5) and on AI-assisted assessment of mucosal activity in UC patients (n = 10) for automated endoscopic scoring and real-time prediction of histological disease. We found that AI-assisted endoscopy in IBD is a rapidly growing research field. All studies indicated promising technical results. When tested in an experimental clinical scenario, AI-assisted endoscopy showed it could potentially improve the management of patients with IBD. Confirmatory evidence from real-life clinical scenarios should be obtained to verify the added value of AI-assisted IBD endoscopy in assessing UC mucosal activity and in CD capsule reading.
APA, Harvard, Vancouver, ISO, and other styles
42

"Deep Learning Model to Analyze Customer’s Satisfaction." International Journal of Engineering and Advanced Technology 9, no. 4 (April 30, 2020): 1709–14. http://dx.doi.org/10.35940/ijeat.c6610.049420.

Full text
Abstract:
Nowadays, measuring customer satisfaction is an important strategic tool for companies, and many manual methods exist to measure it. However, the results have not been effective and efficient. In this paper, we propose a new method for facial emotion detection to recognize customer satisfaction using a deep learning model. We used a convolutional neural network to detect facial key points. These key points help us to extract geometric features from customers’ emotional faces. Indeed, we computed distances between the neutral face and the face showing negative or positive feedback. After that, we classified these distances by using Support Vector Machine (SVM), KNN, Random Forest, and Decision Tree classifiers. To evaluate the performance of our approach, we tested our algorithm on the FACEDB and JAFFE datasets. We found that SVM is the best-performing classifier, obtaining 96% accuracy on the FACEDB dataset and 95% on the JAFFE dataset.
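The geometric-feature idea summarized above (distances between CNN-detected facial keypoints on a neutral face and an expressive face, then a classical classifier) can be sketched as follows; the synthetic keypoints, labels, and array sizes are placeholders for illustration only.

```python
# Illustrative sketch: per-keypoint displacement distances between a neutral
# face and an expressive face, classified with an SVM. Data are synthetic.
import numpy as np
from sklearn.svm import SVC

def displacement_features(neutral_kpts, expr_kpts):
    """Both arrays have shape (n_keypoints, 2); returns one distance per keypoint."""
    return np.linalg.norm(expr_kpts - neutral_kpts, axis=1)

rng = np.random.default_rng(0)
n_pairs, n_keypoints = 40, 15
neutral = rng.random((n_pairs, n_keypoints, 2))
expressive = neutral + rng.normal(scale=0.05, size=neutral.shape)   # synthetic displacements

X = np.array([displacement_features(n, e) for n, e in zip(neutral, expressive)])
y = rng.integers(0, 2, size=n_pairs)    # 0 = negative, 1 = positive feedback (synthetic)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy on synthetic data:", clf.score(X, y))
```

In practice the keypoints would come from a facial-landmark CNN, and KNN, random forest, or decision tree classifiers could be substituted for the SVM as in the paper.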
APA, Harvard, Vancouver, ISO, and other styles
43

Gregoire, J. M., C. Gilon, S. Carlier, and H. Bersini. "Unravelling the black box of machine learning for atrial fibrillation forecast: role of heart rate variability and of premature beats." European Heart Journal 41, Supplement_2 (November 1, 2020). http://dx.doi.org/10.1093/ehjci/ehaa946.0671.

Full text
Abstract:
Background: The identification of patients still in sinus rhythm who will present an atrial fibrillation (AF) episode one month later is possible using machine learning (ML) techniques. However, these new ML algorithms do not provide any relevant information about the underlying pathophysiology. Purpose: To compare the predictive performance for forecasting AF between a machine learning algorithm and other parameters whose pathophysiological mechanisms are known to play a role in the triggering of arrhythmias, i.e. the count of premature beats (PB) and heart rate variability (HRV) parameters. Material and methods: We conducted a retrospective study from an outpatient clinic. 10,484 Holter ECG recordings were screened, and 250 analysable AF onsets were labelled. We developed a deep neural network model composed of convolutional neural network layers and bidirectional gated recurrent units as recurrent neural network layers, trained to forecast paroxysmal AF episodes from RR interval variations. This model works like a black box. For comparison purposes, we used a random forest (RF) model to obtain forecast results using HRV parameters with and without PB; this model allows the evaluation of the relevance of the HRV parameters and PB used for the forecast. We calculated the area under the receiver operating characteristic curve for the different time windows, counted in RR intervals, before the AF onset. Results: As shown in the table, the forecasting value of the deep neural network model was not superior to the random forest algorithm. The predictive value of both decreased when analyzing RR intervals further away from the onset of AF. Conclusions: These results suggest that HRV plays a predominant role in triggering AF episodes and that premature beats could add minor information. Moreover, the closer the window to AF onset, the better the accuracy, regardless of the method used. Such detection algorithms, once implemented in pacemakers, might prove useful to prevent AF onset by changing the pacing sequence while patients are still in sinus rhythm; however, this remains to be demonstrated. Funding Acknowledgement: Type of funding source: None.
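The interpretable route compared in this abstract (hand-computed HRV parameters plus a premature-beat count fed to a random forest) can be sketched as below; the feature definitions, the crude premature-beat heuristic, and the synthetic RR windows and labels are illustrative assumptions.

```python
# Illustrative sketch: time-domain HRV features (mean RR, SDNN, RMSSD) plus a
# crude premature-beat count, classified with a random forest. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hrv_features(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    premature = int(np.sum(np.abs(diffs) > 0.2 * rr[:-1]))   # rough premature-beat proxy
    return [rr.mean(), rr.std(), rmssd, premature]

rng = np.random.default_rng(1)
windows = [rng.normal(800, 50, size=300) for _ in range(100)]   # synthetic RR windows (ms)
X = np.array([hrv_features(w) for w in windows])
y = rng.integers(0, 2, size=100)   # 1 = AF onset follows, 0 = stays in sinus rhythm (synthetic)

clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
```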
APA, Harvard, Vancouver, ISO, and other styles
44

Sepulvene, Luis, Isabela Drummond, Bruno Kuehne, Rafael Frinhani, Dionisio Leite Filho, Maycon Peixoto, Stephan Reiff-Marganiec, and Bruno Batista. "Performance Evaluation of Machine Learning Techniques for Fault Diagnosis in Vehicle Fleet Tracking Modules." Computer Journal, May 14, 2021. http://dx.doi.org/10.1093/comjnl/bxab047.

Full text
Abstract:
With Industry 4.0, data-based approaches are in vogue. However, extracting the essential features is not a trivial task and greatly influences the final result. There is also a need for specialized system knowledge to monitor the environment and diagnose faults. In this context, the diagnosis of faults is significant, for example, in a vehicle fleet monitoring system, since it is possible to diagnose faults even before the customer is aware of the fault, minimizing the maintenance costs of the modules. In this paper, several models using machine learning (ML) techniques were applied and analyzed during the fault diagnosis process in vehicle fleet tracking modules. Two approaches were proposed, ‘With Knowledge’ and ‘Without Knowledge’, to explore the dataset using ML techniques to generate classifiers that can assist in the fault diagnosis process. The ‘With Knowledge’ approach performs the feature extraction manually, using the ML techniques random forest, naive Bayes, support vector machine and Multi-Layer Perceptron; on the other hand, the ‘Without Knowledge’ approach performs automatic feature extraction through a convolutional neural network. The results showed that the proposed approaches are promising. The best models with manual feature extraction obtained a precision of 99.76% for fault detection and 99.68% for fault detection and isolation in the provided dataset. The best models performing automatic feature extraction obtained, respectively, 88.43% and 54.98% for detection and for detection and isolation of failures.
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Pei, Mohamed Abdel-Aty, and Zubayer Islam. "Driving Maneuvers Detection using Semi-Supervised Long Short-Term Memory and Smartphone Sensors." Transportation Research Record: Journal of the Transportation Research Board, April 28, 2021, 036119812110074. http://dx.doi.org/10.1177/03611981211007483.

Full text
Abstract:
Driving maneuvers detection is an important component of proactive traffic safety management and connected vehicle systems. Most of the existing studies used supervised learning concepts to train their models with labeled data. These methods achieved promising results but were limited by the heavy dependence on the labeled data. With the development of mobile sensing technologies, massive traffic-related data can be efficiently collected by mobile devices (e.g., smartphones, tablets, etc.). Considering the high costs of labeling data, this paper proposed a semi-supervised deep learning method to learn from the unlabeled data. Data from a smartphone’s accelerometer and gyroscope were collected by different drivers with a variety of smartphones, vehicles, and locations. Three long short-term memory (LSTM) models were trained with the proposed semi-supervised learning algorithm. Experimental results indicated that the proposed semi-supervised LSTM could learn from the unlabeled data and achieve outstanding results with only a small portion of the labeled data. Using much fewer labeled data, semi-supervised LSTM could achieve similar results compared with the supervised method. Moreover, the proposed method outperformed other machine learning methods (e.g., convolutional neural network, XGBoost, random forest) on precision, recall, F1-score, and area under the curve. As more and more traffic data become available in the future, the proposed method is expected to make use of the undiscovered potential of the massive unlabeled data.
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Mengya, Haiyan He, Guorong Huang, Bo Lin, Huiyan Tian, Ke Xia, Changjing Yuan, Xinyu Zhan, Yang Zhang, and Weiling Fu. "A Novel and Rapid Serum Detection Technology for Non-Invasive Screening of Gastric Cancer Based on Raman Spectroscopy Combined With Different Machine Learning Methods." Frontiers in Oncology 11 (September 27, 2021). http://dx.doi.org/10.3389/fonc.2021.665176.

Full text
Abstract:
Gastric cancer (GC) is the fifth most common cancer in the world and a serious threat to human health. Due to its high morbidity and mortality, a simple, rapid and accurate early screening method for GC is urgently needed. In this study, the potential of Raman spectroscopy combined with different machine learning methods was explored to distinguish serum samples from GC patients and healthy controls. Serum Raman spectra were collected from 109 patients with GC (including 35 in stage I, 14 in stage II, 35 in stage III, and 25 in stage IV) and 104 healthy volunteers matched for age, presenting for a routine physical examination. We analyzed the difference in serum metabolism between GC patients and healthy people through a comparative study of the average Raman spectra of the two groups. Four machine learning methods (one-dimensional convolutional neural network, random forest, support vector machine, and K-nearest neighbor) were used to classify the two sets of Raman spectral data. The classification model was established by using 70% of the data as a training set and 30% as a test set. Using unseen data to test the model, the RF model yielded an accuracy of 92.8%, with a sensitivity of 94.7% and a specificity of 90.8%. The performance of the RF model was further confirmed by the receiver operating characteristic (ROC) curve, with an area under the curve (AUC) of 0.9199. This exploratory work shows that serum Raman spectroscopy combined with RF has great potential in the machine-assisted classification of GC, and is expected to provide a non-destructive and convenient technology for the screening of GC patients.
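A sketch of the classification and evaluation steps described above is given below: a random forest separating GC and control spectra on a 70/30 split, with accuracy, sensitivity, specificity, and ROC AUC reported. The spectra files and class encoding are assumptions for illustration.

```python
# Illustrative random forest classification of serum Raman spectra with ROC AUC.
# "serum_raman.npy" (shape (N, n_wavenumbers)) and "gc_labels.npy"
# (1 = gastric cancer, 0 = healthy control) are hypothetical inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

spectra = np.load("serum_raman.npy")
labels = np.load("gc_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(
    spectra, labels, test_size=0.3, stratify=labels, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

prob = rf.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, rf.predict(X_te)).ravel()
print("accuracy:", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y_te, prob))
```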
APA, Harvard, Vancouver, ISO, and other styles