
Journal articles on the topic 'Classifier paradigms'



Consult the top 50 journal articles for your research on the topic 'Classifier paradigms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Zhao, Xianfeng, Jie Zhu, and Haibo Yu. "On More Paradigms of Steganalysis." International Journal of Digital Crime and Forensics 8, no. 2 (April 2016): 1–15. http://dx.doi.org/10.4018/ijdcf.2016040101.

Abstract:
Up to now, most research on steganalysis has concentrated on one extreme case: a priori knowledge of the embedding method and cover media is assumed to be known at the classifier-training and even feature-design stage. However, real-world steganalysis is done with different levels of such knowledge, so there can be various paradigms for carrying it out. Although some researchers have addressed these situations, a systematic approach to defining the various paradigms is still lacking. In this paper, the authors give such an approach by first defining four extreme paradigms and then defining the rest among them. Each paradigm is associated with two sets of assumed a priori knowledge, about the steganographic algorithm and the cover media respectively, and each corresponds to a particular case of steganalysis. We will also see that different paradigms can have very different aims, so their designs may vary.
2

Martišius, Ignas, and Robertas Damaševičius. "A Prototype SSVEP Based Real Time BCI Gaming System." Computational Intelligence and Neuroscience 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/3861425.

Abstract:
Although brain-computer interface technology is mainly designed with disabled people in mind, it can also benefit healthy subjects, for example in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system based on the steady-state visually evoked potentials (SSVEP) paradigm and the Emotiv EPOC headset. An online target-shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system uses the wave atom transform for feature extraction, achieving an average accuracy of 78.2% with a linear discriminant analysis classifier, 79.3% with a support vector machine classifier with a linear kernel, and 80.5% with a support vector machine classifier with a radial basis function kernel.
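The SSVEP principle behind this kind of system can be illustrated with a toy decoder (not the paper's wave-atom-plus-classifier pipeline): the attended flicker frequency shows up as elevated spectral power at that frequency and its harmonics, so a minimal sketch simply compares band power at the candidate stimulation frequencies.

```python
import numpy as np

def ssvep_decode(eeg, fs, stim_freqs, harmonics=2):
    """Pick the stimulation frequency whose harmonic-summed spectral
    power in the EEG segment is largest."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    scores = []
    for f0 in stim_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):
            idx = np.argmin(np.abs(freqs - h * f0))  # nearest FFT bin
            score += spectrum[idx]
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]

# Synthetic check: a 10 Hz target buried in noise.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
print(ssvep_decode(eeg, fs, [8.0, 10.0, 12.0]))  # -> 10.0
```

Real systems classify features such as wave atom coefficients instead of raw band power, but the underlying cue is the same.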
3

Xu, Minpeng, Jing Liu, Long Chen, Hongzhi Qi, Feng He, Peng Zhou, Baikun Wan, and Dong Ming. "Incorporation of Inter-Subject Information to Improve the Accuracy of Subject-Specific P300 Classifiers." International Journal of Neural Systems 26, no. 03 (April 7, 2016): 1650010. http://dx.doi.org/10.1142/s0129065716500106.

Abstract:
Although inter-subject information has been demonstrated to be effective for rapid calibration of the P300-based brain–computer interface (BCI), it has never been comprehensively tested to see whether the incorporation of heterogeneous data can enhance accuracy. This study aims to improve the subject-specific P300 classifier by adding other subjects' data. A classifier calibration strategy, weighted ensemble learning generic information (WELGI), was developed, in which elementary classifiers were constructed using both intra- and inter-subject information and then integrated into a strong classifier with a weight assessment. 55 subjects were recruited to spell 20 characters offline using the conventional P300-based BCI, i.e. the P300 speller. Four different metrics, the P300 accuracy and precision, the round accuracy, and the character accuracy, were evaluated for a comprehensive investigation. The results revealed that a classifier constructed on the training dataset combined with other subjects' data was significantly superior to one without the inter-subject information. Therefore, WELGI is an effective classifier calibration strategy which uses inter-subject information to improve the accuracy of subject-specific P300 classifiers, and it could also be applied to other BCI paradigms.
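The weighted-ensemble idea behind WELGI can be sketched generically: each member classifier votes, and votes are weighted by the member's estimated reliability. The weights and votes below are illustrative placeholders, not the paper's actual weight-assessment scheme.

```python
import numpy as np

def weighted_ensemble_predict(member_preds, weights):
    """Weighted majority vote. member_preds is (n_members, n_trials)
    of 0/1 votes; weights holds per-member reliability, e.g. each
    member's validation accuracy."""
    member_preds = np.asarray(member_preds, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    score = (w * member_preds).sum(axis=0) / w.sum()
    return (score >= 0.5).astype(int)

# Two reliable members (one intra-, one inter-subject) outvote a weak one.
preds = [[1, 0, 1, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 1]]
weights = [0.9, 0.8, 0.5]
votes = weighted_ensemble_predict(preds, weights)
print(votes)  # -> [1 0 1 1]
```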
4

Govindarajan, M., and RM Chandrasekaran. "A Hybrid Multilayer Perceptron Neural Network for Direct Marketing." International Journal of Knowledge-Based Organizations 2, no. 3 (July 2012): 63–73. http://dx.doi.org/10.4018/ijkbo.2012070104.

Abstract:
Data mining is the use of algorithms to extract the information and patterns derived by the knowledge discovery in databases process. It is often referred to as supervised learning because the classes are determined before examining the data. In many data mining applications that address classification problems, feature and model selection are key tasks: appropriate input features of the classifier must be selected from a given set of possible features, and the structural parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes simultaneous feature selection and model selection for Multilayer Perceptron (MLP) classifiers. To reduce the optimization effort, various techniques are integrated that accelerate and improve the classifier significantly. The feasibility and benefits of the proposed approach are demonstrated on a data mining problem: direct marketing in customer relationship management. It is shown that, compared to an earlier MLP technique, the run time is reduced on both the learning and validation data for the proposed MLP classifiers. Similarly, the error rate is relatively low on both the learning and validation data of the direct marketing dataset. The algorithm is independent of specific applications, so many of its ideas and solutions can be transferred to other classifier paradigms.
5

Yenkikar, Anuradha, C. Narendra Babu, and D. Jude Hemanth. "Semantic relational machine learning model for sentiment analysis using cascade feature selection and heterogeneous classifier ensemble." PeerJ Computer Science 8 (September 20, 2022): e1100. http://dx.doi.org/10.7717/peerj-cs.1100.

Abstract:
The exponential rise of social media via microblogging sites like Twitter has sparked curiosity in sentiment analysis that exploits user feedback towards a targeted product or service. Considering its significance in business intelligence and decision-making, numerous efforts have been made in this area. However, a lack of dictionaries, unannotated data, large-scale unstructured data, and low accuracies have plagued these approaches. Sentiment classification through classifier ensembles has also been underexplored in the literature. In this article, we propose a Semantic Relational Machine Learning (SRML) model that automatically classifies the sentiment of tweets by using a classifier ensemble and optimal features. The model employs the Cascaded Feature Selection (CFS) strategy, a novel statistical assessment approach based on the Wilcoxon rank sum test, a univariate logistic-regression-assisted significant predictor test, and a cross-correlation test. It further uses word2vec-based continuous bag-of-words and n-gram feature extraction in conjunction with SentiWordNet to find optimal features for classification. We experiment on six public Twitter sentiment datasets, the STS-Gold dataset, the Obama-McCain Debate (OMD) dataset, the healthcare reform (HCR) dataset and SemEval2017 Tasks 4A, 4B and 4C, with a heterogeneous classifier ensemble comprising fourteen individual classifiers from different paradigms. Results indicate that CFS helps attain higher classification accuracy with up to 50% fewer features than the count vectorizer approach. In the intra-model performance assessment, the Artificial Neural Network-Gradient Descent (ANN-GD) classifier performs comparatively better than the other individual classifiers, but the Best Trained Ensemble (BTE) strategy outperforms them all on every metric.
In the inter-model performance assessment against existing state-of-the-art systems, the proposed model achieves higher accuracy and outperforms more accomplished models employing quantum-inspired sentiment representation (QSR), transformer-based methods like BERT, BERTweet and RoBERTa, and ensemble techniques. The research thus provides critical insights into implementing a similar strategy to build a more generic and robust expert system for sentiment analysis that can be leveraged across industries.
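The Wilcoxon rank-sum stage of a cascade like CFS can be sketched as a univariate filter: keep the feature columns whose class-conditional distributions differ significantly. This is a generic illustration using the normal approximation of the rank-sum statistic, not the authors' full three-test cascade.

```python
import numpy as np

def rank_sum_z(x, y):
    """Normal-approximation z-statistic of the Wilcoxon rank-sum test
    (no tie correction): how separated the two samples' ranks are."""
    n1, n2 = len(x), len(y)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
    w = ranks[:n1].sum()                       # rank sum of sample x
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mean) / sd

def select_features(X_pos, X_neg, z_thresh=1.96):
    """Keep columns whose class-conditional distributions differ
    significantly under the rank-sum test (|z| > z_thresh)."""
    return [j for j in range(X_pos.shape[1])
            if abs(rank_sum_z(X_pos[:, j], X_neg[:, j])) > z_thresh]

rng = np.random.default_rng(1)
X_pos = rng.standard_normal((40, 3))
X_pos[:, 0] += 2.0          # make only column 0 informative
X_neg = rng.standard_normal((40, 3))
selected = select_features(X_pos, X_neg)
print(selected)  # column 0 is among the selected features
```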
6

Babu, G. Stalin, et al. "Exploiting of Classification Paradigms for Early diagnosis of Alzheimer’s disease." Information Technology in Industry 9, no. 2 (March 25, 2021): 281–88. http://dx.doi.org/10.17762/itii.v9i2.345.

Abstract:
Alzheimer's disease is an incurable neurodegenerative disorder that ordinarily affects the aged population. Coherent automated assessment methods are essential for diagnosing Alzheimer's disease early from distinct imaging modalities using machine learning. This article explores various feature extraction and classification methods proposed by researchers for early detection of AD, and proposes a modern predictive model that uses voxel-based texture analysis of brain images to extract features and an optimized Deep Convolutional Neural Network (DCNN) classifier to enhance accuracy.
7

Zhang, Yang, and Peter I. Rockett. "A Generic Multi-dimensional Feature Extraction Method Using Multiobjective Genetic Programming." Evolutionary Computation 17, no. 1 (March 2009): 89–115. http://dx.doi.org/10.1162/evco.2009.17.1.89.

Abstract:
In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.
8

Fisch, Dominik, Bernhard Kühbeck, Bernhard Sick, and Seppo J. Ovaska. "So near and yet so far: New insight into properties of some well-known classifier paradigms." Information Sciences 180, no. 18 (September 2010): 3381–401. http://dx.doi.org/10.1016/j.ins.2010.05.030.

9

Stojic, Filip, and Tom Chau. "Nonspecific Visuospatial Imagery as a Novel Mental Task for Online EEG-Based BCI Control." International Journal of Neural Systems 30, no. 06 (May 27, 2020): 2050026. http://dx.doi.org/10.1142/s0129065720500264.

Abstract:
Brain–computer interfaces (BCIs) can provide a means of communication to individuals with severe motor disorders, such as those presenting as locked-in. Many BCI paradigms rely on motor neural pathways, which are often impaired in these individuals. However, recent findings suggest that visuospatial function may remain intact. This study aimed to determine whether visuospatial imagery, a previously unexplored task, could be used to signify intent in an online electroencephalography (EEG)-based BCI. Eighteen typically developed participants imagined checkerboard arrow stimuli in four quadrants of the visual field in 5-s trials, while signals were collected using 16 dry electrodes over the visual cortex. In online blocks, participants received graded visual feedback based on their performance. An initial BCI pipeline (visuospatial imagery classifier I) attained a mean accuracy of [Formula: see text]% classifying rest against visuospatial imagery in online trials. This BCI pipeline was further improved using restriction to alpha band features (visuospatial imagery classifier II), resulting in a mean pseudo-online accuracy of [Formula: see text]%. Accuracies exceeded the threshold for practical BCIs in 12 participants. This study supports the use of visuospatial imagery as a real-time, binary EEG-BCI control paradigm.
10

Pramukantoro, Eko Sakti, and Akio Gofuku. "A Heartbeat Classifier for Continuous Prediction Using a Wearable Device." Sensors 22, no. 14 (July 6, 2022): 5080. http://dx.doi.org/10.3390/s22145080.

Abstract:
Heartbeat monitoring may play an essential role in the early detection of cardiovascular disease. With a traditional monitoring system, an abnormal heartbeat may not appear during a recording in a healthcare facility due to the limited time, so continuous and long-term monitoring is needed. Moreover, conventional equipment may not be portable and cannot be used at arbitrary times and locations. A wearable sensor device such as the Polar H10 offers an alternative: it has gold-standard heartbeat recording and communication ability but still lacks analytical processing of the recorded data. An automatic heartbeat classification system can serve as such an analyzer and is still an open problem under development. This paper proposes a heartbeat classifier based on RR-interval data for real-time and continuous heartbeat monitoring using the Polar H10 wearable device. Several machine learning and deep learning methods were used to train the classifier. In the training process, we also compare intra-patient and inter-patient paradigms on the original and oversampled datasets to achieve higher classification accuracy and the fastest computation speed. With RR-interval data as the only feature, the random-forest-based classifier implemented in the system achieved up to 99.67% accuracy, precision, recall, and F1-score. We are also conducting experiments involving healthy people to evaluate the classifier in a real-time monitoring system.
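The kind of time-domain features commonly derived from RR intervals can be sketched in a few lines. The abstract does not specify the paper's exact feature set, so the quantities below (mean RR, SDNN, RMSSD) are standard heart-rate-variability measures used for illustration.

```python
import numpy as np

def rr_features(rr_ms):
    """Simple time-domain HRV features from a window of RR intervals (ms):
    mean RR, SDNN (sample std of RR), and RMSSD (root mean square of
    successive differences)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),
        "rmssd": np.sqrt(np.mean(diffs ** 2)),
    }

feats = rr_features([800, 810, 790, 805, 795])
print({k: round(v, 2) for k, v in feats.items()})
# -> {'mean_rr': 800.0, 'sdnn': 7.91, 'rmssd': 14.36}
```

Feature vectors like this, computed per window, are what a random forest or similar classifier would consume.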
11

ISRAEL, P., and C. KOUTSOUGERAS. "A HYBRID ELECTRO-OPTICAL ARCHITECTURE FOR CLASSIFICATION TREES AND ASSOCIATIVE MEMORY MECHANISMS." International Journal on Artificial Intelligence Tools 02, no. 03 (September 1993): 373–93. http://dx.doi.org/10.1142/s0218213093000199.

Abstract:
An architecture is presented here which can be used for some important paradigms of intelligent systems. This architecture targets applications which require real time processing of stream inputs with versatile hardware which exploits parallelism. The architecture is particularly suited for pattern recognition paradigms which are based on the use of decision trees. Artificially intelligent systems based on decision trees interestingly present some common computational requirements which can be served very efficiently by a Data Flow architecture. A small set of different functions is computed repeatedly with simple result tokens passed from one computation to successive ones. Developments in optical processing have introduced elements which are particularly suited to the computational requirements of some of these systems, and therefore they can be effectively employed in this architecture. The architecture presented here is based on Data Flow design principles and is enhanced with optical processing elements. The function of the architecture is illustrated by discussing the mapping of two specific AI paradigms—a pattern classifier and an associative recall mechanism.
12

VATEEKUL, PEERAPON, SAREEWAN DENDAMRONGVIT, and MIROSLAV KUBAT. "IMPROVING SVM PERFORMANCE IN MULTI-LABEL DOMAINS: THRESHOLD ADJUSTMENT." International Journal on Artificial Intelligence Tools 22, no. 01 (February 2013): 1250038. http://dx.doi.org/10.1142/s0218213012500388.

Abstract:
In "multi-label domains," where the same example can simultaneously belong to two or more classes, it is customary to induce a separate binary classifier for each class and then use them all in parallel. As a result, some of these classifiers are induced from imbalanced training sets where one class outnumbers the other, a circumstance known to hurt some machine learning paradigms. In the case of Support Vector Machines (SVMs), this suboptimal behavior is explained by the fact that SVM seeks to minimize error rate, a criterion that is misleading in domains of this type. This is why several research groups have studied mechanisms to readjust the bias of the SVM hyperplane. The best of these achieves very good classification performance at the price of impractically high computational costs. We propose an improvement in which these costs are reduced to a small fraction without significantly impairing classification.
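The general idea of readjusting a decision threshold on imbalanced data can be sketched independently of SVM internals: sweep a threshold over validation decision scores and keep the one that maximizes minority-class F1. This is a generic illustration of threshold adjustment, not the authors' specific bias-readjustment mechanism.

```python
import numpy as np

def best_threshold(scores, labels):
    """Sweep candidate thresholds over validation decision scores and
    return the one maximizing F1 for the positive (minority) class,
    mimicking post-hoc readjustment of a classifier's decision bias."""
    best_t, best_f1 = 0.0, -1.0
    for t in np.unique(scores):
        pred = (scores >= t).astype(int)
        tp = int(np.sum((pred == 1) & (labels == 1)))
        fp = int(np.sum((pred == 1) & (labels == 0)))
        fn = int(np.sum((pred == 0) & (labels == 1)))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Scores from an imbalanced problem: the default threshold 0 misses
# every positive example, but a shifted threshold recovers them.
scores = np.array([-2.0, -1.5, -1.0, -0.6, -0.4, -0.2, 0.3])
labels = np.array([0, 0, 0, 0, 1, 1, 1])
t, f1 = best_threshold(scores, labels)
print(t, f1)  # -> -0.4 1.0
```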
13

Su, Yanhong, and Lijing Yu. "Security System of Logistics Service Transaction Record Based on Wireless Network." Mobile Information Systems 2022 (September 1, 2022): 1–12. http://dx.doi.org/10.1155/2022/8141190.

Abstract:
Wireless networks (WNs) and their associated technology paradigms are employed for smart and secure logistics services. Wireless logistics transactions are secured through end-to-end authentication, verification, and third-party watchdog systems. This manuscript introduces a Preemptive Security Scheme for Transaction Verification (PSS-TV) in wireless-network-aided logistics services. Different logistics services are secured based on the sender's and receiver's signatures in mutual consent. Signature generation and implications are varied using the key size and validity based on the previous transaction's recommendation. A conventional random forest classifier is used for detecting transaction breaches and validity requirements, which is feasible based on transaction interruptions and failed mutual verifications. These classifications are performed using the learning paradigm to improve the key size in generating stealthy signatures. Signature generation relies on conventional elliptic curve cryptography. The proposed scheme's performance is analyzed using success ratio, failure rate, verification and authentication time, and complexity.
14

Mahfouz, Ahmed, Abdullah Abuhussein, Deepak Venugopal, and Sajjan Shiva. "Ensemble Classifiers for Network Intrusion Detection Using a Novel Network Attack Dataset." Future Internet 12, no. 11 (October 26, 2020): 180. http://dx.doi.org/10.3390/fi12110180.

Abstract:
Due to the extensive use of computer networks, new risks have arisen, and improving the speed and accuracy of security mechanisms has become a critical need. Although new security tools have been developed, the fast growth of malicious activities continues to be a pressing issue that creates severe threats to network security. Classical security tools such as firewalls are used as a first-line defense against security problems. However, firewalls do not entirely or perfectly eliminate intrusions. Thus, network administrators rely heavily on intrusion detection systems (IDSs) to detect such network intrusion activities. Machine learning (ML) is a practical approach to intrusion detection that, based on data, learns how to differentiate between abnormal and regular traffic. This paper provides a comprehensive analysis of some existing ML classifiers for identifying intrusions in network traffic. It also produces a new reliable dataset called GTCS (Game Theory and Cyber Security) that matches real-world criteria and can be used to assess the performance of the ML classifiers in a detailed experimental evaluation. Finally, the paper proposes an ensemble and adaptive classifier model composed of multiple classifiers with different learning paradigms to address the issue of the accuracy and false alarm rate in IDSs. Our classifiers show high precision and recall rates and use a comprehensive set of features compared to previous work.
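The accuracy-versus-false-alarm trade-off that motivates such ensembles is easy to make concrete. A small helper computing precision, recall, and false alarm rate for binary intrusion labels (1 = intrusion, 0 = normal traffic); the example predictions are made-up toy data:

```python
def ids_metrics(preds, labels):
    """Precision, recall and false alarm rate for binary intrusion
    detection labels (1 = intrusion, 0 = normal traffic)."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    false_alarm = fp / (fp + tn) if fp + tn else 0.0
    return precision, recall, false_alarm

p, r, far = ids_metrics([1, 1, 0, 1, 0, 0], [1, 1, 0, 0, 0, 1])
print(round(p, 3), round(r, 3), round(far, 3))  # -> 0.667 0.667 0.333
```

An ensemble is judged good here only if it raises precision and recall while keeping the false alarm rate low.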
15

Ma, Shihan, and Jidong J. Yang. "Image-Based Vehicle Classification by Synergizing Features from Supervised and Self-Supervised Learning Paradigms." Eng 4, no. 1 (February 1, 2023): 444–56. http://dx.doi.org/10.3390/eng4010027.

Abstract:
This paper introduces a novel approach to leveraging features learned from both supervised and self-supervised paradigms to improve image classification tasks, specifically vehicle classification. Two state-of-the-art self-supervised learning methods, DINO and data2vec, were evaluated and compared for their representation learning of vehicle images; the former contrasts local and global views, while the latter uses masked prediction on multiple layered representations. For the supervised paradigm, a pretrained YOLOR object detector is fine-tuned to detect vehicle wheels, from which definitive wheel positional features are retrieved. The representations learned from the self-supervised methods were combined with the wheel positional features for the vehicle classification task. In particular, a random wheel-masking strategy was utilized to fine-tune the previously learned representations in harmony with the wheel positional features during the training of the classifier. Our experiments show that the data2vec-distilled representations, which are consistent with our wheel-masking strategy, outperformed the DINO counterpart, reaching a Top-1 classification accuracy of 97.2% for the 13 vehicle classes defined by the Federal Highway Administration.
16

Poppenk, Jordan, and Kenneth A. Norman. "Multiple-object Tracking as a Tool for Parametrically Modulating Memory Reactivation." Journal of Cognitive Neuroscience 29, no. 8 (August 2017): 1339–54. http://dx.doi.org/10.1162/jocn_a_01132.

Abstract:
Converging evidence supports the “nonmonotonic plasticity” hypothesis, which states that although complete retrieval may strengthen memories, partial retrieval weakens them. Yet, the classic experimental paradigms used to study effects of partial retrieval are not ideally suited to doing so, because they lack the parametric control needed to ensure that the memory is activated to the appropriate degree (i.e., that there is some retrieval but not enough to cause memory strengthening). Here, we present a novel procedure designed to accommodate this need. After participants learned a list of word–scene associates, they completed a cued mental visualization task that was combined with a multiple-object tracking (MOT) procedure, which we selected for its ability to interfere with mental visualization in a parametrically adjustable way (by varying the number of MOT targets). We also used fMRI data to successfully train an “associative recall” classifier for use in this task: This classifier revealed greater memory reactivation during trials in which associative memories were cued while participants tracked one, rather than five, MOT targets. However, the classifier was insensitive to task difficulty when recall was not taking place, suggesting that it had indeed tracked memory reactivation rather than task difficulty per se. Consistent with the classifier findings, participants' introspective ratings of visualization vividness were modulated by MOT task difficulty. In addition, we observed reduced classifier output and slowing of responses in a postreactivation memory test, consistent with the hypothesis that partial reactivation, induced by MOT, weakened memory. These results serve as a “proof of concept” that MOT can be used to parametrically modulate memory retrieval—a property that may prove useful in future investigation of partial retrieval effects, for example, in closed-loop experiments.
17

Bălan, Oana, Gabriela Moise, Alin Moldoveanu, Marius Leordeanu, and Florica Moldoveanu. "Fear Level Classification Based on Emotional Dimensions and Machine Learning Techniques." Sensors 19, no. 7 (April 11, 2019): 1738. http://dx.doi.org/10.3390/s19071738.

Abstract:
There has been steady progress in the field of affective computing over the last two decades, integrating artificial intelligence techniques into the construction of computational models of emotion. With the aim of developing a system for treating phobias that would automatically determine fear levels and adapt exposure intensity based on the user's current affective state, we propose a comparative study between various machine and deep learning techniques (four deep neural network models, a stochastic configuration network, Support Vector Machine, Linear Discriminant Analysis, Random Forest and k-Nearest Neighbors), with and without feature selection, for recognizing and classifying fear levels based on electroencephalogram (EEG) and peripheral data from the DEAP (Database for Emotion Analysis using Physiological signals) database. Fear was considered an emotion eliciting low valence, high arousal and low dominance. By dividing the ratings on the valence/arousal/dominance emotion dimensions, we propose two paradigms for fear level estimation: a two-level paradigm (0 = no fear, 1 = fear) and a four-level paradigm (0 = no fear, 1 = low fear, 2 = medium fear, 3 = high fear). Although all the methods provide good classification accuracies, the highest F scores were obtained using the Random Forest classifier: 89.96% and 85.33% for the two-level and four-level fear evaluation modalities, respectively.
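The two labeling paradigms can be sketched directly from valence/arousal/dominance ratings. The mid-scale cut points and the intensity grading below are illustrative assumptions, not the paper's exact thresholds.

```python
def fear_labels(valence, arousal, dominance, scale_max=9.0):
    """Map valence/arousal/dominance ratings (1-9, DEAP-style) to the
    two-level and four-level fear paradigms. Thresholds are illustrative."""
    mid = (1.0 + scale_max) / 2
    # Fear ~ low valence, high arousal, low dominance.
    is_fear = valence < mid and arousal > mid and dominance < mid
    two_level = 1 if is_fear else 0
    if not is_fear:
        four_level = 0
    else:
        # Grade intensity by how far the three ratings sit from mid-scale.
        intensity = ((mid - valence) + (arousal - mid) + (mid - dominance)) / 3
        if intensity < 1.5:
            four_level = 1   # low fear
        elif intensity < 3.0:
            four_level = 2   # medium fear
        else:
            four_level = 3   # high fear
    return two_level, four_level

print(fear_labels(2.0, 8.0, 2.0))  # strongly fearful ratings -> (1, 3)
print(fear_labels(7.0, 3.0, 7.0))  # calm/positive ratings -> (0, 0)
```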
18

Martínez-Cagigal, Víctor, Eduardo Santamaría-Vázquez, and Roberto Hornero. "Asynchronous Control of P300-Based Brain–Computer Interfaces Using Sample Entropy." Entropy 21, no. 3 (February 27, 2019): 230. http://dx.doi.org/10.3390/e21030230.

Abstract:
Brain–computer interfaces (BCIs) have traditionally worked using synchronous paradigms. In recent years, much effort has been put into achieving asynchronous management, giving users the ability to decide when a command should be selected. However, to the best of our knowledge, entropy metrics have not yet been explored. The present study has a twofold purpose: (i) to characterize both control and non-control states by examining the regularity of electroencephalography (EEG) signals; and (ii) to assess the efficacy of a scaled version of the sample entropy algorithm in providing asynchronous control for BCI systems. Ten healthy subjects participated in the study and were asked to spell words through a visual oddball-based paradigm, attending to (i.e., control) or ignoring (i.e., non-control) the stimuli. An optimization stage determined a common combination of hyperparameters for all subjects, and these values were then used to discern between both states using a linear classifier. Results show that control signals are more complex and irregular than non-control ones, reaching an average classification accuracy of 94.40%. In conclusion, the present study demonstrates that the proposed framework is useful in monitoring the attention of a user and enabling asynchronous operation of the BCI system.
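Sample entropy itself is straightforward to compute. A plain-numpy sketch of the standard SampEn(m, r) definition (the paper uses a scaled variant): lower values mean a more regular, predictable signal.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal: -ln(A/B), where B
    counts template-vector pairs of length m within tolerance r
    (Chebyshev distance) and A counts pairs of length m + 1. r is given
    as a fraction of the signal's standard deviation, as is conventional."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
noisy = rng.standard_normal(500)
# A regular signal is more predictable, so its entropy is lower.
print(sample_entropy(regular) < sample_entropy(noisy))  # -> True
```

In the study's framing, attended (control) EEG would behave like the irregular signal and non-control EEG like the regular one.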
19

López-Larraz, Eduardo, Jaime Ibáñez, Fernando Trincado-Alonso, Esther Monge-Pereira, José Luis Pons, and Luis Montesano. "Comparing Recalibration Strategies for Electroencephalography-Based Decoders of Movement Intention in Neurological Patients with Motor Disability." International Journal of Neural Systems 28, no. 07 (July 18, 2018): 1750060. http://dx.doi.org/10.1142/s0129065717500605.

Abstract:
Motor rehabilitation based on the association of electroencephalographic (EEG) activity and proprioceptive feedback has been demonstrated to be a feasible therapy for patients with paralysis. To promote long-lasting motor recovery, these interventions have to be carried out across several weeks or even months. The success of these therapies partly relies on the performance of the system decoding movement intentions, which normally has to be recalibrated to deal with the nonstationarities of cortical activity. Minimizing recalibration times is important to reduce setup preparation and maximize effective therapy time. To date, a systematic analysis of the effect of recalibration strategies in EEG-driven interfaces for motor rehabilitation has not been performed. Data from patients with stroke (4 patients, 8 sessions) and spinal cord injury (SCI) (4 patients, 5 sessions) undergoing two different paradigms (self-paced and cue-guided, respectively) are used to study the performance of EEG-based classification of motor intentions. Four calibration schemes are compared, considering different combinations of training datasets from previous sessions and/or the validated session. The results show significant differences in classifier performance in terms of true positives (TPs) and false positives (FPs). Combining training data from previous sessions with data from the validation session provides the best compromise between the amount of data needed for calibration and classifier performance. With this scheme, the average true (false) positive rates obtained are 85.3% (17.3%) and 72.9% (30.3%) for the self-paced and cue-guided protocols, respectively. These results suggest that the use of optimal recalibration schemes for EEG-based classifiers of motor intentions leads to enhanced performance of these technologies, while not requiring long calibration phases prior to starting the intervention.
20

Vaegter, Henrik Bjarke, Kristian Kjær Petersen, Carsten Dahl Mørch, Yosuke Imai, and Lars Arendt-Nielsen. "Assessment of CPM reliability: quantification of the within-subject reliability of 10 different protocols." Scandinavian Journal of Pain 18, no. 4 (October 25, 2018): 729–37. http://dx.doi.org/10.1515/sjpain-2018-0087.

Full text
Abstract:
Background and aims: Conditioned Pain Modulation (CPM) is a well-established phenomenon, and several protocols have shown acceptable between-subject reliability [based on intraclass correlation coefficient (ICC) values] in pain-free controls. Recently, it was recommended that future CPM test-retest reliability studies explicitly report CPM reliability based on CPM responders and non-responders (within-subject reliability), based on the measurement error of the test stimulus. Identification of reliable CPM paradigms based on responders and non-responders may be a step towards using CPM as a mechanistic marker in diagnosis and individualized pain management regimes. The primary aim of this paper is to investigate the frequency of CPM responders/non-responders, and to quantify the agreement in the classification of responders/non-responders between 2 different days for 10 different CPM protocols. Methods: Data from a previous study investigating the reliability of CPM protocols in healthy subjects were used. In 26 healthy men, the test stimuli used on both days were: pain thresholds to electrical stimulation, heat stimulation, manual algometry, and computer-controlled cuff algometry, as well as pain tolerance to cuff algometry. Two different conditioning stimuli (CS; cold water immersion and a computer-controlled tourniquet) were used in a randomized and counterbalanced order in both sessions. CPM responders were defined by an increase in the test-stimulus response during the CS larger than the standard error of measurement (SEM) for the test stimuli between repeated baseline tests without CS. Results: The frequency of responders and non-responders showed large variations across protocols. Across the studied CPM protocols, a large proportion (from 11.5 to 73.1%) of subjects was classified as CPM non-responders when the test stimuli's standard error of measurement (SEM) was used as the classifier.
The combination of manual pressure algometry and cold water immersion induced a CPM effect in most participants on both days (n=16). However, agreement in the classification of CPM responders versus non-responders between days was only significant when assessed with computer-controlled pressure pain threshold as the test stimulus and tourniquet cuff as the CS (κ=0.36 [95% CI, 0.04–0.68], p=0.037). Conclusions and implications: Agreement in the classification of CPM responders/non-responders using the SEM as classifier between days was generally poor, suggesting considerable intra-individual variation in CPM. The most reliable paradigm was computer-controlled pressure pain threshold as the test stimulus and tourniquet cuff as the conditioning stimulus. However, while this CPM protocol had the greatest agreement in classification of CPM responders and non-responders across days, it also failed to induce a CPM response in more than half of the sample. In contrast, the commonly used combination of manual pressure algometry and cold water immersion induced a CPM effect in most participants; however, it was inconsistent in doing so. Further exploration of the two paradigms and of the classification of responders and non-responders in a larger heterogeneous sample, also including women, would further inform the clinical usefulness of these CPM protocols. Future research in this area may be an important step towards using CPM as a mechanistic marker in diagnosis and in developing individualized pain management regimes.
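The responder rule described above can be sketched directly: a subject counts as a CPM responder when the test-stimulus response rises during conditioning by more than the SEM estimated from repeated baselines. The SEM estimator below (SD of baseline differences divided by √2) is one common choice and may differ from the paper's exact computation; the numbers are illustrative.

```python
from math import sqrt
from statistics import stdev

def sem_from_repeated_baselines(baseline1, baseline2):
    """Standard error of measurement estimated from two baseline tests
    without conditioning: SD of the differences / sqrt(2).
    (One common estimator; the paper may compute it differently.)"""
    diffs = [b2 - b1 for b1, b2 in zip(baseline1, baseline2)]
    return stdev(diffs) / sqrt(2)

def classify_responders(baseline, conditioned, sem):
    """A subject is a CPM responder if the test-stimulus response
    increases during conditioning by more than the SEM."""
    return [(c - b) > sem for b, c in zip(baseline, conditioned)]
```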
APA, Harvard, Vancouver, ISO, and other styles
21

Dhina, M. M., and S. Sumathi. "An innovative approach to classify hierarchical remarks with multi-class using BERT and customized naïve bayes classifier." International Journal of Engineering, Science and Technology 13, no. 4 (May 30, 2022): 32–45. http://dx.doi.org/10.4314/ijest.v13i4.4.

Full text
Abstract:
Text classification is the process of grouping text into distinct categories. Text classifiers can automatically assess text input and assign a set of pre-defined tags or categories based on its content or a pre-trained model using Natural Language Processing (NLP), a subset of Machine Learning (ML). Text categorization is becoming increasingly important in enterprises, since it helps firms derive insight from data and automate business operations, lowering manual labor and costs. Common industrial applications of text classification include language detection (determining the language of a given document), sentiment analysis (identifying whether a text is favorable or unfavorable about a given subject), and topic detection (determining the theme or topic of a group of texts). The dataset here is multi-class and multi-hierarchical: the hierarchy has multiple levels, and each level is itself multi-class. Supervised learning is one of ML's most successful paradigms, from which a generalization model can be built; hence, a custom model is built so that it fits the problem. Deep learning (DL), a part of Artificial Intelligence (AI), performs functions that replicate the human brain's data-processing capabilities in order to identify text or artifacts, translate languages, detect voice, draw conclusions, and so on. Bidirectional Encoder Representations from Transformers (BERT), a deep learning algorithm, performs extraordinarily well in NLP text classification and yields high accuracy. Therefore, BERT is combined with the custom model developed and compared with the native algorithm to ensure an increase in accuracy rates.
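As an illustration of the naive Bayes side of such a pipeline (the BERT encoder and the authors' actual customization are not shown), a minimal multinomial naive Bayes with Laplace smoothing might look like this; the class name and data are hypothetical:

```python
import math
from collections import Counter

class TinyMultinomialNB:
    """Minimal multinomial naive Bayes with Laplace smoothing; a toy
    stand-in for the customized classifier described in the abstract.
    In a hierarchical setting, one such model could be trained per level."""
    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.priors, self.word_counts, self.totals = {}, {}, {}
        self.vocab = set(w for d in docs for w in d.split())
        for c in self.classes:
            class_docs = [d for d, l in zip(docs, labels) if l == c]
            self.priors[c] = math.log(len(class_docs) / len(docs))
            counts = Counter(w for d in class_docs for w in d.split())
            self.word_counts[c] = counts
            self.totals[c] = sum(counts.values())
        return self

    def predict(self, doc):
        def score(c):
            s = self.priors[c]
            for w in doc.split():
                num = self.word_counts[c][w] + 1          # Laplace smoothing
                den = self.totals[c] + len(self.vocab)
                s += math.log(num / den)
            return s
        return max(self.classes, key=score)
```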
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Zhi-Hao, and Jyh-Ching Juang. "AE-RTISNet: Aeronautics Engine Radiographic Testing Inspection System Net with an Improved Fast Region-Based Convolutional Neural Network Framework." Applied Sciences 10, no. 23 (December 5, 2020): 8718. http://dx.doi.org/10.3390/app10238718.

Full text
Abstract:
To ensure safety in aircraft flying, we aimed to use deep learning methods of nondestructive examination with multiple defect detection paradigms for X-ray image detection. The fast region-based convolutional neural network (Fast R-CNN)-driven model was used to augment and improve the existing automated non-destructive testing (NDT) diagnosis. Within the context of X-ray screening, the limited number and insufficient variety of X-ray aeronautics engine defect data samples can pose a further problem for the accuracy of training models tackling multiple detections. To overcome this issue, we employed a deep learning paradigm of transfer learning tackling both single and multiple detection. Overall, the achieved results obtained more than 90% accuracy based on the aeronautics engine radiographic testing inspection system net (AE-RTISNet) retrained with eight types of defect detection. Caffe structure software was used to perform network tracking detection over multiple Fast R-CNNs. We determined that the AE-RTISNet provided the best results compared with the more traditional multiple Fast R-CNN approaches, which were simple to translate to C++ code and install in the Jetson™ TX2 embedded computer. With the use of the lightning memory-mapped database (LMDB) format, all input images were 640 × 480 pixels. The results achieved a 0.9 mean average precision (mAP) on eight types of material defect classifier problems and required approximately 100 microseconds.
APA, Harvard, Vancouver, ISO, and other styles
23

Zuev, Mikhail B. "EVERYDAY LIFE AND ROUTINE IN THE ASSIMILATION OF LANGUAGE PARADIGMS OF THE SPANISH LANGUAGE." RSUH/RGGU Bulletin. Series Psychology. Pedagogics. Education, no. 3 (2022): 113–20. http://dx.doi.org/10.28995/2073-6398-2022-3-113-120.

Full text
Abstract:
The research is connected with the implementation of state standards of personnel training under the “Political Sciences and Regional Studies” classifier and with the ongoing improvement of pedagogical techniques for teaching a foreign language in professional higher education. The article discusses the informational and technological possibilities of immersing students in the cultural environment of Spain in order to master the universal competencies of intercultural communication. The culture of everyday life and daily routine displayed in the linguistic paradigms of the Spanish language is proposed for consideration in the practice of language teaching. The author, using a set of methodological techniques and approaches, demonstrates teaching know-how and procedures for designing a communication environment. The technology of immersion in the everyday culture of the country through linguistic forms (universals) makes it possible to realize the trend of modern education towards democratization and humanization. The study of everyday life, its manifestations in the culture of native speakers, and their daily activities reveals opportunities to expand cultural idioms based on the principles of equality and partnership between teacher and student in the pedagogical process. Such a concept involves joint work on goals and tasks in the learning process, determining the success of cooperation in the professional training of bachelors in “regional studies”, “international relations”, and “political science”. The purpose of the study is to illustrate, through the everyday culture of native Spanish speakers, the possibilities of enhancing students’ cognitive activity and arranging a sequence for achieving the indicators of a bachelor’s universal and professional competences.
Within the framework of the study, the author applied an interdisciplinary approach and used a set of methodological materials from Spanish and Russian foreign-language teaching schools based on comparative, cultural, and linguistic analysis.
APA, Harvard, Vancouver, ISO, and other styles
24

Sarnovsky, Martin, and Marek Olejnik. "Improvement in the Efficiency of a Distributed Multi-Label Text Classification Algorithm Using Infrastructure and Task-Related Data." Informatics 6, no. 1 (March 18, 2019): 12. http://dx.doi.org/10.3390/informatics6010012.

Full text
Abstract:
Distributed computing technologies allow a wide variety of tasks that use large amounts of data to be solved. Various paradigms and technologies are already widely used, but many of them are lacking when it comes to the optimization of resource usage. The aim of this paper is to present the optimization methods used to increase the efficiency of distributed implementations of a text-mining model utilizing information about the text-mining task extracted from the data and information about the current state of the distributed environment obtained from a computational node, and to improve the distribution of the task on the distributed infrastructure. Two optimization solutions are developed and implemented, both based on the prediction of the expected task duration on the existing infrastructure. The solutions are experimentally evaluated in a scenario where a distributed tree-based multi-label classifier is built based on two standard text data collections.
APA, Harvard, Vancouver, ISO, and other styles
25

Pierce, Karen, Teresa H. Wen, Javad Zahiri, Charlene Andreason, Eric Courchesne, Cynthia C. Barnes, Linda Lopez, Steven J. Arias, Ahtziry Esquivel, and Amanda Cheng. "Level of Attention to Motherese Speech as an Early Marker of Autism Spectrum Disorder." JAMA Network Open 6, no. 2 (February 8, 2023): e2255125. http://dx.doi.org/10.1001/jamanetworkopen.2022.55125.

Full text
Abstract:
Importance: Caregivers have long captured the attention of their infants by speaking in motherese, a playful speech style characterized by heightened affect. Reduced attention to motherese in toddlers with autism spectrum disorder (ASD) may be a contributor to downstream language and social challenges and could be diagnostically revealing. Objective: To investigate whether attention toward motherese speech can be used as a diagnostic classifier of ASD and is associated with language and social ability. Design, Setting, and Participants: This diagnostic study included toddlers aged 12 to 48 months, spanning ASD and non-ASD diagnostic groups, at a research center. Data were collected from February 2018 to April 2021 and analyzed from April 2021 to March 2022. Exposures: Gaze-contingent eye-tracking test. Main Outcomes and Measures: Using gaze-contingent eye tracking wherein the location of a toddler’s fixation triggered a specific movie file, toddlers participated in 1 or more 1-minute eye-tracking tests designed to quantify attention to motherese speech, including motherese vs traffic (ie, noisy vehicles on a highway) and motherese vs techno (ie, abstract shapes with music). Toddlers were also diagnostically and psychometrically evaluated by psychologists. Levels of fixation within motherese and nonmotherese movies and mean number of saccades per second were calculated. Receiver operating characteristic (ROC) curves were used to evaluate optimal fixation cutoff values and associated sensitivity, specificity, positive predictive value (PPV), and negative predictive value. Within the ASD group, toddlers were stratified based on low, middle, or high levels of interest in motherese speech, and associations with social and language abilities were examined. Results: A total of 653 toddlers were included (mean [SD] age, 26.45 [8.37] months; 480 males [73.51%]).
Unlike toddlers without ASD, who almost uniformly attended to motherese speech with median levels of 82.25% and 80.75% across the 2 tests, toddlers with ASD showed a wide range, spanning 0% to 100%. Both the traffic and techno paradigms were effective diagnostic classifiers, with large between-group effect sizes (eg, ASD vs typical development: Cohen d, 1.0 in the techno paradigm). Across both paradigms, a cutoff value of 30% or less fixation on motherese resulted in an area under the ROC curve (AUC) of 0.733 (95% CI, 0.693-0.773) and 0.761 (95% CI, 0.717-0.804), respectively; specificity of 98% (95% CI, 95%-99%) and 96% (95% CI, 92%-98%), respectively; and PPV of 94% (95% CI, 86%-98%). Reflective of heterogeneity and expected subtypes in ASD, sensitivity was lower at 18% (95% CI, 14%-22%) and 29% (95% CI, 24%-34%), respectively. Combining metrics increased the AUC to 0.841 (95% CI, 0.805-0.877). Toddlers with ASD who showed the lowest levels of attention to motherese speech had weaker social and language abilities. Conclusions and Relevance: In this diagnostic study, a subset of toddlers showed low levels of attention toward motherese speech. When a cutoff level of 30% or less fixation on motherese speech was used, toddlers in this range were diagnostically classified as having ASD with high accuracy. Insight into which toddlers show unusually low levels of attention to motherese may be beneficial not only for early ASD diagnosis and prognosis but also as a possible therapeutic target.
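The cutoff-based classification and its diagnostic metrics can be sketched as follows; the function names and toy fixation percentages are illustrative, not the study's data:

```python
def classify_at_cutoff(fixation_pcts, cutoff=30.0):
    """Flag as ASD-positive when fixation on motherese is <= cutoff (%)."""
    return [f <= cutoff for f in fixation_pcts]

def diagnostic_metrics(predicted, actual_positive):
    """Sensitivity, specificity, PPV and NPV from binary predictions."""
    tp = sum(p and a for p, a in zip(predicted, actual_positive))
    fp = sum(p and not a for p, a in zip(predicted, actual_positive))
    fn = sum(a and not p for p, a in zip(predicted, actual_positive))
    tn = sum(not p and not a for p, a in zip(predicted, actual_positive))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```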
APA, Harvard, Vancouver, ISO, and other styles
26

Li, Chunguang, Yongliang Xu, Liujin He, Yue Zhu, Shaolong Kuang, and Lining Sun. "Research on fNIRS Recognition Method of Upper Limb Movement Intention." Electronics 10, no. 11 (May 24, 2021): 1239. http://dx.doi.org/10.3390/electronics10111239.

Full text
Abstract:
This paper aims at realizing upper limb rehabilitation training using an fNIRS-BCI system. It focuses on the analysis of the cerebral blood-oxygen signal in the system and gradually extends the analysis and recognition of movement intention in that signal to an actual brain-computer interface system. Fifty subjects completed four upper limb movement paradigms: lifting up, putting down, pulling back, and pushing forward. Their near-infrared data and movement trigger signals were then collected. For detecting the initial intention of upper limb movements, gradient boosted decision trees (GBDT) and random forest (RF) were selected for classification experiments; the RF classifier, with the better overall indicators, was chosen as the final classification algorithm. The best offline recognition rate was 94.4% (151/160). The ReliefF algorithm, based on distance measurement, and the genetic algorithm were used to select features. For upper limb motion-state recognition, logistic regression (LR), support vector machine (SVM), naive Bayes (NB), and linear discriminant analysis (LDA) were compared, with the kappa coefficient used as the classification index to evaluate the performance of the classifiers. SVM achieved the best performance, with a four-class recognition accuracy of 84.4%. The results show that RF and SVM can achieve high recognition accuracy on motion intentions, and the upper limb rehabilitation system designed in this paper has great application significance.
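Cohen's kappa, the index used above to evaluate the four-class motion-state classifiers, corrects raw agreement for the agreement expected by chance. A minimal implementation:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_freq, pred_freq = Counter(y_true), Counter(y_pred)
    # Chance agreement from the marginal label frequencies.
    expected = sum(true_freq[c] * pred_freq[c] for c in true_freq) / n**2
    return (observed - expected) / (1 - expected)
```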
APA, Harvard, Vancouver, ISO, and other styles
27

Sarajcev, Petar, Antonijo Kunac, Goran Petrovic, and Marin Despalatovic. "Power System Transient Stability Assessment Using Stacked Autoencoder and Voting Ensemble." Energies 14, no. 11 (May 27, 2021): 3148. http://dx.doi.org/10.3390/en14113148.

Full text
Abstract:
Increased integration of renewable energy sources brings new challenges to secure and stable power system operation. Operational challenges emanating from the reduced system inertia, in particular, will have important repercussions on power system transient stability assessment (TSA). At the same time, the rise of “big data” in the power system, from the development of wide area monitoring systems, introduces new paradigms for dealing with these challenges. Transient stability concerns are drawing the attention of various stakeholders, as they can be the leading causes of major outages. The aim of this paper is to address the power system TSA problem from the perspective of data mining and machine learning (ML). A novel 3.8 GB open dataset of time-domain phasor measurement signals is built from dynamic simulations of the IEEE New England 39-bus test case power system. A data processing pipeline is developed for feature engineering and statistical post-processing. A complete ML model is proposed for the TSA analysis, built from a denoising stacked autoencoder and a voting ensemble classifier. The ensemble pools predictions from a support vector machine and a random forest. Results from the classifier's application to the test case power system are reported and discussed. The ML application to the TSA problem is promising, since it is able to ingest huge amounts of data while retaining the ability to generalize and support real-time decisions.
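The voting step of such a model can be sketched as a soft vote that averages the class-probability vectors from the SVM and the random forest and takes the argmax; this is a simplified stand-in for the paper's ensemble, not its exact implementation:

```python
def soft_vote(prob_svm, prob_rf):
    """Average the per-class probabilities of the two base models and
    return the index of the winning class."""
    avg = [(p + q) / 2 for p, q in zip(prob_svm, prob_rf)]
    return max(range(len(avg)), key=avg.__getitem__)
```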
APA, Harvard, Vancouver, ISO, and other styles
28

Sotiropoulos, Dionisios N., and George A. Tsihrintzis. "Artificial Immune System-Based Classification in Extremely Imbalanced Classification Problems." International Journal on Artificial Intelligence Tools 26, no. 03 (January 24, 2017): 1750009. http://dx.doi.org/10.1142/s0218213017500099.

Full text
Abstract:
This paper focuses on a special category of machine learning problems arising in cases where the set of available training instances is significantly biased towards a particular class of patterns. Our work addresses the so-called Class Imbalance Problem through the utilization of an Artificial Immune System (AIS)-based classification algorithm which encodes the inherent ability of the Adaptive Immune System to mediate the exceptionally imbalanced “self” / “non-self” discrimination process. From a computational point of view, this process constitutes an extremely imbalanced pattern classification task, since the vast majority of molecular patterns pertain to the “non-self” space. Our work focuses on investigating the effect of the class imbalance problem on the AIS-based classification algorithm by assessing its relative ability to deal with extremely skewed datasets when compared against two state-of-the-art machine learning paradigms, Support Vector Machines (SVMs) and Multi-Layer Perceptrons (MLPs). To this end, we conducted a series of experiments on a music-related dataset where a small fraction of positive samples was to be recognized against the vast volume of negative samples. The results obtained indicate that the utilized bio-inspired classifier outperforms SVMs in detecting patterns from the minority class, while its performance on the same task is competently close to that exhibited by MLPs. Our findings suggest that the AIS-based classifier relies on its intrinsic resampling and class-balancing functionality in order to address the class imbalance problem.
APA, Harvard, Vancouver, ISO, and other styles
29

Gangal, Varun, Abhinav Arora, Arash Einolghozati, and Sonal Gupta. "Likelihood Ratios and Generative Classifiers for Unsupervised Out-of-Domain Detection in Task Oriented Dialog." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7764–71. http://dx.doi.org/10.1609/aaai.v34i05.6280.

Full text
Abstract:
The task of identifying out-of-domain (OOD) input examples directly at test-time has seen renewed interest recently due to increased real-world deployment of models. In this work, we focus on OOD detection for natural language sentence inputs to task-based dialog systems. Our findings are three-fold: First, we curate and release ROSTD (Real Out-of-Domain Sentences From Task-oriented Dialog), a dataset of 4K OOD examples for the publicly available dataset from (Schuster et al. 2019). In contrast to existing settings, which synthesize OOD examples by holding out a subset of classes, our examples were authored by annotators with a priori instructions to be out-of-domain with respect to the sentences in an existing dataset. Second, we explore likelihood ratio based approaches as an alternative to currently prevalent paradigms. Specifically, we reformulate and apply these approaches to natural language inputs. We find that they match or outperform the latter on all datasets, with larger improvements on non-artificial OOD benchmarks such as our dataset. Our ablations validate that specifically using likelihood ratios rather than plain likelihood is necessary to discriminate well between OOD and in-domain data. Third, we propose learning a generative classifier and computing a marginal likelihood (ratio) for OOD detection. This allows us to use a principled likelihood while at the same time exploiting training-time labels. We find that this approach outperforms both simple likelihood (ratio) based and other prior approaches. We are hitherto the first to investigate the use of generative classifiers for OOD detection at test-time.
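The likelihood-ratio idea can be illustrated with toy unigram language models: score an input by its in-domain log-likelihood minus a background log-likelihood, and flag low scores as OOD. The paper's models are neural; everything below is a hedged simplification with illustrative data:

```python
import math
from collections import Counter

def unigram_logprob(sentence, counts, total, vocab_size):
    """Add-one-smoothed unigram log-likelihood of a sentence."""
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in sentence.split())

def ood_score(sentence, in_counts, bg_counts, vocab_size):
    """Likelihood-ratio score: in-domain minus background log-likelihood.
    Low (negative) scores suggest the input is out-of-domain."""
    return (unigram_logprob(sentence, in_counts, sum(in_counts.values()), vocab_size)
            - unigram_logprob(sentence, bg_counts, sum(bg_counts.values()), vocab_size))
```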
APA, Harvard, Vancouver, ISO, and other styles
30

Rendón-Cardona, Paula, Julian Gil-Gonzalez, Julián Páez-Valdez, and Mauricio Rivera-Henao. "Self-Supervised Sentiment Analysis in Spanish to Understand the University Narrative of the Colombian Conflict." Applied Sciences 12, no. 11 (May 28, 2022): 5472. http://dx.doi.org/10.3390/app12115472.

Full text
Abstract:
Sentiment analysis is a relevant area in the natural language processing (NLP) context that allows extracting opinions about different topics such as customer service and political elections. Sentiment analysis is usually carried out through supervised learning approaches using labeled data. However, obtaining such labels is generally expensive or even infeasible. These problems can be faced by using models based on self-supervised learning, which aims to deal with various machine learning paradigms in the absence of labels. Accordingly, we propose a self-supervised approach for sentiment analysis in Spanish that comprises a lexicon-based method and a supervised classifier. We test our proposal over three corpora; the first two are labeled datasets, namely CorpusCine and PaperReviews. Further, we use an unlabeled corpus composed of news related to the Colombian conflict to understand the university journalistic narrative of the war in Colombia. The obtained results demonstrate that our proposal can deal with sentiment analysis settings with an unlabeled corpus; in fact, it achieves competitive performance compared with state-of-the-art techniques on partially-labeled datasets.
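The lexicon-based half of such a pipeline can be sketched as pseudo-labeling by lexicon hits, with the resulting labels then used to train the supervised classifier; the tiny Spanish lexicon below is purely illustrative and not the authors' resource:

```python
POS = {"bueno", "excelente", "feliz"}     # illustrative positive lexicon
NEG = {"malo", "terrible", "triste"}      # illustrative negative lexicon

def lexicon_label(text):
    """Assign a pseudo-label by counting lexicon hits; in a
    self-supervised pipeline these labels would train a classifier."""
    words = text.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```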
APA, Harvard, Vancouver, ISO, and other styles
31

Kazemimoghadam, Mahdieh, and Nicholas P. Fey. "An Activity Recognition Framework for Continuous Monitoring of Non-Steady-State Locomotion of Individuals with Parkinson’s Disease." Applied Sciences 12, no. 9 (May 6, 2022): 4682. http://dx.doi.org/10.3390/app12094682.

Full text
Abstract:
Fundamental knowledge in activity recognition of individuals with motor disorders such as Parkinson’s disease (PD) has been primarily limited to detection of steady-state/static tasks (e.g., sitting, standing, walking). To date, identification of non-steady-state locomotion on uneven terrains (stairs, ramps) has not received much attention. Furthermore, previous research has mainly relied on data from a large number of body locations, which could adversely affect user convenience and system performance. Here, individuals with mild stages of PD and healthy subjects performed non-steady-state circuit trials comprising stairs, a ramp, and changes of direction. An offline analysis using a linear discriminant analysis (LDA) classifier and a Long Short-Term Memory (LSTM) neural network was performed for task recognition. The performance of accelerographic and gyroscopic information from varied lower/upper-body segments was tested across a set of user-independent and user-dependent training paradigms. Comparing the F1 score of a given signal across classifiers showed improved performance using LSTM compared to LDA. Using LSTM, even a subset of information (e.g., feet data) in subject-independent training appeared to provide an F1 score > 0.8. However, employing LDA was shown to come at the expense of being limited to subject-dependent training and/or biomechanical data from multiple body locations. The findings could inform a number of applications in the field of healthcare monitoring and the development of advanced lower-limb assistive devices by providing insights into classification schemes capable of handling non-steady-state and unstructured locomotion in individuals with mild Parkinson’s disease.
APA, Harvard, Vancouver, ISO, and other styles
32

Basak, Jayanta, and Ravi Kothari. "A Classification Paradigm for Distributed Vertically Partitioned Data." Neural Computation 16, no. 7 (July 1, 2004): 1525–44. http://dx.doi.org/10.1162/089976604323057470.

Full text
Abstract:
In general, pattern classification algorithms assume that all the features are available during the construction of a classifier and its subsequent use. In many practical situations, data are recorded in different servers that are geographically apart, and each server observes features of local interest. The underlying infrastructure and other logistics (such as access control) in many cases do not permit continual synchronization. Each server thus has a partial view of the data in the sense that feature subsets (not necessarily disjoint) are available at each server. In this article, we present a classification algorithm for this distributed vertically partitioned data. We assume that local classifiers can be constructed based on the local partial views of the data available at each server. These local classifiers can be any one of the many standard classifiers (e.g., neural networks, decision trees, k-nearest neighbor). Often these local classifiers are constructed to support decision making at each location, and our focus is not on these individual local classifiers. Rather, our focus is on constructing a classifier that can use these local classifiers to achieve an error rate that is as close as possible to that of a classifier having access to the entire feature set. We empirically demonstrate the efficacy of the proposed algorithm and also provide theoretical results quantifying the loss that results as compared to the situation where the entire feature set is available to any single classifier.
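A minimal fusion rule over such local classifiers is a (weighted) vote across the per-server predictions; this is a toy stand-in for the idea, not the authors' combiner:

```python
def combine_local_classifiers(local_predictions, weights=None):
    """Fuse decisions from per-server classifiers, each trained on its
    own feature subset, by an (optionally weighted) majority vote."""
    weights = weights or [1.0] * len(local_predictions)
    tally = {}
    for pred, w in zip(local_predictions, weights):
        tally[pred] = tally.get(pred, 0.0) + w
    return max(tally, key=tally.get)
```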
APA, Harvard, Vancouver, ISO, and other styles
33

Mansouri, Majdi, Khaled Dhibi, Hazem Nounou, and Mohamed Nounou. "An Effective Fault Diagnosis Technique for Wind Energy Conversion Systems Based on an Improved Particle Swarm Optimization." Sustainability 14, no. 18 (September 7, 2022): 11195. http://dx.doi.org/10.3390/su141811195.

Full text
Abstract:
The current paper proposes intelligent Fault Detection and Diagnosis (FDD) approaches aimed at ensuring the high-performance operation of wind energy conversion (WEC) systems. First, an efficient feature selection algorithm based on particle swarm optimization (PSO) is proposed. The main idea behind the use of the PSO algorithm is to remove irrelevant features and extract only the most significant ones from raw data in order to improve the classification task using a neural network classifier. Then, to overcome the problem of premature convergence and local sub-optimal areas when using the classical PSO optimization algorithm, an improved extension of the PSO algorithm is proposed. The basic idea behind this proposal is to use the Euclidean distance as a dissimilarity metric between observations, keeping a single observation in case of redundancy. In addition, the proposed reduced PSO-NN (RPSO-NN) technique not only enhances the results in terms of accuracy but also provides a significant reduction in computation time and storage cost by reducing the size of the training dataset and removing irrelevant and redundant samples. The experimental results showed the robustness and high performance of the proposed diagnosis paradigms.
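The Euclidean-distance reduction described above can be sketched as a greedy pass that keeps an observation only if it is sufficiently far from every observation already kept; the distance threshold is an assumed parameter, not one from the paper:

```python
import math

def remove_redundant(samples, threshold):
    """Drop observations whose Euclidean distance to an already-kept
    observation is below the threshold, keeping a single representative."""
    kept = []
    for x in samples:
        if all(math.dist(x, k) >= threshold for k in kept):
            kept.append(x)
    return kept
```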
APA, Harvard, Vancouver, ISO, and other styles
34

Noyce, Genevieve L., Mats B. Küssner, and Peter Sollich. "Quantifying Shapes: Mathematical Techniques for Analysing Visual Representations of Sound and Music." Empirical Musicology Review 8, no. 2 (October 24, 2013): 128. http://dx.doi.org/10.18061/emr.v8i2.3932.

Full text
Abstract:
Research on auditory-visual correspondences has a long tradition but innovative experimental paradigms and analytic tools are sparse. In this study, we explore different ways of analysing real-time visual representations of sound and music drawn by both musically-trained and untrained individuals. To that end, participants’ drawing responses captured by an electronic graphics tablet were analysed using various regression, clustering, and classification techniques. Results revealed that a Gaussian process (GP) regression model with a linear plus squared-exponential covariance function was able to model the data sufficiently, whereas a simpler GP was not a good fit. Spectral clustering analysis was the best of a variety of clustering techniques, though no strong groupings are apparent in these data. This was confirmed by variational Bayes analysis, which only fitted one Gaussian over the dataset. Slight trends in the optimised hyperparameters between musically-trained and untrained individuals allowed for the building of a successful GP classifier that differentiated between these two groups. In conclusion, this set of techniques provides useful mathematical tools for analysing real-time visualisations of sound and can be applied to similar datasets as well.
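The linear plus squared-exponential covariance the study found sufficient can be written out directly; the hyperparameter names below are illustrative, and the 1-D form is a simplification of the drawing data:

```python
import math

def lin_plus_se_kernel(x1, x2, sigma_lin=1.0, sigma_se=1.0, ell=1.0):
    """Linear plus squared-exponential GP covariance:
        k(x, x') = sigma_lin^2 * x * x'
                 + sigma_se^2 * exp(-(x - x')^2 / (2 * ell^2))"""
    linear = sigma_lin**2 * x1 * x2
    se = sigma_se**2 * math.exp(-((x1 - x2) ** 2) / (2 * ell**2))
    return linear + se

def covariance_matrix(xs, **kw):
    """Gram matrix of the kernel over a list of inputs."""
    return [[lin_plus_se_kernel(a, b, **kw) for b in xs] for a in xs]
```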
APA, Harvard, Vancouver, ISO, and other styles
35

Lapborisuth, Pawan, Sharath Koorathota, Qi Wang, and Paul Sajda. "Integrating neural and ocular attention reorienting signals in virtual reality." Journal of Neural Engineering 18, no. 6 (December 1, 2021): 066052. http://dx.doi.org/10.1088/1741-2552/ac4593.

Full text
Abstract:
Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG and pupil-based discriminating components. Mixed-effects general linear models (GLM) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil and dwell time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals were different across the two modalities, with EEG reorienting signals leading those of the pupil reorienting signals. We also found that the hybrid classifier that integrates EEG, pupil and dwell time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) condition. Significance.
We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but nevertheless can be captured and integrated to classify target vs. distractor objects to which the human subject orients.
APA, Harvard, Vancouver, ISO, and other styles
36

Cao, Yuan, Lin Yang, Zom Bo Fu, and Feng Yang. "Identity Management Architecture: Paradigms and Models." Applied Mechanics and Materials 40-41 (November 2010): 647–51. http://dx.doi.org/10.4028/www.scientific.net/amm.40-41.647.

Full text
Abstract:
This paper provides an overview of identity management architecture from the viewpoint of paradigms and models. The definition of identity management architecture is discussed; paradigms are classified by the development stage and core design principle of the architecture into the network-centric, service-centric, and user-centric paradigms, while models are grouped, by their varying components and changing functions, into the isolated, centralized, and federated models. These paradigms and models do not conflict with one another, as they are views of identity management from different viewpoints.
APA, Harvard, Vancouver, ISO, and other styles
37

Farias, Ana Paula Silva. "O ensino do empreendedorismo na educação básica representa um novo paradigma?" Revista Foco 11, no. 3 (October 18, 2018): 35. http://dx.doi.org/10.28950/1981-223x_revistafocoadm/2018.v11i3.577.

Full text
Abstract:
For many educators, the current education system focuses on the acquisition of knowledge, without concern for developing the specific skills needed to apply it in practice. In entrepreneurship education, by contrast, teaching is conceived as something beyond the transfer of information and knowledge: the teacher is responsible only for guiding the process of self-directed learning and for supporting students toward mastery of the situation they experience, helping each student achieve autonomy and strengthening their life project. 
To contribute to the discussion of this theme, the present work uses the sociological paradigms developed by Burrell and Morgan (1979) to classify the traditional teaching system and the proposed entrepreneurial education, answering the following question: into which sociological paradigm does entrepreneurial education fall, when compared with traditional education? This research is descriptive, with a qualitative approach to the problem, and used a literature review as its technical procedure. The main result is the classification of entrepreneurial education within the interpretivist paradigm, which represents a break with the current paradigm.
APA, Harvard, Vancouver, ISO, and other styles
38

Wan, Rongru, Yanqi Huang, and Xiaomei Wu. "Detection of Ventricular Fibrillation Based on Ballistocardiography by Constructing an Effective Feature Set." Sensors 21, no. 10 (May 19, 2021): 3524. http://dx.doi.org/10.3390/s21103524.

Full text
Abstract:
Ventricular fibrillation (VF) is a type of fatal arrhythmia that can cause sudden death within minutes, so the study of VF detection algorithms has important clinical significance. This study aimed to develop an algorithm for the automatic detection of VF based on signals related to cardiac mechanical activity, namely ballistocardiography (BCG), acquired by non-contact sensors. BCG signals, including VF, sinus rhythm, and motion artifacts, were collected through electric defibrillation experiments in pigs. Through autocorrelation and the S transform, a time-frequency graph with clear information on cardiac rhythmic activity was obtained, and a feature set of 13 elements was constructed for each 7 s segment after statistical analysis and hierarchical clustering. Then, a random forest classifier was used to distinguish VF from non-VF, and two evaluation paradigms, intra-patient and inter-patient, were used to assess performance. The results showed that sensitivity and specificity were 0.965 and 0.958 under 10-fold cross-validation, and 0.947 and 0.946 under leave-one-subject-out cross-validation. In conclusion, the proposed algorithm, combining feature extraction and machine learning, can effectively detect VF in BCG, laying a foundation for long-term self-monitoring of cardiac activity at home and for a real-time VF detection and alarm system.
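The inter-patient evaluation used in this study — leave-one-subject-out cross-validation of a random forest over a 13-element feature set — can be sketched as follows. The data below are synthetic stand-ins (subject counts, segment counts, and the injected class separation are all invented), not the pig BCG recordings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)

# Hypothetical 13-element feature vectors, one per 7 s BCG segment,
# drawn from 5 subjects; label 1 = VF, 0 = non-VF.
n_segments, n_features = 300, 13
X = rng.normal(size=(n_segments, n_features))
y = rng.integers(0, 2, n_segments)
X[y == 1] += 1.0                             # separable toy signal
subjects = rng.integers(0, 5, n_segments)    # subject id per segment

# Inter-patient paradigm: every fold holds out all segments of one subject.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=subjects).mean()
```

The intra-patient paradigm would instead use ordinary 10-fold cross-validation over all segments, letting segments from the same subject appear in both training and test folds.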
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Keun-Tae, Junhyuk Choi, Ji Hyeok Jeong, Hyungmin Kim, and Song Joo Lee. "High-Frequency Vibrating Stimuli Using the Low-Cost Coin-Type Motors for SSSEP-Based BCI." BioMed Research International 2022 (August 25, 2022): 1–10. http://dx.doi.org/10.1155/2022/4100381.

Full text
Abstract:
Steady-state somatosensory-evoked potential- (SSSEP-) based brain-computer interfaces (BCIs) have been applied to assist people with physical disabilities, since they require neither gaze fixation nor long training. Despite the advancement of various noninvasive electroencephalogram- (EEG-) based BCI paradigms, research on SSSEP across various frequency ranges, and on the related classification algorithms, remains relatively unsettled. In this study, we investigated the feasibility of classifying SSSEP under high-frequency vibration stimuli induced by a versatile coin-type eccentric rotating mass (ERM) motor. Seven healthy subjects performed selective attention (SA) tasks with vibration stimuli attached to the left and right index fingers. Three EEG feature extraction methods, followed by a support vector machine (SVM) classifier, were tested: common spatial pattern (CSP), filter-bank CSP (FBCSP), and mutual information-based best individual feature (MIBIF) selection after the FBCSP. The FBCSP showed the highest performance, at 71.5 ± 2.5%, for classifying the left- and right-hand SA tasks, compared with the other two methods (i.e., CSP and FBCSP-MIBIF). Based on our findings and approach, high-frequency vibration stimuli from low-cost coin motors combined with FBCSP-based feature selection can potentially be applied to developing practical SSSEP-based BCI systems.
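The CSP step in the pipeline above finds spatial filters that maximize the variance ratio between two classes, via a generalized eigendecomposition of the class-average covariance matrices; FBCSP repeats this per frequency band before feeding log-variance features to the SVM. A minimal sketch on synthetic trials (channel counts, trial counts, and the variance contrast are assumptions, not values from the paper):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP filters from two classes of (trials, channels, samples) data."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # Filters from both ends of the spectrum discriminate best.
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

n_ch, n_samp = 8, 256
a = rng.normal(size=(30, n_ch, n_samp))
b = rng.normal(size=(30, n_ch, n_samp))
b[:, 0] *= 3.0                      # class b has extra variance on channel 0
W = csp_filters(a, b)

# Log-variance of the spatially filtered trial: the classic CSP feature.
feats = np.log(np.var(W @ b[0], axis=1))
```

For FBCSP, one would band-pass each trial into several bands, compute filters per band, and concatenate the resulting log-variance features before the SVM (and MIBIF selection).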
APA, Harvard, Vancouver, ISO, and other styles
40

Casas, Manuel M., Roberto L. Avitia, Jose Antonio Cardenas-Haro, Jugal Kalita, Francisco J. Torres-Reyes, Marco A. Reyna, and Miguel E. Bravo-Zanoguera. "A Novel Unsupervised Computational Method for Ventricular and Supraventricular Origin Beats Classification." Applied Sciences 11, no. 15 (July 22, 2021): 6711. http://dx.doi.org/10.3390/app11156711.

Full text
Abstract:
Arrhythmias are the most common events tracked by a physician. The need for continuous monitoring of such events in the ECG has opened the opportunity for automatic detection. Intra- and inter-patient paradigms are the two approaches currently followed by the scientific community. The intra-patient approach seems to resolve the problem with a high classification percentage but requires a physician to label key samples. The inter-patient approach makes use of historical data from different patients to build a general classifier, but the inherent variability of the ECG signal among patients leads to lower classification percentages than the intra-patient approach. In this work, we propose a new unsupervised algorithm that adapts to every patient, using the heart rate and morphological features of the ECG beats to classify beats as of supraventricular or ventricular origin. The results of our work in terms of F-score are 0.88, 0.89, and 0.93 for ventricular origin beats on three popular ECG databases, and around 0.99 for supraventricular origin on the same databases, comparable to the supervised approaches presented in other works. This paper thus presents a new path toward using ECG data to classify heartbeats without the assistance of a physician, although improvements are still needed.
APA, Harvard, Vancouver, ISO, and other styles
41

Z. Salih, Nibras, and Walaa Khalaf. "ON THE USE OF MULTIPLE INSTANCE LEARNING FOR DATA CLASSIFICATION." Journal of Engineering and Sustainable Development 25, Special (September 20, 2021): 1–127. http://dx.doi.org/10.31272/jeasd.conf.2.1.15.

Full text
Abstract:
In the multiple instance learning framework, instances are arranged into bags; each bag contains several instances, and labels are available for bags but not for the individual instances. In single-instance learning, by contrast, each instance is a single feature vector associated with its own label. This paper examines the distinction between these paradigms to see whether it is appropriate to cast the problem within a multiple instance framework. In single-instance learning, two datasets are used (a students' dataset and the iris dataset) with the Naïve Bayes Classifier (NBC), Multilayer Perceptron (MLP), Support Vector Machine (SVM), and Sequential Minimal Optimization (SMO), while SimpleMI, MIWrapper, and MIBoost are used in multiple instance learning. Leave-One-Out Cross-Validation (LOOCV) and five- and ten-fold Cross-Validation (5-CV, 10-CV) are implemented to evaluate the classification results. Comparing the results of these techniques, several algorithms are found to be more effective for classification in the multiple instance setting. For the students' dataset, the most suitable combination is MIBoost with MLP under LOOCV, with an accuracy of 75%, whereas for the iris dataset it is SimpleMI with SMO under 10-CV, with an accuracy of 99.33%.
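Of the MIL methods named above, SimpleMI is the easiest to illustrate: it collapses each bag to a summary instance (here the bag mean), reducing the problem to ordinary single-instance learning. A toy sketch under the paper's LOOCV protocol — the bags, feature dimensions, and class signal below are invented for illustration:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Toy multiple-instance data: each bag holds several 4-D instances;
# only the bag carries a label. Positive bags contain one shifted instance.
def make_bag(label):
    inst = rng.normal(size=(rng.integers(3, 8), 4))
    if label:
        inst[0] += 4.0
    return inst

labels = np.array([i % 2 for i in range(40)])
bags = [make_bag(l) for l in labels]

# SimpleMI: represent each bag by its mean instance.
X = np.array([b.mean(axis=0) for b in bags])

# Leave-one-out cross-validation, as in the paper's LOOCV evaluation.
acc = cross_val_score(SVC(), X, labels, cv=LeaveOneOut()).mean()
```

MIWrapper and MIBoost instead propagate the bag label to every instance (with weighting or boosting), which this sketch does not attempt.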
APA, Harvard, Vancouver, ISO, and other styles
42

Lv, MingQi, Chao Huang, TieMing Chen, and Ting Wang. "A Collaborative Deep and Shallow Semisupervised Learning Framework for Mobile App Classification." Mobile Information Systems 2020 (February 14, 2020): 1–12. http://dx.doi.org/10.1155/2020/4521723.

Full text
Abstract:
With the rapid growth of mobile Apps, it is necessary to classify mobile Apps into predefined categories. However, two problems make this task challenging. First, the name of a mobile App is usually too short and ambiguous to reflect its real semantic meaning. Second, it is usually difficult to collect enough labeled samples to train a good classifier when a customized taxonomy of mobile Apps is required. For the first problem, we leverage Web knowledge to enrich the textual information of mobile Apps. For the second problem, the most widely utilized approach is semisupervised learning, which exploits unlabeled samples in a cotraining scheme. However, how to enhance the diversity between base learners, so as to maximize the power of the cotraining scheme, is still an open problem. To address this problem, we exploit totally different machine learning paradigms (i.e., shallow learning and deep learning) to ensure a greater degree of diversity. To this end, this paper proposes Co-DSL, a collaborative deep and shallow semisupervised learning framework, for mobile App classification using only a few labeled samples and a large number of unlabeled samples. The experimental results demonstrate the effectiveness of Co-DSL, which achieves over 85% classification accuracy using only two labeled samples from each mobile App category.
APA, Harvard, Vancouver, ISO, and other styles
43

Łukasik, Jakub. "Typology of fractional numerals in Turkic languages." Studia Linguistica Universitatis Iagellonicae Cracoviensis 139, no. 3 (March 28, 2022): 217–38. http://dx.doi.org/10.4467/20834624sl.22.011.16121.

Full text
Abstract:
This paper analyzes fractional numerals in Turkic languages and classifies them into seven types based on morphological criteria. These types are then divided into three paradigms, the Paradigm of Origin (PO), the Paradigm of being Inside (PI) and the Paradigm of Belonging (PB), according to the underlying logic of the constructions. The emergence of each paradigm is also discussed, the conclusion being that they are of different origin.
APA, Harvard, Vancouver, ISO, and other styles
44

Rissman, Jesse, Tiffany E. Chow, Nicco Reggente, and Anthony D. Wagner. "Decoding fMRI Signatures of Real-world Autobiographical Memory Retrieval." Journal of Cognitive Neuroscience 28, no. 4 (April 2016): 604–20. http://dx.doi.org/10.1162/jocn_a_00920.

Full text
Abstract:
Extant neuroimaging data implicate frontoparietal and medial-temporal lobe regions in episodic retrieval, and the specific pattern of activity within and across these regions is diagnostic of an individual's subjective mnemonic experience. For example, in laboratory-based paradigms, memories for recently encoded faces can be accurately decoded from single-trial fMRI patterns [Uncapher, M. R., Boyd-Meredith, J. T., Chow, T. E., Rissman, J., & Wagner, A. D. Goal-directed modulation of neural memory patterns: Implications for fMRI-based memory detection. Journal of Neuroscience, 35, 8531–8545, 2015; Rissman, J., Greely, H. T., & Wagner, A. D. Detecting individual memories through the neural decoding of memory states and past experience. Proceedings of the National Academy of Sciences, U.S.A., 107, 9849–9854, 2010]. Here, we investigated the neural patterns underlying memory for real-world autobiographical events, probed at 1- to 3-week retention intervals as well as whether distinct patterns are associated with different subjective memory states. For 3 weeks, participants (n = 16) wore digital cameras that captured photographs of their daily activities. One week later, they were scanned while making memory judgments about sequences of photos depicting events from their own lives or events captured by the cameras of others. Whole-brain multivoxel pattern analysis achieved near-perfect accuracy at distinguishing correctly recognized events from correctly rejected novel events, and decoding performance did not significantly vary with retention interval. Multivoxel pattern classifiers also differentiated recollection from familiarity and reliably decoded the subjective strength of recollection, of familiarity, or of novelty. Classification-based brain maps revealed dissociable neural signatures of these mnemonic states, with activity patterns in hippocampus, medial PFC, and ventral parietal cortex being particularly diagnostic of recollection. 
Finally, a classifier trained on previously acquired laboratory-based memory data achieved reliable decoding of autobiographical memory states. We discuss the implications for neuroscientific accounts of episodic retrieval and comment on the potential forensic use of fMRI for probing experiential knowledge.
APA, Harvard, Vancouver, ISO, and other styles
45

Campero-Jurado, Israel, Sergio Márquez-Sánchez, Juan Quintanar-Gómez, Sara Rodríguez, and Juan M. Corchado. "Smart Helmet 5.0 for Industrial Internet of Things Using Artificial Intelligence." Sensors 20, no. 21 (November 1, 2020): 6241. http://dx.doi.org/10.3390/s20216241.

Full text
Abstract:
Information and communication technologies (ICTs) have contributed to advances in Occupational Health and Safety, improving the security of workers. The use of Personal Protective Equipment (PPE) based on ICTs reduces the risk of accidents in the workplace, thanks to the capacity of the equipment to make decisions on the basis of environmental factors. Paradigms such as the Industrial Internet of Things (IIoT) and Artificial Intelligence (AI) make it feasible to generate PPE models and to create devices with more advanced characteristics, such as monitoring, environmental sensing, and risk detection, among others. The working environment is monitored continuously by these models, which notify the employees and their supervisors of any anomalies and threats. This paper presents a smart helmet prototype that monitors the conditions in the workers’ environment and performs a near real-time evaluation of risks. The data collected by the sensors are sent to an AI-driven platform for analysis. The training dataset consisted of 11,755 samples and 12 different scenarios. As part of this research, a comparative study of state-of-the-art supervised learning models is carried out. Moreover, the use of a Deep Convolutional Neural Network (ConvNet/CNN) is proposed for the detection of possible occupational risks. The data are processed to make them suitable for the CNN, and the results are compared against a Static Neural Network (NN), a Naive Bayes Classifier (NB) and a Support Vector Machine (SVM); the CNN achieved an accuracy of 92.05% in cross-validation.
APA, Harvard, Vancouver, ISO, and other styles
46

Vidal, Plácido L., Joaquim de Moura, Macarena Díaz, Jorge Novo, and Marcos Ortega. "Diabetic Macular Edema Characterization and Visualization Using Optical Coherence Tomography Images." Applied Sciences 10, no. 21 (October 31, 2020): 7718. http://dx.doi.org/10.3390/app10217718.

Full text
Abstract:
Diabetic Retinopathy and Diabetic Macular Edema (DME) represent one of the main causes of blindness in developed countries. They are characterized by fluid deposits in the retinal layers, causing a progressive vision loss over time. The clinical literature defines three DME types according to the texture and disposition of the fluid accumulations: Cystoid Macular Edema (CME), Diffuse Retinal Thickening (DRT) and Serous Retinal Detachment (SRD). Detecting each one is essential as, depending on their presence, the expert will decide on the adequate treatment of the pathology. In this work, we propose a robust detection and visualization methodology based on the analysis of independent image regions. We study a complete and heterogeneous library of 375 texture and intensity features in a dataset of 356 labeled images from two of the most used capture devices in the clinical domain: a CIRRUS™ HD-OCT 500 (Carl Zeiss Meditec) and, for 179 of the OCT images, a modular HRA + OCT SPECTRALIS® (Heidelberg Engineering, Inc.). We extracted 33,810 samples for each type of DME for the feature analysis and incremental training of four different classifier paradigms. This way, we achieved an 84.04% average accuracy for CME, 78.44% average accuracy for DRT and 95.40% average accuracy for SRD. These models are used to generate an intuitive visualization of the fluid regions. We use an image sampling and voting strategy, resulting in a system capable of detecting and characterizing the three types of DME, presenting them in an intuitive and repeatable way.
APA, Harvard, Vancouver, ISO, and other styles
47

Rassi, Elie, Andreas Wutz, Nadia Müller-Voggel, and Nathan Weisz. "Prestimulus feedback connectivity biases the content of visual experiences." Proceedings of the National Academy of Sciences 116, no. 32 (July 22, 2019): 16056–61. http://dx.doi.org/10.1073/pnas.1817317116.

Full text
Abstract:
Ongoing fluctuations in neural excitability and in networkwide activity patterns before stimulus onset have been proposed to underlie variability in near-threshold stimulus detection paradigms—that is, whether or not an object is perceived. Here, we investigated the impact of prestimulus neural fluctuations on the content of perception—that is, whether one or another object is perceived. We recorded neural activity with magnetoencephalography (MEG) before and while participants briefly viewed an ambiguous image, the Rubin face/vase illusion, and required them to report their perceived interpretation in each trial. Using multivariate pattern analysis, we showed robust decoding of the perceptual report during the poststimulus period. Applying source localization to the classifier weights suggested early recruitment of primary visual cortex (V1) and ∼160-ms recruitment of the category-sensitive fusiform face area (FFA). These poststimulus effects were accompanied by stronger oscillatory power in the gamma frequency band for face vs. vase reports. In prestimulus intervals, we found no differences in oscillatory power between face vs. vase reports in V1 or in FFA, indicating similar levels of neural excitability. Despite this, we found stronger connectivity between V1 and FFA before face reports for low-frequency oscillations. Specifically, the strength of prestimulus feedback connectivity (i.e., Granger causality) from FFA to V1 predicted not only the category of the upcoming percept but also the strength of poststimulus neural activity associated with the percept. Our work shows that prestimulus network states can help shape future processing in category-sensitive brain regions and in this way bias the content of visual experiences.
APA, Harvard, Vancouver, ISO, and other styles
48

Keeser, D. "The Effect of Prefrontal Transcranial Direct Current Stimulation on Resting State Functional Connectivity." European Psychiatry 41, S1 (April 2017): S33—S34. http://dx.doi.org/10.1016/j.eurpsy.2017.01.159.

Full text
Abstract:
Transcranial direct current stimulation (tDCS) of the prefrontal cortex (PFC) is currently being investigated as a therapeutic non-invasive brain stimulation (NIBS) approach in major depressive disorder (MDD) and other neuropsychiatric disorders. In both conditions, different subregions of the PFC (e.g. the dorsolateral prefrontal cortex, the dorsomedial prefrontal cortex and others) are critically involved in the respective pathophysiology. Although the neurophysiological properties of tDCS have been extensively investigated at the motor cortex level, the action of PFC tDCS on resting state and functional MRI connectivity of neural networks is largely unexplored. Beyond motor cortex paradigms, we aim to establish a model of PFC tDCS modulating functional connectivity in different conditions, to provide tailored tDCS protocols for clinical efficacy studies in major psychiatric disorders such as MDD and schizophrenia. One major obstacle in brain research is that patients present as individuals, not as groups. Recent research has shown that the within-subject variability of human functional MRI connectivity differs from the variability found between subjects. Several neuroimaging methods may be useful for finding a classifier that can be reliably used to predict NIBS effects. These neuroimaging methods include individual brain properties as well as the evaluation of state dependency. Anatomically targeted analyses of rTMS and tDCS in neuropsychiatric patients and healthy subjects have found promising results. By combining neuroimaging and NIBS, new functional models can be developed and compared across different states of health and pathology, e.g. in the development of any given psychiatric disorder. Disclosure of interest: Supported by the Federal Ministry of Research and Education (“Forschungsnetz für psychische Erkrankungen”, German Center for Brain Stimulation–GCBS–WP5).
APA, Harvard, Vancouver, ISO, and other styles
49

Shah, Syed Mohsin Ali, Syed Muhammad Usman, Shehzad Khalid, Ikram Ur Rehman, Aamir Anwar, Saddam Hussain, Syed Sajid Ullah, Hela Elmannai, Abeer D. Algarni, and Waleed Manzoor. "An Ensemble Model for Consumer Emotion Prediction Using EEG Signals for Neuromarketing Applications." Sensors 22, no. 24 (December 12, 2022): 9744. http://dx.doi.org/10.3390/s22249744.

Full text
Abstract:
Traditional advertising techniques seek to govern the consumer’s opinion toward a product, which may not reflect their actual behavior at the time of purchase. It is probable that advertisers misjudge consumer behavior because predicted opinions do not always correspond to consumers’ actual purchase behaviors. Neuromarketing is the new paradigm for understanding customer buying behavior and decision making, as well as for predicting their gestures for product utilization, through an unconscious process. Existing methods do not focus on effective preprocessing and classification techniques for electroencephalogram (EEG) signals, so in this study an effective method for the preprocessing and classification of EEG signals is proposed. The proposed method involves effective preprocessing of EEG signals by removing noise, and a synthetic minority oversampling technique (SMOTE) to deal with the class imbalance problem. The dataset employed in this study is a publicly available neuromarketing dataset. Automated features were extracted using a long short-term memory network (LSTM) and then concatenated with handcrafted features like power spectral density (PSD) and discrete wavelet transform (DWT) to create a complete feature set. The classification was done using the proposed hybrid classifier, which optimizes the weights of two machine learning classifiers and one deep learning classifier and classifies the data as like or dislike. The machine learning classifiers are the support vector machine (SVM) and random forest (RF), and the deep learning classifier is a deep neural network (DNN). The proposed hybrid model outperforms the individual classifiers (RF, SVM, and DNN) and achieves an accuracy of 96.89%. Accuracy, sensitivity, specificity, precision, and F1 score were computed to evaluate and compare the proposed method with recent state-of-the-art methods.
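The hybrid step — weighting each base classifier's class probabilities before a soft vote — can be sketched as follows. This is a guessed, minimal interpretation (accuracy-based weights fitted on a held-out split, with synthetic features standing in for the PSD/DWT/LSTM feature set), not the authors' actual weight-optimization procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Synthetic stand-in for the EEG feature set; label 1 = "like", 0 = "dislike".
X = rng.normal(size=(400, 12))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X_train, y_train = X[:250], y[:250]
X_val, y_val = X[250:320], y[250:320]       # held-out split to fit the weights
X_test, y_test = X[320:], y[320:]

models = [SVC(probability=True, random_state=0),
          RandomForestClassifier(n_estimators=100, random_state=0),
          MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)]
for m in models:
    m.fit(X_train, y_train)

# Hybrid rule: weight each model's class probabilities by its validation
# accuracy, then take the class with the highest weighted probability.
weights = np.array([m.score(X_val, y_val) for m in models])
weights /= weights.sum()
proba = sum(w * m.predict_proba(X_test) for w, m in zip(weights, models))
acc = (proba.argmax(axis=1) == y_test).mean()
```

A separate validation split is used so the weights are not fitted on the test data; with SMOTE, only the training split would be oversampled.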
APA, Harvard, Vancouver, ISO, and other styles
50

Geng, Tao, John Q. Gan, Matthew Dyson, Chun SL Tsui, and Francisco Sepulveda. "A Novel Design of 4-Class BCI Using Two Binary Classifiers and Parallel Mental Tasks." Computational Intelligence and Neuroscience 2008 (2008): 1–5. http://dx.doi.org/10.1155/2008/437306.

Full text
Abstract:
A novel 4-class single-trial brain computer interface (BCI) based on two (rather than four or more) binary linear discriminant analysis (LDA) classifiers is proposed, which is called a “parallel BCI.” Unlike other BCIs where mental tasks are executed and classified in a serial way one after another, the parallel BCI uses properly designed parallel mental tasks that are executed on both sides of the subject body simultaneously, which is the main novelty of the BCI paradigm used in our experiments. Each of the two binary classifiers only classifies the mental tasks executed on one side of the subject body, and the results of the two binary classifiers are combined to give the result of the 4-class BCI. Data was recorded in experiments with both real movement and motor imagery in 3 able-bodied subjects. Artifacts were not detected or removed. Offline analysis has shown that, in some subjects, the parallel BCI can generate a higher accuracy than a conventional 4-class BCI, although both of them have used the same feature selection and classification algorithms.
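The core combination rule of the parallel BCI — two binary LDA outputs fused into one 4-class decision — can be sketched with synthetic per-side features. Everything below (feature dimensions, class separation, trial counts) is illustrative, not taken from the paper's EEG data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)

# One feature vector per body side per trial; each side is in a binary state.
n = 400
left = rng.integers(0, 2, n)
right = rng.integers(0, 2, n)
X_left = rng.normal(size=(n, 6)) + 1.5 * left[:, None]
X_right = rng.normal(size=(n, 6)) + 1.5 * right[:, None]

# One binary LDA per side, as in the parallel-BCI design.
lda_left = LinearDiscriminantAnalysis().fit(X_left[:300], left[:300])
lda_right = LinearDiscriminantAnalysis().fit(X_right[:300], right[:300])

# Fuse the two binary decisions into one of 4 classes: 2 * left + right.
pred4 = 2 * lda_left.predict(X_left[300:]) + lda_right.predict(X_right[300:])
true4 = 2 * left[300:] + right[300:]
acc = (pred4 == true4).mean()
```

Note the design property the abstract highlights: the 4-class accuracy is roughly the product of the two binary accuracies, so each classifier only ever solves a 2-class problem.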
APA, Harvard, Vancouver, ISO, and other styles
