Contents

  1. Dissertations / Theses

Academic literature on the topic "Hybrid classifier"

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Hybrid classifier".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a pdf and read its abstract online whenever it is available in the metadata.

Dissertations / Theses on the topic "Hybrid classifier"

1

Vishnampettai, Sridhar Aadhithya. "A Hybrid Classifier Committee Approach for Microarray Sample Classification." University of Akron / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=akron1312341058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nair, Sujit S. "Coarse Radio Signal Classifier on a Hybrid FPGA/DSP/GPP Platform." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/76934.

Full text
Abstract
The Virginia Tech Universal Classifier Synchronizer (UCS) system can enable a cognitive receiver to detect, classify and extract all the parameters needed from a received signal for physical layer demodulation, and configure a cognitive radio accordingly. Currently, UCS can process analog amplitude modulation (AM) and frequency modulation (FM), digital narrowband M-PSK and M-QAM, and wideband orthogonal frequency division multiplexing (OFDM) signals. A fully developed prototype of the UCS system was designed and implemented in our laboratory using the GNU Radio software platform and the Universal Software Radio Peripheral (USRP) radio platform. That system introduces considerable latency because of the limited USB data transfer speed between the USRP and the host computer. Also, there are inherent latencies and timing uncertainties in the General Purpose Processor (GPP) software itself. Solving the timing and latency problems requires running key parts of the software-defined radio (SDR) code on a hybrid Field Programmable Gate Array (FPGA)/Digital Signal Processor (DSP)/GPP platform. Our objective is to port the entire UCS system to the Lyrtech SFF SDR platform, a hybrid DSP/FPGA/GPP platform. Since the FPGA allows parallel processing of a wideband signal, its computing speed is substantially higher than that of GPPs and most DSPs, which process signals sequentially. In addition, the Lyrtech Small Form Factor (SFF) SDR development platform integrates the FPGA and the RF module on one platform, which further reduces the latency in moving signals from the RF front end to the computing component. Also, for UCS to be commercially viable, it needs to be ported to a more portable platform that can be transitioned to a handset radio in the future. This thesis is a proof-of-concept implementation of the coarse classifier, which is the first step of classification. Both fixed-point and floating-point implementations are developed, and no compiler-specific or vendor-specific libraries are used. This makes it possible to transition the design to other vendors' hardware, such as GPPs and DSPs, without changing the basic framework and design. (Master of Science)
APA, Harvard, Vancouver, ISO, and other styles
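A first-stage coarse classifier of the kind this abstract describes is often built from simple envelope and instantaneous-frequency statistics. The sketch below is a hypothetical illustration of that style of coarse binning, not the thesis's UCS algorithm or its fixed-point FPGA port; the feature thresholds are invented for illustration.

```python
# Hypothetical coarse signal binning from envelope and frequency statistics.
# Thresholds are assumptions for illustration, not values from the thesis.
import numpy as np
from scipy.signal import hilbert

def coarse_classify(x, fs):
    """Coarsely bin a received signal as AM-like, FM-like, or other."""
    z = hilbert(x)                          # analytic signal
    env = np.abs(z)
    env_var = np.var(env / np.mean(env))    # normalised envelope variance
    inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
    freq_spread = np.std(inst_freq) / fs    # normalised frequency spread
    if env_var > 0.1 and freq_spread < 0.01:   # strong envelope modulation
        return "AM-like"
    if env_var < 0.01 and freq_spread > 0.01:  # constant envelope, varying freq
        return "FM-like"
    return "other"                          # hand off to finer classification
```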
3

Zimit, Sani Ibrahim. "Hybrid approach to interpretable multiple classifier system for intelligent clinical decision support." Thesis, University of Reading, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.631699.

Full text
Abstract
Data-driven decision support approaches have been increasingly employed in recent years to unveil useful diagnostic and prognostic patterns from data accumulated in clinical repositories. Given the diverse evidence generated through everyday clinical practice and the exponential growth in the number of parameters accumulated in the data, the capability of finding purposeful, task-oriented patterns in patient records is crucial for effective healthcare delivery. The application of classification decision support tools in clinical settings has brought about formidable challenges that require a robust system. Knowledge Discovery in Databases (KDD) provides a viable solution to decipher implicit knowledge in a given context. KDD classification techniques create models of the accumulated data according to induction algorithms. Despite the availability of numerous classification techniques, the accuracy and interpretability of the decision model are fundamental in the decision processes. Multiple Classifier Systems (MCS), based on the aggregation of individual classifiers, usually achieve better decision accuracy. The downside of such models is their black-box nature, whereas a description of the clinical concepts that influence each decision outcome is fundamental in clinical settings. To overcome this deficiency, the use of artificial data is one technique advocated by researchers to extract an interpretable classifier that mimics the MCS. In the clinical context, practical utilisation of the mimetic procedure depends on the appropriateness of the data generation method to reflect the complexities of the evidence domain. A well-defined, intelligent data generation method is required to unveil associations and dependency relationships between the various entities of the evidence domain. This thesis has devised an Interpretable Multiple Classifier system (IMC) using the KDD process as the underlying platform. The approach integrates the flexibility of MCS, the robustness of the Bayesian network (BN) and the concept of the mimetic classifier to build an interpretable classification system. The BN provides a robust and clinically accepted formalism to generate synthetic data based on encoded joint relationships of the evidence space. The practical applicability of the IMC was evaluated against the conventional approach to inducing an interpretable classifier on nine clinical domain problems. Results of statistical tests substantiated that the IMC model outperforms the direct approach in terms of decision accuracy.
APA, Harvard, Vancouver, ISO, and other styles
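The mimetic procedure this abstract describes can be summarised in a few lines: an opaque ensemble labels synthetic records, and an interpretable model is fitted to those labels. The sketch below is a simplified illustration; the thesis draws synthetic records from a Bayesian network encoding the evidence domain, whereas here a naive Gaussian jitter of training rows stands in for that generator.

```python
# Minimal sketch of a mimetic interpretable classifier: an opaque ensemble
# acts as an oracle on synthetic data, and a shallow decision tree mimics it.
# The Gaussian resampling below is a stand-in for the thesis's BN sampler.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
ensemble = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Synthetic data: jittered resamples of training rows (BN sampling stand-in).
rng = np.random.default_rng(0)
rows = rng.choice(len(X), size=5000)
X_syn = X[rows] + rng.normal(scale=0.05 * X.std(axis=0), size=(5000, X.shape[1]))
y_syn = ensemble.predict(X_syn)          # the black-box ensemble labels them

mimic = DecisionTreeClassifier(max_depth=4).fit(X_syn, y_syn)
print("fidelity to ensemble:", (mimic.predict(X) == ensemble.predict(X)).mean())
```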
4

Lou, Wan Chan. "A hybrid model of tree classifier and neural network for university admission recommender system." Thesis, University of Macau, 2008. http://umaclib3.umac.mo/record=b1783609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Toubakh, Houari. "Automated on-line early fault diagnosis of wind turbines based on hybrid dynamic classifier." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10100/document.

Full text
Abstract
This thesis addresses the problem of automatic detection and isolation of drift-like faults in wind turbines (WTs). Its main aim is to develop a generic, on-line, adaptive machine learning and data mining scheme that integrates a drift detection and isolation mechanism in order to diagnose single and multiple drift-like faults in WTs, in particular in the pitch system and the power converter. The proposed scheme is based on the decomposition of the wind turbine into several components. A classifier is then designed and used to diagnose the faults impacting each component. The goal of this decomposition is to facilitate fault isolation and to increase the robustness of the scheme, in the sense that when the classifier related to one component fails, the classifiers for the other components continue to diagnose the faults in their corresponding components. The scheme also has the advantage of taking the WT's hybrid dynamics into account: some WT components (such as the pitch system and the power converter) exhibit both discrete and continuous dynamic behaviours, and in each discrete mode, or configuration, different continuous dynamics are defined.
APA, Harvard, Vancouver, ISO, and other styles
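The component-wise decomposition argued for above is easy to illustrate: one independent detector per subsystem, so a failed monitor does not disable the others. In the sketch below, a generic z-score test on residuals stands in for the thesis's hybrid dynamic classifier; window sizes and the threshold are assumptions.

```python
# Illustrative per-component drift monitoring for a wind turbine. The z-score
# test on residuals is a generic stand-in, not the classifier of the thesis.
from collections import deque
import numpy as np

class DriftMonitor:
    def __init__(self, window=200, threshold=4.0):
        self.ref = deque(maxlen=window)     # residuals from healthy behaviour
        self.recent = deque(maxlen=window)  # sliding window of fresh residuals
        self.threshold = threshold

    def calibrate(self, residual):
        self.ref.append(residual)           # collect healthy reference data

    def update(self, residual):
        self.recent.append(residual)
        if len(self.ref) < 50 or len(self.recent) < 50:
            return False                    # not enough data yet
        mu, sd = np.mean(self.ref), np.std(self.ref) + 1e-12
        z = abs(np.mean(self.recent) - mu) / (sd / np.sqrt(len(self.recent)))
        return z > self.threshold           # True flags a drift-like fault

# One independent monitor per turbine component.
monitors = {"pitch": DriftMonitor(), "converter": DriftMonitor()}
```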
6

Rasheed, Sarbast. "A Multiclassifier Approach to Motor Unit Potential Classification for EMG Signal Decomposition." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/934.

Full text
Abstract
EMG signal decomposition is the process of resolving a composite EMG signal into its constituent motor unit potential trains (classes), and it can be configured as a classification problem. An EMG signal detected by the tip of an inserted needle electrode is the superposition of the individual electrical contributions of the different motor units that are active during a muscle contraction, plus background interference.

This thesis addresses the process of EMG signal decomposition by developing an interactive classification system, which uses multiple classifier fusion techniques in order to achieve improved classification performance. The developed system combines heterogeneous sets of base classifier ensembles of different kinds and employs either a one-level classifier fusion scheme or a hybrid classifier fusion approach.

The hybrid classifier fusion approach is applied as a two-stage combination process that uses a new aggregator module consisting of two combiners, used in a complementary manner: the first at the abstract level of classifier fusion and the second at the measurement level. Both combiners may be data independent, or the first may be data independent and the second data dependent. For the purpose of experimentation, we used the majority voting scheme as the first combiner; as the second combiner we used either one of the fixed combination rules, behaving as a data-independent combiner, or the fuzzy integral with the lambda-fuzzy measure, as an implicit data-dependent combiner.

Once the set of motor unit potential trains is generated by the classifier fusion system, the firing pattern consistency statistics for each train are calculated to detect classification errors in an adaptive fashion. This firing pattern analysis allows the algorithm to modify the threshold of assertion required for assignment of a motor unit potential classification individually for each train, based on an expectation of erroneous assignments.

The classifier ensembles consist of a set of different versions of the Certainty classifier; a set of classifiers based on the nearest-neighbour decision rule, namely the fuzzy k-NN and the adaptive fuzzy k-NN classifiers; and a set of classifiers that use a correlation measure as an estimate of the degree of similarity between a pattern and a class template, namely the matched template filter classifier and its adaptive counterpart. The base classifiers, besides being of different kinds, utilize different types of features, and their performance was investigated using both real and simulated EMG signals of different complexities. The extracted feature sets include time-domain data, first- and second-order discrete derivative data, and wavelet-domain data.

Following the so-called overproduce-and-choose strategy for classifier ensemble combination, the developed system allows the construction of a large set of candidate base classifiers and then chooses, from the base classifier pool, subsets of a specified number of classifiers to form candidate classifier ensembles. The system then selects the classifier ensemble having the maximum degree of agreement by exploiting a diversity measure for designing classifier teams. The kappa statistic is used as the diversity measure to estimate the level of agreement between the base classifier outputs, i.e., to measure the degree of decision similarity between the base classifiers. This mechanism of choosing the team's classifiers by assessing classifier agreement across all the trains and the unassigned category is applied for the one-level classifier fusion scheme and for the first combiner in the hybrid classifier fusion approach. For the second combiner in the hybrid approach, team classifiers are also chosen based on the kappa statistic, but by assessing agreement only across the unassigned category and choosing those base classifiers having the minimum agreement.

The performance of the developed classifier fusion system, in both of its variants, i.e., the one-level scheme and the hybrid approach, was evaluated using synthetic simulated signals of known properties as well as real signals, and then compared with the performance of the constituent base classifiers. Across the EMG signal data sets used, the hybrid approach had better average classification performance overall, especially in terms of reducing the number of classification errors.
APA, Harvard, Vancouver, ISO, and other styles
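The two-stage hybrid fusion idea in this abstract can be sketched directly: an abstract-level majority vote decides clear-cut patterns, and a measurement-level fixed rule (the mean rule, one of the classical fixed combiners) resolves the rest; the kappa statistic measures pairwise diversity when choosing team members. The fall-through condition is an assumption for illustration, and the fuzzy-integral variant of the second combiner is not shown.

```python
# Sketch of two-stage hybrid classifier fusion plus a kappa diversity measure.
import numpy as np

def hybrid_fuse(posteriors):
    """posteriors: array (n_classifiers, n_classes) for one pattern."""
    votes = np.argmax(posteriors, axis=1)           # abstract-level labels
    labels, counts = np.unique(votes, return_counts=True)
    if counts.max() > len(votes) / 2:               # clear majority decides
        return int(labels[np.argmax(counts)])
    return int(np.argmax(posteriors.mean(axis=0)))  # mean-rule fallback

def kappa(a, b, n_classes):
    """Cohen's kappa between two classifiers' label outputs (diversity)."""
    po = np.mean(a == b)                            # observed agreement
    pe = sum(np.mean(a == k) * np.mean(b == k) for k in range(n_classes))
    return (po - pe) / (1 - pe + 1e-12)             # chance-corrected
```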
7

McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16436/1/Christopher_McCool_Thesis.pdf.

Full text
Abstract
Face verification is a challenging pattern recognition problem. The face is a biometric that we, as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years, methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance- or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance, Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, and so this research examines two methods for overcoming the data limitation: 1. the use of holistic difference vectors of the face, and 2. dividing the 3D face into Free-Parts. Permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this is referred to as the Free-Parts approach. The extra observations from both techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling, respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined.
This thesis also examines methods for performing classifier score fusion, which attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms to represent the same face data (multi-algorithm fusion), for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts), while multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed, and it is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
APA, Harvard, Vancouver, ISO, and other styles
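The multi-modal score fusion and the FRR-at-fixed-FAR figures quoted above have a compact expression in code. The sketch below assumes an equal-weight sum rule as a stand-in for the thesis's hybrid fusion framework; the weight `w` and variable names are illustrative.

```python
# Sketch of multi-modal score fusion and the FRR-at-fixed-FAR metric.
import numpy as np

def fuse_scores(s2d, s3d, w=0.5):
    """Weighted-sum fusion of 2D and 3D match scores (higher = same person).
    Equal weighting is an assumption, not the thesis's tuned combination."""
    return w * s2d + (1 - w) * s3d

def frr_at_far(genuine, impostor, target_far=0.001):
    """False Rejection Rate at the threshold that yields the target FAR.
    genuine/impostor: arrays of fused scores for matched/mismatched pairs."""
    thresh = np.quantile(impostor, 1.0 - target_far)  # accept if score > thresh
    return float(np.mean(genuine <= thresh))          # rejected genuine pairs
```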
8

Al-Ani, Ahmed Karim. "An improved pattern classification system using optimal feature selection, classifier combination, and subspace mapping techniques." Thesis, Queensland University of Technology, 2002.

Search for the full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ala'raj, Maher A. "A credit scoring model based on classifiers consensus system approach." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13669.

Full text
Abstract
Managing customer credit is an important issue for every commercial bank; therefore, banks take great care when dealing with customer loans to avoid any improper decisions that can lead to loss of opportunity or financial losses. Manual estimation of customer creditworthiness has become both time- and resource-consuming. Moreover, a manual approach is subjective (dependent on the bank employee who gives the estimation), which is why devising and implementing programming models that provide loan estimations is the only way of eradicating the 'human factor' in this problem. Such a model should recommend to the bank whether or not a loan should be given, or otherwise give a probability that the loan will be repaid. Nowadays, a number of models have been designed, but there is no ideal classifier amongst them, since each gives some percentage of incorrect outputs; this is a critical consideration when each percent of incorrect answers can mean millions of dollars of losses for large banks. Nevertheless, logistic regression (LR) remains the industry-standard tool for developing credit-scoring models. For this purpose, an investigation is carried out into combining the most efficient classifiers in the credit-scoring scope, in an attempt to produce a classifier that exceeds each of its component classifiers. In this work, a fusion model referred to as 'the Classifiers Consensus Approach' is developed, which performs considerably better than each of the single classifiers that constitute it. The difference between the consensus approach and the majority of other combiners lies in the fact that the consensus approach adopts a model of real expert-group behaviour during the process of finding the consensus (aggregate) answer. The consensus model is compared not only with single classifiers, but also with traditional combiners and a rather complex combiner model known as the 'Dynamic Ensemble Selection' approach. As pre-processing techniques, stepwise data filtering (selecting training entries that fit the input data well and removing outliers and noisy data) and feature selection (removing useless and statistically insignificant features whose values are weakly correlated with the real quality of the loan) are used. These techniques significantly improve the results of the consensus approach. The results clearly show that the consensus approach is statistically better (with 95% confidence, according to the Friedman test) than any other single classifier or combiner analysed; this means that, for similar datasets, there is a 95% guarantee that the consensus approach will outperform all other classifiers. The consensus approach gives not only the best accuracy, but also better AUC value, Brier score and H-measure for almost all datasets investigated in this thesis. Moreover, it outperformed logistic regression. Thus, it has been shown that the use of the consensus approach for credit scoring is justified and can be recommended to commercial banks. Along with the consensus approach, the dynamic ensemble selection approach is analysed; the results show that, under some conditions, it can rival the consensus approach. Its strengths include its stability and high accuracy on various datasets.
The consensus approach improved in this work may be considered by banks whose datasets share the characteristics of those used here, where its utilisation could decrease the level of mistakenly rejected loans of solvent customers and the level of mistakenly accepted loans that will never be repaid. Furthermore, the consensus approach is a notable step towards building a universal classifier that can fit data of any structure. Another advantage of the consensus approach is its flexibility; even if the input data changes for various reasons, the consensus approach can easily be re-trained and used with the same performance.
APA, Harvard, Vancouver, ISO, and other styles
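The abstract describes the consensus approach as imitating how a real expert group converges on a joint answer. The sketch below is a generic reconstruction of that idea, not the thesis's exact procedure: member weights are iteratively re-estimated from each classifier's agreement with the current aggregate opinion.

```python
# Rough sketch of consensus-style aggregation of credit-scoring classifiers:
# weights are re-estimated from agreement with the current group opinion.
# A generic reconstruction; the iteration count and closeness measure are
# assumptions, not the thesis's specification.
import numpy as np

def consensus(probs, iters=20):
    """probs: array (n_classifiers, n_samples) of P(default) estimates."""
    w = np.full(len(probs), 1.0 / len(probs))      # start from equal weights
    for _ in range(iters):
        aggregate = w @ probs                      # current group opinion
        closeness = 1.0 / (np.mean((probs - aggregate) ** 2, axis=1) + 1e-9)
        w = closeness / closeness.sum()            # reward agreeing members
    return w @ probs                               # consensus scores
```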
More sources
