To see the other types of publications on this topic, follow the link: Fuzzy Support Vector Machine (FSVM).

Dissertations / Theses on the topic 'Fuzzy Support Vector Machine (FSVM)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 32 dissertations / theses for your research on the topic 'Fuzzy Support Vector Machine (FSVM).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Kannan, Anand. "Performance evaluation of security mechanisms in Cloud Networks." Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-99464.

Full text
Abstract:
Infrastructure as a Service (IaaS) is a cloud service provisioning model which largely focuses on data centre provisioning of computing and storage facilities. The networking aspects of IaaS beyond the data centre are a limiting factor preventing communication services that are sensitive to network characteristics from adopting this approach. Cloud networking is a new technology which integrates network provisioning with the existing cloud service provisioning models, thereby completing the cloud computing picture by addressing the networking aspects. In cloud networking, shared network resources are virtualized and provisioned to customers and end-users on demand in an elastic fashion. This technology allows various kinds of optimization, e.g., reducing latency and network load. Further, it allows service providers to offer network performance guarantees as a part of their service offering. However, this new approach introduces new security challenges. Many of these security challenges are addressed in the CloNe security architecture. This thesis presents a set of potential techniques for securing different resources in a cloud network environment which are not addressed in the existing CloNe security architecture. The thesis begins with a holistic view of cloud networking, as described in the Scalable and Adaptive Internet Solutions (SAIL) project, along with its proposed architecture and security goals. This is followed by an overview of the problems that need to be solved and some of the different methods that can be applied to solve parts of the overall problem, specifically a comprehensive, tightly integrated, and multi-level security architecture, a key management algorithm to support the access control mechanism, and an intrusion detection mechanism. For each method or set of methods, the respective state of the art is presented.
Additionally, experiments to understand the performance of these mechanisms are evaluated on a simple cloud network test bed. The proposed key management scheme uses a hierarchical key management approach that provides fast and secure key updates when member join and member leave operations are carried out. Experiments show that the proposed key management scheme enhances security and increases availability and integrity. A newly proposed genetic-algorithm-based feature selection technique has been employed for effective feature selection, and a fuzzy SVM has been used on the data set for effective classification. Experiments have shown that the proposed genetic-algorithm-based feature selection reduces the number of features and hence decreases the classification time, while improving the detection accuracy of the fuzzy SVM classifier by minimizing the conflicting rules that may confuse the classifier. The main advantages of this intrusion detection system are the reduction in false positives and increased security.
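The fuzzy SVM in the abstract above down-weights suspect training points so they influence the decision boundary less. A minimal NumPy sketch of one classic membership assignment (distance to the class centroid); the function name and the linear decay formula are illustrative assumptions, not taken from the thesis. The resulting weights could then be passed to any soft-margin SVM that accepts per-sample weights.

```python
import numpy as np

def fuzzy_memberships(X, y, delta=1e-6):
    """Assign each training point a membership in (0, 1]: points far
    from their own class centroid (likely outliers/noise) get smaller
    weights, so they pull less on the SVM decision boundary."""
    m = np.empty(len(y), dtype=float)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centroid, axis=1)
        r = d.max() + delta             # class radius
        m[idx] = 1.0 - d / (r + delta)  # linear decay toward the rim
    return m

# Toy example: one obvious outlier in class 0
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [3.0, 3.1], [3.1, 2.9]])
y = np.array([0, 0, 0, 1, 1])
w = fuzzy_memberships(X, y)
# The outlier [5, 5] receives the smallest membership in class 0
```

With scikit-learn, for instance, such weights could be supplied via the `sample_weight` argument of `SVC.fit`.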
APA, Harvard, Vancouver, ISO, and other styles
2

Benbrahim, Houda. "A fuzzy semi-supervised support vector machine approach to hypertext categorization." Thesis, University of Portsmouth, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494145.

Full text
Abstract:
As the web expands exponentially, the need to put some order to its content becomes apparent. Hypertext categorization, that is, the automatic classification of web documents into predefined classes, emerged to relieve humans of that task. The extra information available in a hypertext document poses new challenges for automatic categorization. HTML tags and the linked neighbourhood all provide rich information for hypertext categorization that is not available in traditional text classification.
APA, Harvard, Vancouver, ISO, and other styles
3

Uslan, Volkan. "Support vector machine-based fuzzy systems for quantitative prediction of peptide binding affinity." Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/11170.

Full text
Abstract:
Reliable prediction of the binding affinity of peptides is one of the most challenging but important complex modelling problems in the post-genome era, due to the diversity and functionality of the peptides discovered. Peptide binding prediction models are commonly used to find out whether a binding exists between certain peptides and a major histocompatibility complex (MHC) molecule. Recent research efforts have focused on quantifying the binding predictions. The objective of this thesis is to develop reliable real-value predictive models through the use of fuzzy systems. A non-linear system is proposed with the aid of support-vector-based regression to improve the fuzzy system, and applied to the real-value prediction of the degree of peptide binding. This research study introduces two novel methods to improve the structure and parameter identification of fuzzy systems. First, support-vector-based regression is used to identify initial parameter values of the consequent part of type-1 and interval type-2 fuzzy systems. Second, an overlapping clustering concept is used to derive interval-valued parameters of the premise part of the type-2 fuzzy system. Publicly available peptide binding affinity data sets obtained from the literature are used in the experimental studies of this thesis. First, the proposed models are blind validated using the peptide binding affinity data sets obtained from a modelling competition. In that competition, almost an equal number of peptide sequences in the training and testing data sets (89, 76, 133 and 133 peptides for training and 88, 76, 133 and 47 peptides for testing) were provided to the participants. Each peptide in the data sets was represented by 643 biochemical descriptors assigned to each amino acid. Second, the proposed models are cross validated using mouse class I MHC alleles (H2-Db, H2-Kb and H2-Kk).
H2-Db, H2-Kb, and H2-Kk consist of 65 nona-peptides, 62 octa-peptides, and 154 octa-peptides, respectively. Compared to the previously published results in the literature, the support-vector-based type-1 and support-vector-based interval type-2 fuzzy models yield an improvement in prediction accuracy. The quantitative predictive performance has been improved by as much as 33.6% for the first group of data sets and 1.32% for the second group of data sets. The proposed models not only improved the performance of the fuzzy system (which used support-vector-based regression), but the support-vector-based regression also benefited from the fuzzy concept. The results obtained here set the platform for the presented models to be considered for other application domains in computational and/or systems biology. Apart from improving the prediction accuracy, this research study has also identified specific features which play a key role in making reliable peptide binding affinity predictions. The amino acid features "Polarity", "Positive charge", "Hydrophobicity coefficient", and "Zimm-Bragg parameter" are considered highly discriminating features in the peptide binding affinity data sets. This information can be valuable in the design of peptides with strong binding affinity to an MHC I molecule, and may also be useful when designing drugs and vaccines.
APA, Harvard, Vancouver, ISO, and other styles
4

OLIVEIRA, A. B. "Modelo de Predição para análise comparativa de Técnicas Neuro-Fuzzy e de Regressão." Universidade Federal do Espírito Santo, 2010. http://repositorio.ufes.br/handle/10/4218.

Full text
Abstract:
Prediction models implemented by machine learning algorithms, a line of research within Computational Intelligence, result from research and empirical investigation on real-world data. In this context, such models are built to compare two major machine learning techniques, neuro-fuzzy networks and regression, applied with the aim of estimating a product quality parameter in an industrial environment under a continuous process. These prediction models are applied and compared heuristically in the same simulation environment, in order to measure their goodness of fit and their performance and generalization over the empirical data that make up this scenario (an industrial mining environment).
APA, Harvard, Vancouver, ISO, and other styles
5

Abo, Al Ahad George, and Abbas Salami. "Machine Learning for Market Prediction : Soft Margin Classifiers for Predicting the Sign of Return on Financial Assets." Thesis, Linköpings universitet, Produktionsekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151459.

Full text
Abstract:
Forecasting procedures have found applications in a wide variety of areas within finance and have further shown to be one of the most challenging areas of finance. Having an immense variety of economic data, stakeholders aim to understand the current and future state of the market. Since it is hard for a human to make sense of large amounts of data, different modeling techniques have been applied to extract useful information from financial databases, where machine learning techniques are among the most recent modeling techniques. Binary classifiers such as Support Vector Machines (SVMs) have to some extent been used for this purpose, and extensions of the algorithm have been developed with increased prediction performance as the main goal. The objective of this study has been to develop a process for improving the performance when predicting the sign of return of financial time series with soft margin classifiers. An analysis regarding the algorithms is presented in this study, followed by a description of the methodology that has been utilized. The developed process, containing some of the presented soft margin classifiers and other aspects of kernel methods such as Multiple Kernel Learning, has shown promising results over the long term, in which the capability of capturing different market conditions has been shown to improve with the incorporation of different models and kernels instead of only a single one. However, the results are mostly congruent with earlier studies in this field. Furthermore, two research questions have been answered, concerning the complexity of the kernel functions used by the SVM and the robustness of the process as a whole. Complexity refers to achieving more complex feature maps through combining kernels by either adding, multiplying or functionally transforming them.
It is not concluded that increased complexity leads to a consistent improvement; however, the combined kernel function is superior to the individual models during some of the periods of the time series used in this thesis. The robustness has been investigated for different signal-to-noise ratios, where it has been observed that windows with previously poor performance are more exposed to noise impact.
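Combining kernels by adding or multiplying their Gram matrices, as described above, preserves symmetry and positive semi-definiteness, which is why the combined function can be handed to a standard SVM unchanged. A NumPy-only sketch of the idea; the bandwidth, degree, and random toy data are arbitrary illustrations, not values from the thesis.

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def poly_gram(X, degree=2, c=1.0):
    """Polynomial Gram matrix K[i, j] = (x_i . x_j + c)^degree."""
    return (X @ X.T + c) ** degree

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))

K_sum = rbf_gram(X) + poly_gram(X)   # sum of kernels is a kernel
K_prod = rbf_gram(X) * poly_gram(X)  # elementwise (Schur) product is a kernel

# Both combinations stay symmetric and positive semi-definite
for K in (K_sum, K_prod):
    assert np.allclose(K, K.T)
    assert np.linalg.eigvalsh(K).min() > -1e-8
```

Either matrix could be passed to an SVM implementation that accepts precomputed kernels (e.g. scikit-learn's `SVC(kernel='precomputed')`).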
APA, Harvard, Vancouver, ISO, and other styles
6

Chida, Anjum A. "Protein Tertiary Model Assessment Using Granular Machine Learning Techniques." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/cs_diss/65.

Full text
Abstract:
The automatic prediction of protein three-dimensional structures from amino acid sequences has become one of the most important and researched fields in bioinformatics. As models are not experimental structures determined with known accuracy but rather predictions, it is vital to determine estimates of model quality. We attempt to solve this problem using machine learning techniques and information from both the sequence and structure of the protein. The goal is to generate a machine that understands structures from the PDB (Protein Data Bank) and, when given a new model, predicts whether it belongs to the same class as the PDB structures (correct or incorrect protein models). Different subsets of the PDB are considered for evaluating the prediction potential of the machine learning methods. Here we show two such machines, one using SVMs (support vector machines) and another using fuzzy decision trees (FDTs). Using a preliminary encoding style, the SVM achieved around 70% accuracy in protein model quality assessment, and an improved fuzzy decision tree (IFDT) reached above 80% accuracy. To reduce computational overhead, a multiprocessor environment and a basic feature selection method are used in the SVM-based machine learning algorithm. Next, an enhanced scheme is introduced using a new encoding style. In the new style, information such as the amino acid substitution matrix, polarity, secondary structure, and relative distances between alpha-carbon atoms is collected through spatial traversal of the 3D structure to form training vectors. This guarantees that the properties of alpha-carbon atoms that are close together in 3D space, and thus interacting, are used in vector formation. With the use of fuzzy decision trees, we obtained a training accuracy of around 90%. There is significant improvement over the previous encoding technique in both prediction accuracy and execution time.
This outcome motivates continued exploration of effective machine learning algorithms for accurate protein model quality assessment. Finally, these machines are tested using CASP8 and CASP9 templates and compared with other CASP competitors, with promising results. We further discuss the importance of model quality assessment and other information from proteins that could be considered for the same purpose.
APA, Harvard, Vancouver, ISO, and other styles
7

Thomas, Rodney H. "Machine Learning for Exploring State Space Structure in Genetic Regulatory Networks." Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1053.

Full text
Abstract:
Genetic regulatory networks (GRNs) offer a useful model for clinical biology. Specifically, such networks capture interactions among genes, proteins, and other metabolic factors. Unfortunately, it is difficult to understand and predict the behavior of networks that are of realistic size and complexity. In this dissertation, behavior refers to the trajectory of a state, through a series of state transitions over time, to an attractor in the network. This project assumes asynchronous Boolean networks, implying that a state may transition to more than one attractor. The goal of this project is to efficiently identify a network's set of attractors and to predict the likelihood with which an arbitrary state leads to each of the network's attractors. These probabilities are represented using a fuzzy membership vector. Predicting fuzzy membership vectors using machine learning techniques may address the intractability posed by networks of realistic size and complexity. Modeling and simulation can be used to provide the necessary training sets for machine learning methods to predict fuzzy membership vectors. The experiments comprise several GRNs, each represented by a set of output classes. These classes consist of thresholds τ and ¬τ, where τ = [τ_low, τ_high]; state s belongs to class τ if the probability of its transitioning to attractor A belongs to the range [τ_low, τ_high]; otherwise it belongs to class ¬τ. Finally, each machine learning classifier was trained with the training sets that were previously collected. The objective is to explore methods to discover patterns for meaningful classification of states in realistically complex regulatory networks. The research design took a GRN and a machine learning method as input and produced an output class ⟨A, τ⟩ and its negation ¬⟨A, τ⟩.
For each GRN, attractors were identified, data was collected by sampling each state to create fuzzy membership vectors, and machine learning methods were trained to predict whether a state is in a healthy attractor or not. For T-LGL, SVMs had the highest accuracy in predictions (between 93.6% and 96.9%) and precision (between 94.59% and 97.87%). However, naive Bayesian classifiers had the highest recall (between 94.71% and 97.78%). This study showed that all experiments have extreme significance, with p-value < 0.0001. The contribution this research offers helps clinical biologists submit genetic states to get an initial result on their outcomes. For future work, this implementation could use other machine learning classifiers such as xgboost or deep learning methods. Another suggestion is to develop methods that improve state-transition performance and allow larger training sets to be sampled.
APA, Harvard, Vancouver, ISO, and other styles
8

Díaz, Jorge Luis Guevara. "Modelos de aprendizado supervisionado usando métodos kernel, conjuntos fuzzy e medidas de probabilidade." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-03122015-155546/.

Full text
Abstract:
This thesis proposes a methodology based on kernel methods, probability measures and fuzzy sets to analyze datasets whose individual observations are themselves sets of points, instead of individual points. Fuzzy sets and probability measures are used to model observations, and kernel methods to analyze the data. Fuzzy sets are used when the observation contains imprecise, vague or linguistic values, whereas probability measures are used when the observation is given as a set of multidimensional points in a D-dimensional Euclidean space. Using this methodology, it is possible to address a wide range of machine learning problems for such datasets. In particular, this work presents data description models for observations modeled by probability measures; those description models are applied to the group anomaly detection task. This work also proposes a new class of kernels, the kernels on fuzzy sets, which are reproducing kernels able to map fuzzy sets to geometric feature spaces. Those kernels are similarity measures between fuzzy sets. We cover everything from basic definitions to applications of those kernels in machine learning problems such as supervised classification and a kernel two-sample test. Potential applications of those kernels include machine learning and pattern recognition tasks over fuzzy data, and computational tasks requiring an estimate of the similarity between fuzzy sets.
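One concrete way to build a kernel on fuzzy sets is the cross-product construction: sum an ordinary point kernel over all pairs of support points, weighted by their memberships. The sketch below is a simplified NumPy illustration of that idea, not code from the thesis; representing a discrete fuzzy set as a (points, memberships) pair is an assumption made for the example.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Ordinary RBF kernel between two points."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def cross_product_kernel(A, B, gamma=1.0):
    """K(A, B) = sum_i sum_j mu_A(x_i) * mu_B(y_j) * k(x_i, y_j),
    where A = (points, memberships) represents a discrete fuzzy set."""
    Xa, ma = A
    Xb, mb = B
    total = 0.0
    for xi, mi in zip(Xa, ma):
        for yj, mj in zip(Xb, mb):
            total += mi * mj * rbf(xi, yj, gamma)
    return total

# Two small fuzzy sets over R^2
A = (np.array([[0.0, 0.0], [1.0, 0.0]]), np.array([1.0, 0.5]))
B = (np.array([[0.0, 0.1]]), np.array([0.8]))

kAB = cross_product_kernel(A, B)
kAA = cross_product_kernel(A, A)
kBB = cross_product_kernel(B, B)
# Symmetry and the Cauchy-Schwarz bound hold, as for any valid kernel
```

Because the construction is a sum of products of feature maps, the resulting Gram matrices are positive semi-definite and can feed any kernel machine.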
APA, Harvard, Vancouver, ISO, and other styles
9

Hu, Linlin. "A novel hybrid technique for short-term electricity price forecasting in deregulated electricity markets." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4498.

Full text
Abstract:
Short-term electricity price forecasting is now crucial practice in deregulated electricity markets, as it forms the basis for maximizing the profits of the market participants. In this thesis, short-term electricity prices are forecast using three different predictor schemes: Artificial Neural Networks (ANNs), the Support Vector Machine (SVM), and a hybrid scheme. ANNs are popular and successful tools for practical forecasting, and in this thesis a feed-forward neural network with a hidden layer and back-propagation has been adopted for detailed comparison with other forecasting models. SVM is a more recently developed technique that has many attractive features and good prediction performance. In order to overcome the limitations of individual forecasting models, a hybrid technique that combines Fuzzy C-Means (FCM) clustering and SVM regression algorithms is proposed to forecast the half-hour electricity prices in the UK electricity markets. According to the value of their power prices, thousands of training data points are classified by the unsupervised learning method of FCM clustering. An SVM regression model is then applied to each cluster, taking advantage of the aggregated data information, which reduces the noise for each training program. In order to demonstrate the predictive capability of the proposed model, ANN and SVM models are presented and compared with the hybrid technique on the same training and testing data sets in case studies using real electricity market data. The data was obtained upon request from APX Power UK for the year 2007. The Mean Absolute Percentage Error (MAPE) is used to analyze the forecasting errors of the different models, and the results presented clearly show that the proposed hybrid technique considerably improves the electricity price forecasting.
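The hybrid scheme first groups the price data with Fuzzy C-Means and then fits one SVM regressor per cluster. The FCM step can be sketched compactly in NumPy; the fuzzifier m = 2 and the tolerance below are conventional defaults, not values taken from the thesis.

```python
import numpy as np

def fcm(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-Means: returns (centers, U) where U[i, k] is the degree
    to which sample i belongs to cluster k (each row of U sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        W = U ** m                                   # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                        # avoid divide-by-zero
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Two well-separated blobs: memberships should become near-crisp
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (10, 2)),
               np.random.default_rng(2).normal(5, 0.1, (10, 2))])
centers, U = fcm(X, c=2)
```

Each sample could then be routed to the regressor of its highest-membership cluster, mirroring the cluster-then-regress structure of the hybrid.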
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Xiujuan. "Computational Intelligence Based Classifier Fusion Models for Biomedical Classification Applications." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/26.

Full text
Abstract:
The generalization abilities of machine learning algorithms often depend on the algorithms' initialization, parameter settings, training sets, or feature selections. For instance, SVM classifier performance largely relies on whether the selected kernel functions are suitable for the real application data. To enhance the performance of individual classifiers, this dissertation proposes classifier fusion models that use computational intelligence knowledge to combine different classifiers. The first fusion model, called T1FFSVM, combines multiple SVM classifiers through constructing a fuzzy logic system. T1FFSVM can be improved by tuning the fuzzy membership functions of linguistic variables using genetic algorithms; the improved model is called GFFSVM. To better handle uncertainties existing in fuzzy membership functions and in classification data, T1FFSVM can also be improved by applying type-2 fuzzy logic to construct a type-2 fuzzy classifier fusion model (T2FFSVM). T1FFSVM, GFFSVM, and T2FFSVM use accuracy as the classifier performance measure. AUC (the area under an ROC curve) is proved to be a better classifier performance metric, so as a comparison study, AUC-based classifier fusion models are also proposed in the dissertation. Experiments on biomedical datasets demonstrate the promising performance of the proposed classifier fusion models compared with the individual composing classifiers. The proposed classifier fusion models also demonstrate better performance than many existing classifier fusion methods. The dissertation also studies an interesting phenomenon in the biology domain using machine learning and classifier fusion methods: how protein structures and sequences are related to each other.
The experiments show that protein segments with similar structures also share similar sequences, which adds new insight to the existing knowledge on the relation between protein sequences and structures: similar sequences share high structure similarity, but similar structures may not share high sequence similarity.
APA, Harvard, Vancouver, ISO, and other styles
11

Martins, Natalie Henriques. "Modelos de agrupamento e classificação para os bairros da cidade do Rio de Janeiro sob a ótica da Inteligência Computacional: Lógica Fuzzy, Máquinas de Vetores Suporte e Algoritmos Genéticos." Universidade do Estado do Rio de Janeiro, 2015. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=9502.

Full text
Abstract:
From 2011 onward, events of great significance for the city of Rio de Janeiro have taken place and are still to come, such as the United Nations Rio+20 conference and sporting events of worldwide importance (the FIFA World Cup, the Olympics, and the Paralympics). These events attract financial resources to the city, as well as generating jobs, infrastructure improvements, and real estate appreciation, of both land and buildings. When choosing a residential property in a given neighbourhood, buyers evaluate not only the property itself but also the urban amenities available in the area. In this context, it was possible to define a qualitative linguistic interpretation of the neighbourhoods of the city of Rio de Janeiro, integrating three Computational Intelligence techniques for the evaluation of benefits: fuzzy logic, support vector machines, and genetic algorithms. The database was built from information on the web and from government institutes, capturing the cost of residential properties and the benefits and weaknesses of the city's neighbourhoods. Fuzzy logic was first implemented as an unsupervised clustering model, using ellipsoidal rules via the extension principle with the Mahalanobis distance, inferentially configuring linguistic groups (Good, Fair, and Poor) according to twelve urban characteristics. Building on this discrimination, support vector machines integrated with genetic algorithms were employed as a supervised method, in order to search for and select the smallest subset of the clustering variables that best classifies the neighbourhoods (the principle of parsimony).
Analysis of the error rates allowed the best classification model with a reduced variable space to be chosen, resulting in a subset containing information on: HDI, number of bus lines, educational institutions, average price per square metre, open-air spaces, entertainment venues, and crime. The modelling that combined the three Computational Intelligence techniques ranked the neighbourhoods of Rio de Janeiro with acceptable error rates, supporting decision-making in the purchase and sale of residential properties. Regarding public transport in the city, it was apparent that the road network is still the priority.
APA, Harvard, Vancouver, ISO, and other styles
12

詹茗旭. "Apply Fuzzy Support Vector Machine to Texture Classification." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/84040074259555099804.

Full text
Abstract:
Master's thesis
明新科技大學 (Minghsin University of Science and Technology)
Institute of Industrial Engineering and Management
97 (ROC academic year, i.e. 2008)
In recent years, the development of automatic and user-friendly vision inspection systems has been explored by a number of researchers and groups, in fields such as industrial inspection, fingerprint identification, medical testing and remote sensing, all of which involve image processing technologies and applications. Image classification can be regarded as a technology that trains computers to identify and distinguish among images, understanding human thinking and decision models; applications derived from it can bring considerable convenience. Image classification techniques are usually based on analysis of the relevant characteristic values of an image. Texture is one of the clearest descriptions of the category of an imaged object, so texture feature attributes used as a basis for classification will clearly describe different types of texture images. This study focuses on texture image classification. First, the original texture images are transformed using the gray-level co-occurrence matrix (GLCM). After extracting two kinds of statistical features and seven gray-level co-occurrence matrix features, the texture features are classified using support vector machines (SVM) and fuzzy support vector machines (FSVM). Finally, comparing the results obtained with the traditional support vector machine and the fuzzy support vector machine shows that the two classifiers perform almost identically.
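The GLCM used above tallies how often pairs of gray levels co-occur at a fixed pixel offset; statistics of that matrix (contrast, energy, homogeneity, and so on) become the texture features fed to the SVM or FSVM. A NumPy-only sketch for a single offset (scikit-image's `graycomatrix`/`graycoprops` provide an equivalent, optimized version); the three feature formulas are standard Haralick-style definitions, not necessarily the seven used in the thesis.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0, symmetric=True, normed=True):
    """Gray-level co-occurrence matrix for offset (dy, dx)."""
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    if symmetric:
        P += P.T          # count each pair in both directions
    if normed:
        P /= P.sum()      # turn counts into co-occurrence probabilities
    return P

def glcm_features(P):
    i, j = np.indices(P.shape)
    return {
        "contrast": np.sum(P * (i - j) ** 2),
        "energy": np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + np.abs(i - j))),
    }

# A perfectly uniform patch has zero contrast and maximal homogeneity
flat = np.zeros((8, 8), dtype=int)
feats = glcm_features(glcm(flat, levels=4))
# feats["contrast"] == 0.0, feats["homogeneity"] == 1.0
```

Feature vectors built this way for several offsets can be stacked and passed directly to an SVM classifier.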
APA, Harvard, Vancouver, ISO, and other styles
13

Lai, Yen-hung, and 賴彥宏. "A Fuzzy based on Parameters determination for Support Vector Machine." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/73781506506309519033.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Computer Science and Information Engineering
100
In the phase of building an SVM model, how to decide the optimal parameter values for the cost function and kernel function remains an unsolved problem. Although numerous approaches have been proposed to overcome this problem, they suffer from high time complexity. Ideal parameter values can increase classification accuracy. In this thesis, a novel algorithm is proposed to generate ideal parameter values: overall relations between training patterns are summarized into nine fuzzy rules, and a fuzzy inference engine is used to generate the ideal parameter values. In addition, a fuzzy neural network is used to reach the optimal solution. Experimental results show that the proposed algorithm produces ideal C and γ effectively and outperforms other methods.
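The abstract leaves the nine rules unspecified; as a rough illustration of the idea only (the membership shapes, the rule consequents, and the two input statistics are all assumptions, not the thesis design), a small Mamdani-style rule base can map summary statistics of the training set to (C, γ) via weighted-average defuzzification:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def med(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.5)

# Each rule: (overlap set, spread set, consequent C, consequent gamma).
RULES = [
    (low, low, 1.0, 0.1), (low, med, 1.0, 0.5), (low, high, 1.0, 1.0),
    (med, low, 10.0, 0.1), (med, med, 10.0, 0.5), (med, high, 10.0, 1.0),
    (high, low, 100.0, 0.1), (high, med, 100.0, 0.5), (high, high, 100.0, 1.0),
]

def infer_params(overlap, spread):
    """Weighted-average defuzzification over the nine rules."""
    num_c = num_g = den = 0.0
    for mu_o, mu_s, c_out, g_out in RULES:
        w = min(mu_o(overlap), mu_s(spread))  # rule firing strength
        num_c += w * c_out
        num_g += w * g_out
        den += w
    return num_c / den, num_g / den

C, gamma = infer_params(overlap=0.8, spread=0.3)
```

The appeal of such a scheme over grid search is that it needs only one pass over the data statistics rather than repeated SVM trainings.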
APA, Harvard, Vancouver, ISO, and other styles
14

湯和程. "Texture Classification using Wavelet Transform and Fuzzy Support Vector Machine." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/74919021188369236033.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology
Institute of Industrial Engineering and Management
99
In recent years, texture analysis has played an important role in many tasks, ranging from remote sensing, defect detection, and pattern recognition to medical imaging and query-by-content in large image databases. Image classification can be regarded as a technology that trains computers to identify and distinguish among images, modeling human thinking and decision-making; applications derived from it can bring considerable convenience. Image classification techniques usually rely on characteristic values extracted by image analysis, and texture is one of the clearest descriptors of an object's image category, so texture feature attributes used as the basis for classification can clearly discriminate different types of texture images. In this paper, we apply a classification approach based on wavelet transforms and fuzzy support vector machines (FSVMs) to texture classification. Since one of the main difficulties in applying the standard SVM is its sensitivity to outliers and noise in the training procedure due to overfitting, the fuzzy support vector machine is adopted to deal with this difficulty. In addition, the fuzzy membership setting in FSVM is a critical factor that reflects the relative degree to which each sample counts as meaningful data. We also introduce mean and radius membership-setting functions for texture classification. The results show that both the support vector machine and the fuzzy support vector machine perform well, with the RBF kernel function performing best.
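The class-center membership setting mentioned above can be sketched as follows (an assumed, minimal form in the spirit of the usual FSVM membership function; the thesis may define its mean and radius functions differently). Samples far from their class mean receive low membership, so outliers contribute little to the SVM objective:

```python
def memberships(points, delta=1e-6):
    """s_i = 1 - d_i / (r + delta), where d_i is the distance of point i to
    its class mean and r is the class radius (the largest such distance)."""
    dim = len(points[0])
    mean = [sum(p[k] for p in points) / len(points) for k in range(dim)]
    dists = [sum((p[k] - mean[k]) ** 2 for k in range(dim)) ** 0.5
             for p in points]
    r = max(dists)
    return [1.0 - d / (r + delta) for d in dists]

# Three clustered points and one outlier in the same class:
cls = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.2), (5.0, 5.0)]
s = memberships(cls)
```

The memberships s would then scale the per-sample slack penalty in the FSVM training problem, replacing the uniform cost C with s_i * C.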
APA, Harvard, Vancouver, ISO, and other styles
15

Roy, Andreas Franskie Van, and AndreasFranskieVanRoy. "Evolutionary Fuzzy Decision Model for Construction Management using Weighted Support Vector Machine." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/88562275650834329958.

Full text
Abstract:
Doctoral dissertation
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
98
Construction projects are, by their very nature, challenging; and project decision makers must work successfully within an environment that is frequently complex and fraught with uncertainty. As many decisions must be made intuitively based on limited information, successful decision making depends heavily on two factors: the experience of the expert(s) involved and the quality of knowledge accumulated from previous experience. Knowledge, however, is subject to various factors that cause its value and accuracy to deteriorate. Research has demonstrated that artificial intelligence has the potential to overcome these factors. The Evolutionary Fuzzy Support Vector Machine Inference Model (EFSIM), an artificial intelligence hybrid system that fuses together fuzzy logic (FL), weighted support vector machines (SVMs), and a fast messy genetic algorithm (fmGA), represents an alternative approach to retaining and utilizing experiential knowledge. In the EFSIM, FL handles imprecision in the environment and approximate reasoning; weighted SVMs act as a supervised learning tool to handle fuzzy input-output mapping focused on data characteristics; and fmGA is used as an optimization tool to search simultaneously for the fittest membership functions, defuzzification parameter, and weighted SVM parameters (herein C and the kernel and loss parameters). The Evolutionary Fuzzy Support Vector Machine Inference System (EFSIS), in effect an automated EFSIM adaptation process, used one artificial and four real construction management problems to demonstrate that the EFSIM is an effective and promising tool for solving various problems in construction management.
APA, Harvard, Vancouver, ISO, and other styles
16

Chen, Jun-An, and 陳俊安. "Using Fuzzy Support Vector Machine to Solve Imbalanced Datasets and Noise Problems." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/12559097922249906761.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Department of Computer Science and Information Engineering
103
This thesis proposes a method that removes redundant training data in order to retain the support vectors and introduces the fuzzy support vector machine to solve imbalanced-dataset problems. First, the training data of every class are clustered and the probability that each training instance is a support vector is computed; non-support vectors are then randomly removed until the number of instances in each class is balanced. Next, the degree of membership of each training instance is calculated with the fuzzy k-nearest-neighbor algorithm in order to identify and remove noise. Finally, the data obtained from the above treatment are recombined to construct a fuzzy support vector machine. The Wisconsin Breast Cancer Dataset (WBCD) from the UCI repository was selected for the experiment. The results achieved by the proposed method were compared with some well-known techniques, i.e., the classical SMOTE approach, the SBC approach, and the SUNDO approach. Experimental results reveal that the proposed approach outperforms the other approaches.
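The two preprocessing steps above can be sketched as follows (illustrative only; the neighborhood size k, the membership threshold, and the sampling scheme are assumptions, not taken from the thesis):

```python
import random

def fuzzy_knn_membership(x, label, data, k=3):
    """Fraction of the k nearest neighbours of x (excluding x itself) that
    share its label; a low value flags likely label noise."""
    ranked = sorted(data, key=lambda p: sum((a - b) ** 2
                                            for a, b in zip(p[0], x)))
    return sum(1 for p in ranked[1:k + 1] if p[1] == label) / k

def clean_and_balance(data, threshold=0.5, seed=0):
    """Drop low-membership (noisy) points, then undersample the majority
    class until both classes have the same size."""
    kept = [p for p in data
            if fuzzy_knn_membership(p[0], p[1], data) >= threshold]
    pos = [p for p in kept if p[1] == 1]
    neg = [p for p in kept if p[1] == 0]
    big, small = (pos, neg) if len(pos) > len(neg) else (neg, pos)
    random.Random(seed).shuffle(big)
    return small + big[:len(small)]

data = ([((i * 0.1, 0.0), 0) for i in range(8)]          # majority class
        + [((5.0 + i * 0.1, 0.0), 1) for i in range(3)]  # minority class
        + [((5.15, 0.0), 0)])                            # a likely-noisy label
balanced = clean_and_balance(data)
```

The balanced, denoised set would then be used to train the FSVM in the final step.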
APA, Harvard, Vancouver, ISO, and other styles
17

Chiu, Shih-Hsuan, and 邱士軒. "Skin Color Image Segmentation by Support Vector Machine-aided Self Organizing Fuzzy Network." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/66319834242323149836.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
93
Skin color image segmentation by a Support Vector Machine-aided Self-Organizing Fuzzy Network (SVM-SOFN) is proposed in this thesis. SVM-SOFN is a fuzzy system constructed by hybridizing fuzzy clustering and SVM. The antecedent part of SVM-SOFN is generated via fuzzy clustering on the input data, and SVM is then used to tune the consequent-part parameters, giving the network better generalization performance. Two types of consequent part can be used in SVM-SOFN: the SVM-aided singleton-type SOFN, called SVM-SSOFN, and the SVM-aided TSK-type SOFN, called SVM-TSOFN. Each color pixel is represented by the hue and saturation components of the HSV color space. To represent a color by histogram as accurately as possible, a non-uniform partition of the HS space is used. Histogram information from images captured under different environments is used to train SVM-SOFN to make the method as robust as possible. To verify the performance of the proposed method, experiments on skin color segmentation are performed. For comparison, four other color image segmentation methods, including a Histogram-based Skin Classifier, a Mixture of Gaussians Classifier, the Self-cOnstructing Neural Fuzzy Inference Network, and the Support Vector Machine, are applied to the same problem. From the comparisons, we find that SVM-TSOFN achieves the best segmentation result.
APA, Harvard, Vancouver, ISO, and other styles
18

Duc, Hoang Nhat, and 黃日德. "ESTIMATE AT COMPLETION USING TIME-DEPENDENT EVOLUTIONARY FUZZY SUPPORT VECTOR MACHINE INFERENCE MODEL." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/59518142814475653930.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
98
In construction management, success in cost control during the construction stage is critical to the general contractor's survival, because the bottom line of construction management is to ensure that the project is carried out within the planned budget. Cost overrun may damage profit and occasionally even bring about project failure. To deal with this issue, this research combines Estimate at Completion (EAC) with the Time-dependent Evolutionary Fuzzy Support Vector Machine inference model (EFSIMT) to form a model (EAC-EFSIMT) dedicated to EAC prediction. In EAC-EFSIMT, the support vector machine is utilized as a supervised learning instrument to infer the causal relationship between multiple attributes in the input space and EAC as the single output in the output space, while fuzzy logic supports approximate reasoning. Moreover, to address the time-dependent nature of the data, the inference model employs three types of time series functions (linear, quadratic, and exponential) to weight the training data points, and the effect of each time series function on model performance is investigated individually. This research shows that integrating a time series function improves the outcome of EAC prediction. Through training, testing, and comparison of results, the exponential function has been identified as the preferable time series function for the EAC problem. Moreover, the capability of EAC-EFSIMT in real-world situations is demonstrated, showing that the newly proposed model is an effective replacement for previous methods in construction project cost control.
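The three time-weighting schemes named above can be written, under assumed functional forms (the thesis does not publish its exact formulas here), as monotone functions of the observation index, so that more recent training points receive larger weights in the SVM loss:

```python
import math

def linear_w(i, n):
    """Linear time weight for observation i of n (oldest i=1, newest i=n)."""
    return i / n

def quadratic_w(i, n):
    """Quadratic time weight: recency is emphasized more strongly."""
    return (i / n) ** 2

def exponential_w(i, n, lam=2.0):
    """Exponential time weight with an assumed decay rate lam."""
    return math.exp(lam * (i / n - 1.0))

n = 10  # ten training points, ordered oldest to newest
weights = [(linear_w(i, n), quadratic_w(i, n), exponential_w(i, n))
           for i in range(1, n + 1)]
```

All three schemes assign the newest point the largest weight; they differ in how sharply older observations are discounted, which is exactly what the abstract's per-function comparison investigates.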
APA, Harvard, Vancouver, ISO, and other styles
19

Chiu, Yen-Lin, and 邱彥霖. "Based on Sequential Covering Algorithm to Achieve Fuzzy Rule Extraction for Support Vector Machine." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/48552747362732918377.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Computer Science and Information Engineering
101
In this thesis, a three-phase rule extraction algorithm based on the sequential covering algorithm and fuzzy logic is proposed to extract fuzzy rules from support vector machines. In the first phase, feature vectors are selected by analyzing the properties of the training patterns. In the second phase, rules are generated from the feature vectors; within this phase, the sequential covering algorithm is used to reduce the number of rules and maximize their coverage. In the third phase, fuzzy logic is introduced so that the general rules are transformed into fuzzy rules, and a learning mechanism is used to achieve optimization. Compared with other classification algorithms, the proposed algorithm achieves fewer rules and high accuracy.
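The sequential covering step can be illustrated with a toy propositional rule learner (illustrative only; the feature selection, fuzzification, and learning phases of the thesis are not shown, and all names here are hypothetical). One conjunctive rule is grown at a time for the positive class, and the examples it covers are removed before the next rule is learned:

```python
def learn_rules(examples):
    """examples: list of (feature_dict, label). Greedily grow one conjunctive
    rule at a time for the positive class, then drop the examples it covers."""
    remaining = [e for e in examples if e[1] == 1]
    negatives = [e for e in examples if e[1] == 0]
    rules = []
    while remaining:
        seed = remaining[0][0]          # the rule is built from this example
        rule = {}                       # conjunction of feature == value tests
        covered_neg = negatives[:]
        while covered_neg:
            best = None
            for f, v in seed.items():   # candidate literals from the seed
                if f in rule:
                    continue
                excluded = sum(1 for e in covered_neg if e[0].get(f) != v)
                if best is None or excluded > best[2]:
                    best = (f, v, excluded)
            if best is None:            # no literal left; accept impure rule
                break
            rule[best[0]] = best[1]
            covered_neg = [e for e in covered_neg
                           if all(e[0].get(f) == v for f, v in rule.items())]
        rules.append(rule)
        remaining = [e for e in remaining
                     if not all(e[0].get(f) == v for f, v in rule.items())]
    return rules

data = [({"texture": "coarse", "bright": "high"}, 1),
        ({"texture": "coarse", "bright": "low"}, 1),
        ({"texture": "fine", "bright": "high"}, 0),
        ({"texture": "fine", "bright": "low"}, 0)]
rules = learn_rules(data)
```

In the thesis's third phase, each crisp test such as texture == coarse would be replaced by a fuzzy membership over the feature's value range.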
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Wei-Hung, and 王崴弘. "Implementation of Earthquake Early Warning System Based on Support Vector Machine and Fuzzy Inference." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/5asub9.

Full text
Abstract:
Master's thesis
National Central University
Department of Communication Engineering
103
Taiwan is located in a seismically active zone and faces a large number of earthquakes; many disastrous earthquakes have caused the loss of lives and property over the years. Therefore, how to design a series of preventive measures and reduce the risk of damage caused by earthquakes is an important issue. So far we still do not have a good way to obtain an earthquake warning immediately; as a result, an Earthquake Early Warning System (EEWS) would be a useful tool and has become an urgent need. Nowadays, countries neighboring the Ring of Fire put a lot of effort and resources into investigating EEWSs; among them, Japan, the United States of America, and Taiwan have the most abundant achievements. Constructing an EEWS faces some common problems. First, the special equipment needed for seismic wave sensing is costly. Second, the number of earthquake detection stations cannot be increased arbitrarily due to lack of funds. Third, people are not familiar with EEWSs: until now, people have received earthquake news from television, radio stations, or social apps, and the EEWS has not fully come into our lives. In this thesis, we propose a new algorithm and architecture for an EEWS named the EQ-system. The hardware part of the EQ-system is a G-sensor for detection; the software part is an earthquake early warning library called LibEQ, which combines several theories such as the support vector machine and fuzzy inference. We can thus build a very large earthquake detection network at low cost, and people can receive earthquake alarms through an app. Consequently, for earthquake early warning, the EQ-system can produce the best possible results.
APA, Harvard, Vancouver, ISO, and other styles
21

Jie, Chen Ping, and 陳炳傑. "Exploring Stock Market Dynamism by Applying Dynamic Fuzzy Model in Combination with Support Vector Machine." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/75934437255229667721.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Information Management
94
In this study, a new dynamic fuzzy model is proposed in combination with the support vector machine (SVM) to forecast stock market dynamism. In this integrated model, the fuzzy model integrates various influence factors as the input variables, and a genetic algorithm (GA) dynamically adjusts the influential degree of each input variable; SVM then serves to predict stock market dynamism in the next phase. Meanwhile, a multi-period experiment method is designed to simulate the volatility of the stock market. To evaluate the performance of the new integrated model, we compare it with traditional forecasting methods and design different experiments for verification. The experimental results show that the proposed model generates better forecasting accuracy than the other forecasting models.
APA, Harvard, Vancouver, ISO, and other styles
22

Wibowo, Dedy Kurniawan, and 容德慶. "Predicting Productivity Loss Caused by Change Orders Using Evolutionary Fuzzy Support Vector Machine Inference Model." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/9a2dc3.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
100
Change orders in construction projects are very common and result in many negative impacts, and their impact on labor productivity is difficult to quantify: the complex input-output relationship that measures the effect of change orders cannot be calculated with a traditional approach. In this study, the Evolutionary Fuzzy Support Vector Machine Inference Model (EFSIM), which combines fuzzy logic (FL), the support vector machine (SVM), and the fast messy genetic algorithm (fmGA), is developed as a tool for predicting productivity loss caused by change orders. The SVM is utilized as a supervised learning technique for solving classification and regression problems; the advantages of FL in handling vagueness and uncertainty are exploited; and fmGA is applied to optimize the model's parameters. A case study regarding productivity loss caused by change orders is presented to demonstrate and validate the performance of the proposed prediction model. Simulation results demonstrate that EFSIM predicts the impact of change orders better than the artificial neural network (ANN), the support vector machine (SVM), and the evolutionary support vector machine inference model (ESIM). Validation against previous studies shows that EFSIM successfully improves the accuracy and reliability of the prediction model.
APA, Harvard, Vancouver, ISO, and other styles
23

Lee, Kun-Ta, and 李昆達. "Using Fuzzy Logic to Reduce Outliers and Noise to Improve Accuracy in Support Vector Machine." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/88734214928573133793.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
101
With the advent of information and cloud technology, there remain large amounts of unexplored data that could be used for research. The evolution of data mining and machine learning has opened new fronts for extracting useful information to benefit people's daily lives, general knowledge, and research applications. Classification is an important part of data mining: by using known data and their category attributes to build a classification model, we can apply this model to predict unknown new data. The support vector machine (SVM), widely used in recent years, is based on statistical learning theory and builds a classification model from data attributes; this classifier can remap the data to a high-dimensional space using the attribute information from a large amount of data. Nevertheless, conflicting instances may still prevent the data from being accurately classified. In this thesis, we attempt to first eliminate data that may be inconsistent in the dataset by applying fuzzy theory, and then re-establish a new training model for prediction. To verify this approach, three real-world classification datasets were drawn from the UCI repository, outlier data were removed using fuzzy theory, and a substantially improved classification accuracy for new data was achieved.
APA, Harvard, Vancouver, ISO, and other styles
24

Liu, Cheng-Kai, and 劉正凱. "License Plate Recognition System based on Adaptive Network-Based Fuzzy Inference System & Support Vector Machine." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/86639642579064967574.

Full text
Abstract:
Master's thesis
Southern Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
97
The main purpose of this study is to offer a complete license plate recognition process, consisting of pre-processing, license plate location, character segmentation, and character recognition. Pre-processing comprises color image enhancement and HSI color space transformation: to reduce the impact of changes in lighting, color image enhancement is first exploited to adjust the brightness of images. In the license plate location stage, an image of the degree of membership in color is obtained from an adaptive network-based fuzzy inference system, together with an image of the degree of membership in edge. After fuzzy computation, candidate areas are obtained by morphology, and the correct license plate is then found by template matching. Character segmentation consists of tilt angle adjustment, character clustering, character compensation, and character normalization. To improve the performance of character recognition, support vector machines were adopted: Gabor filters and the Sobel operator were used to extract character features and reduce the feature dimensionality, and the feature vectors were finally input to the support vector machine for recognition. The experiments show that this process reached a 97.7% license plate location rate and a 99.2% character recognition rate.
APA, Harvard, Vancouver, ISO, and other styles
25

Cheng, Yu-Hao, and 鄭育皓. "Application of Fuzzy Support Vector Machine to Detect Fault Type and Location in the Power System." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/87828857127359483217.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
102
In recent years, as the high-tech industry and computer equipment have developed rapidly, the requirements for power quality have increased, and a stable power supply is very important for customers. Therefore, detecting the fault location and type on the transmission line is an essential part of power system research. Traditionally, power system faults are divided into balanced and unbalanced types. If the type and location of a power system fault can be identified sooner when it happens, the power company can resume the electricity supply sooner and minimize the time and cost arising from the outage. In this thesis, PowerWorld Simulator 13 is used to establish a model of the 345 kV Taiwan power system, and the support vector machine is used to classify the fault type and location. Combining a fuzzy rule base from fuzzy theory with defuzzified values makes the classification of fault location and type more distinct, which leads to quicker and more accurate judgment of the fault.
APA, Harvard, Vancouver, ISO, and other styles
26

Lin, Yu-Tung, and 林玉堂. "A Novel Support-Vector-Machine-Based Prediction System with Modified Grey Fourier Series and Fuzzy Time Series." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/4ku92b.

Full text
Abstract:
Doctoral dissertation
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
105
The main purpose of this study is to improve the performance of grey theory on non-smooth time series, so that after modification the theory can predict stock values more accurately in both trend and non-trend segments, especially for Taiwan's weighted share price index. In trend segments, we use a modified GM(1,1) in which the initial condition of the background value is set from near-term data, and use the Fourier series and exponential smoothing methods to perform primary and secondary residual correction. In non-trend segments, we mainly use fixed-range characteristics to find highly relevant technical indicators; from the indicator values predicted by a fuzzy time series with center-of-gravity defuzzification, we recover the stock value, and finally use a hidden Markov model to predict the non-trend segment value and hence the actual stock forecast. Through the analysis and decision-making process provided by the support vector machine, grey theory integrates the advantages of both the trend and non-trend systems and can work effectively for stock value prediction. The Markov model helps to find the section into which stock prices may fall, correcting the final stock price forecast and yielding the system prediction. It is shown that the modified GM(1,1) can effectively improve on the traditional GM(1,1) prediction when faced with market reversal and randomness; combined with the hidden Markov model, using technical indicators in a fuzzy time series can effectively find non-trend market turning points. The integration of the modified GM(1,1) and technical indicator systems makes predictions more accurate in both trend and non-trend segments.
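The baseline that the thesis modifies, the classical GM(1,1) grey model, can be sketched in its textbook form (the modified initial condition and the Fourier/exponential-smoothing residual corrections described above are not included here):

```python
import math

def gm11_predict(x0, steps=1):
    """Fit GM(1,1) to the series x0 and forecast `steps` values ahead."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series (AGO)
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]   # background values
    # Least squares for a (development coefficient) and b (grey input),
    # from the grey differential equation x0[k] + a*z[k] = b, k = 1..n-1.
    sz = sum(z)
    szz = sum(v * v for v in z)
    sy = sum(x0[1:])
    szy = sum(zk * yk for zk, yk in zip(z, x0[1:]))
    m = n - 1
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    def x1_hat(k):  # response of the whitened differential equation
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # Inverse AGO: forecast of x0 is the difference of consecutive x1_hat.
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

series = [100.0, 110.0, 121.0, 133.1]   # ~10% growth, for illustration
forecast = gm11_predict(series, steps=1)
```

The thesis's modifications act exactly on the weak points visible here: the fixed initial condition x1_hat(0) = x0[0] and the uncorrected residuals of the purely exponential response.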
APA, Harvard, Vancouver, ISO, and other styles
27

蔡佳航. "Automatic defect classification for TFT-LCD Cell process inspection using wavelet transform reconstruction and fuzzy support vector machine." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/90394490680589720977.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology
Institute of Industrial Engineering and Management
99
In the cell process, some defects are critical to the quality of LCD panels while others are not. This paper proposes a defect identification system by which defects can be automatically identified and classified. The proposed system is composed of four parts: defect image preprocessing, wavelet transform and inverse wavelet transform, feature extraction, and defect identification and classification. For defect identification and classification, a classifier called the fuzzy support vector machine (FSVM) with a radius-based membership setting is proposed, which addresses a critical problem of the traditional standard SVM: overfitting due to outliers and noise. In FSVM classification, both the best fuzzy memberships and the optimal parameters of the classifier can be determined. The results show that SVM is a robust classifier for TFT-LCD defect classification, and that the RBF is the most suitable kernel function for FSVM classification in this research.
APA, Harvard, Vancouver, ISO, and other styles
28

Dan, Le Trung, and 黎中旦. "Enhanced Time-Dependent Evolutionary Fuzzy Support Vector Machine Inference Model for Cash-Flow Prediction and Estimate at Completion." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/p6jjc8.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
99
This study has a two-fold objective. First, it develops a mechanism that enhances the handling of time series data in the time-dependent evolutionary fuzzy support vector machine inference model (EFSIMT); the enhanced model is called EFSIMET. The EFSIMET was developed particularly to treat construction management problems that contain time series data. It is an artificial intelligence hybrid system in which fuzzy logic (FL) deals with vagueness and approximate reasoning, the support vector machine (SVM) acts as a supervised learning tool, and the fast messy genetic algorithm (fmGA) optimizes the FL and SVM parameters simultaneously. Moreover, to capture the characteristics of time series data, the author develops an fmGA-based searching mechanism that seeks suitable weight values for the training data points. This random-search mechanism has the capacity to address the complex and dynamic nature of time series data and thus can improve the model's performance significantly. Construction management today faces complex and difficult problems due to the increasing uncertainties during project implementation; therefore, the second objective of this study is to apply EFSIMET to two typical problems in construction: cash-flow forecasting and estimate at completion. Comparisons with previous works demonstrate the effectiveness and real-world applicability of EFSIMET. Hence, this model may be used as an intelligent decision support tool to assist decision-making in solving construction management difficulties.
APA, Harvard, Vancouver, ISO, and other styles
29

LIOU, FU-JAN, and 劉富展. "A Hybrid Method for Estimating the Time of Change in Variable Sampling Control Chart Using Support Vector Machine and Fuzzy Statistical Clustering." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/37b28k.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Industrial Engineering and Management
105
Control charts are the most common tool for monitoring a process: they can be used to assess whether the process is in control. A control chart raises an out-of-control signal when process variations occur; however, in most cases it detects the signal with a considerable delay. To address this problem, many supplementary techniques are applied to control charts with the aim of identifying the exact change time in the process. This study proposes a hybrid method to estimate the time of change in a variable-sampling X̄ control chart, assuming that both the change type and the magnitude of the process variation are unknown. To identify the change type, the study uses a support vector machine that recognizes control chart patterns; once the change type is identified, the change time is estimated by fuzzy statistical clustering. The study then conducts extensive simulations to evaluate the hybrid method's change-point estimation performance under different variable-sampling strategies and different change types. The results show that for upward and downward step changes the estimated change point is very close to the real change point. For increasing and decreasing trends, on the other hand, the estimated change point is later than the real one, and the larger the magnitude of the disturbance, the better the estimate. The performance of the proposed method is close to that obtained when the change type is known; both estimators perform excellently.
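As a simplified stand-in for the change-time estimation step (the thesis uses fuzzy statistical clustering; shown here instead, as an assumed baseline, is a standard maximum-likelihood step-change estimator for subgroup means in the Samuel-Pignatiello style):

```python
def step_change_point(xbars, mu0):
    """Index t maximizing the likelihood that xbars[t:] share a shifted mean
    while xbars[:t] are still at the in-control mean mu0."""
    n = len(xbars)
    best_t, best_stat = 1, float("-inf")
    for t in range(1, n):
        tail = xbars[t:]
        shift = sum(tail) / len(tail) - mu0
        stat = len(tail) * shift * shift  # proportional to log-likelihood gain
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t

# Subgroup means with a step shift starting at index 4:
xbars = [0.1, -0.2, 0.0, 0.1, 1.2, 0.9, 1.1, 1.0]
tau = step_change_point(xbars, mu0=0.0)
```

For a step change this estimator is typically close to the true change point, which matches the step-change results the abstract reports; trends require a different change model, which is where the pattern-recognition step of the proposed hybrid becomes important.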
APA, Harvard, Vancouver, ISO, and other styles
30

Lin, Chi-Wen, and 林豈汶. "Fuzzy Preference Relations─New Similarity Measure and Evolutionary Support Vector Machine Inference Model for Slurry Wall selection and prediction of Slurry Wall duration." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/05928149127306734050.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
97
Because usable land in Taiwan is limited, underground excavation is increasingly used to enlarge the usable area of buildings. Since the construction environment varies greatly, the choice of excavation method has become an important topic in construction planning: if the chosen method is unsuitable for the construction environment, at best it wastes cost, and at worst it damages neighboring houses. Furthermore, the underground foundation precedes the subsequent engineering work, so any delay in its duration or trouble in allocating its resources can have a huge effect on the duration and budget of the whole project. This thesis presents the "Fuzzy Preference Relations─New Similarity Measure and Evolutionary Support Vector Machine Inference Model" (FPNSM-ESIM), which can store historical cases and has predictive learning ability. First, the thesis defines the factors for slurry wall selection and slurry wall duration prediction and sets up the case base. It then uses fuzzy preference relations to find the weights for slurry wall selection and a new similarity measure to choose the slurry wall method. Second, it uses the evolutionary support vector machine inference model to optimize the prediction of slurry wall duration. Testing shows that FPNSM-ESIM can quickly select the slurry wall method and predict the slurry wall duration; therefore, this thesis presents FPNSM-ESIM for slurry wall selection and duration prediction in construction planning.
APA, Harvard, Vancouver, ISO, and other styles
31

Teles, Germanno Gurgel do Amaral. "Decision Support Systems for Risk Assessment in Credit Operations Against Collateral." Doctoral thesis, 2020. http://hdl.handle.net/10400.6/11163.

Full text
Abstract:
With the global economic crisis, which reached its peak in the second half of 2008, and facing a market shaken by economic instability, financial institutions have taken steps to protect themselves against default risk, which directly impacted the way credit analysis is performed for individuals and corporate entities. To mitigate risk in credit operations, most banks use a graded scale of customer risk, which determines the provision that banks must make according to the default risk level of each credit transaction. Credit analysis involves the ability to make a credit decision in a scenario of uncertainty, constant change, and incomplete information. This ability depends on the capacity to logically analyze situations, often complex ones, and reach a clear, practical, and implementable conclusion. Credit scoring models are used to predict the probability that a customer applying for credit will default at any given time, based on personal and financial information that may influence the client's ability to pay the debt. This estimated probability, called the score, is an estimate of the risk of default of a customer in a given period. This increased concern has been in no small part caused by the weaknesses of existing risk management techniques revealed by the recent financial crisis and by the growing demand for consumer credit. The constant change affects several banking sections because it hampers the investigation of data that are produced and stored in computers and too often depend on manual techniques. Among the many alternatives used around the world to balance this risk, the provision of collateral in the formalization of credit agreements stands out. In theory, the collateral does not ensure the return of the credit, as it is not computed as payment of the obligation within the project.
There is also the fact that it will only be effective if triggered, which involves the legal department of the banking institution. The truth is that collateral is a mitigating element of credit risk. Collateral is divided into two types: the individual guarantee (sponsor) and the asset guarantee (fiduciary). Both aim to increase security in credit operations, offering a payment alternative to the credit holder should the borrower prove unable to meet its obligations on time. For the creditor, it provides liquidity security for the receiving operation. The measurement of credit recoverability is a system that evaluates the efficiency of the collateral-based capital return mechanism. In an attempt to identify the sufficiency of collateral in credit operations, this thesis presents an assessment of intelligent classifiers that use contextual information to evaluate whether collateral allows recovery of the credit granted, within the decision-making process, before the credit transaction becomes insolvent. The results, compared with other approaches in the literature, and the comparative analysis of the most relevant artificial intelligence solutions show that classifiers using collateral as a parameter to calculate risk contribute to advancing the state of the art, increasing commitment to the financial institutions.
APA, Harvard, Vancouver, ISO, and other styles
32

Sadri, Sara. "Frequency Analysis of Droughts Using Stochastic and Soft Computing Techniques." Thesis, 2010. http://hdl.handle.net/10012/5198.

Full text
Abstract:
In the Canadian Prairies, recurring droughts are a reality that can have significant economic, environmental, and social impacts. For example, the droughts of 1997 and 2001 cost various sectors over $100 million. Drought frequency analysis is a technique for analyzing how frequently a drought event of a given magnitude may be expected to occur. In this study, the state of the science related to frequency analysis of droughts is reviewed. The main contributions of this thesis include the development of a Matlab model that uses Fuzzy C-Means (FCM) clustering and corrects the formed regions to meet the criteria of effective hydrological regions. In FCM, each site has a degree of membership in each of the clusters. The algorithm developed is flexible, taking the number of regions and the return period as inputs and producing the final corrected clusters as output for most scenarios. Since drought is a bivariate phenomenon, with the two statistical variables of duration and severity to be analyzed simultaneously, an important step in this study is extending the initial Matlab model to correct regions based on L-comoment statistics (as opposed to L-moments). Implementing a reasonably straightforward approach to bivariate drought frequency analysis using bivariate L-comoments and copulas is another contribution of this study. Quantile estimation at ungauged sites for return periods of interest is studied by introducing two classes of neural-network and machine-learning models: Radial Basis Function (RBF) networks and Support Vector Machine Regression (SVM-R). These two techniques were selected based on their good reputation in the literature for function estimation and nonparametric regression. The performance of RBF and SVM-R is compared with the traditional nonlinear regression (NLR) method.
As well, a nonlinear regression with regionalization, in which catchments are first regionalized using FCM, is applied, and its results are compared with those of the other three models. Drought data from 36 natural catchments in the Canadian Prairies are used in this study. This study provides a methodology for bivariate drought frequency analysis that can be practiced in any part of the world.
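The FCM step this abstract relies on, in which every site holds a degree of membership in each cluster, can be sketched in a few lines. The following is a minimal illustrative implementation on toy catchment attributes, not the thesis's Matlab model; the data, cluster count, and fuzzifier value are assumptions for demonstration.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Standard FCM: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)  # memberships; each row sums to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every site to every center (small epsilon avoids /0).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Toy catchment attributes (e.g., standardized precipitation and basin size).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal((0.0, 0.0), 0.3, (20, 2)),
               rng.normal((3.0, 0.0), 0.3, (20, 2)),
               rng.normal((0.0, 3.0), 0.3, (20, 2))])
centers, U = fuzzy_c_means(X, c=3)
print(U.shape)  # → (60, 3): each of 60 sites has a membership in 3 clusters
```

The fuzzifier m controls how soft the partition is: as m approaches 1 the memberships harden toward k-means assignments, while larger m spreads each site's membership across regions, which is what allows the thesis's correction step to reassign borderline sites.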
APA, Harvard, Vancouver, ISO, and other styles
