Academic literature on the topic 'Sequential Minimal Optimization (SMO) algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sequential Minimal Optimization (SMO) algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sequential Minimal Optimization (SMO) algorithm"

1

Keerthi, S. S., S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy. "Improvements to Platt's SMO Algorithm for SVM Classifier Design." Neural Computation 13, no. 3 (March 1, 2001): 637–49. http://dx.doi.org/10.1162/089976601300014493.

Full text
Abstract:
This article points out an important source of inefficiency in Platt's sequential minimal optimization (SMO) algorithm that is caused by the use of a single threshold value. Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO. These modified algorithms perform significantly faster than the original SMO on all benchmark data sets tried.
APA, Harvard, Vancouver, ISO, and other styles
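The dual-threshold construction described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's code: the function name is ours, `F` holds the per-example optimality indicator values, and the index sets follow the usual presentation of the KKT conditions in Keerthi et al.

```python
def dual_thresholds(alpha, y, F, C):
    """Two threshold parameters replacing SMO's single threshold b.

    alpha: dual variables, y: labels in {-1, +1}, F: per-example
    optimality indicator values, C: box constraint. The index sets
    follow the KKT conditions for the dual problem."""
    n = len(y)
    # I_up: examples whose F value bounds the threshold from above
    I_up = [i for i in range(n)
            if 0 < alpha[i] < C
            or (alpha[i] == 0 and y[i] == 1)
            or (alpha[i] == C and y[i] == -1)]
    # I_low: examples whose F value bounds the threshold from below
    I_low = [i for i in range(n)
             if 0 < alpha[i] < C
             or (alpha[i] == 0 and y[i] == -1)
             or (alpha[i] == C and y[i] == 1)]
    b_up = min(F[i] for i in I_up)
    b_low = max(F[i] for i in I_low)
    # Optimality holds (to tolerance tau) when b_low <= b_up + 2*tau;
    # otherwise the pair realizing (b_low, b_up) is a good working set.
    return b_up, b_low
```

A violated pair found this way drives the modified SMO iterations, which is where the reported speedup over single-threshold SMO comes from.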
2

Tian, Li Yan, and Xiao Guang Hu. "Method of Parallel Sequential Minimal Optimization for Fast Training Support Vector Machine." Applied Mechanics and Materials 29-32 (August 2010): 947–51. http://dx.doi.org/10.4028/www.scientific.net/amm.29-32.947.

Full text
Abstract:
A fast training method for support vector machines using parallel sequential minimal optimization is presented in this paper. To date, sequential minimal optimization (SMO) is one of the major algorithms for training SVMs, but it still requires a large amount of computation time for large-sample problems. Unlike traditional SMO, the parallel SMO first partitions the entire training data set into small subsets and then runs multiple CPU processors to deal with each partitioned data set. Experiments show that the new algorithm has a great advantage in terms of speed when applied to problems with large training sets and high-dimensional spaces, without reducing the generalization performance of the SVM.
APA, Harvard, Vancouver, ISO, and other styles
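The partition-and-train scheme the abstract describes can be sketched as below. This is a generic illustration, not the paper's implementation: `train_chunk` stands in for a real per-chunk SMO solver, threads stand in for the multiple CPU processors, and the cross-chunk combination step a full method needs is omitted.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(data, n_workers):
    """Split the training set into near-equal chunks, one per worker."""
    k, r = divmod(len(data), n_workers)
    chunks, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < r else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

def parallel_smo(data, train_chunk, n_workers=4):
    """Run a per-chunk solver concurrently and gather partial results.

    train_chunk is a placeholder for a real SMO training routine;
    a complete method would merge the per-chunk models afterwards."""
    chunks = partition(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(train_chunk, chunks))
```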
3

Knebel, Tilman, Sepp Hochreiter, and Klaus Obermayer. "An SMO Algorithm for the Potential Support Vector Machine." Neural Computation 20, no. 1 (January 2008): 271–87. http://dx.doi.org/10.1162/neco.2008.20.1.271.

Full text
Abstract:
We describe a fast sequential minimal optimization (SMO) procedure for solving the dual optimization problem of the recently proposed potential support vector machine (P-SVM). The new SMO consists of a sequence of iteration steps in which the Lagrangian is optimized with respect to either one (single SMO) or two (dual SMO) of the Lagrange multipliers while keeping the other variables fixed. An efficient selection procedure for Lagrange multipliers is given, and two heuristics for improving the SMO procedure are described: block optimization and annealing of the regularization parameter ε. A comparison of the variants shows that the dual SMO, including block optimization and annealing, performs efficiently in terms of computation time. In contrast to standard support vector machines (SVMs), the P-SVM is applicable to arbitrary dyadic data sets, but benchmarks are provided against libSVM's ε-SVR and C-SVC implementations for problems that are also solvable by standard SVM methods. For those problems, computation time of the P-SVM is comparable to or somewhat higher than the standard SVM. The number of support vectors found by the P-SVM is usually much smaller for the same generalization performance.
APA, Harvard, Vancouver, ISO, and other styles
4

Gadal, Saad, Rania Mokhtar, Maha Abdelhaq, Raed Alsaqour, Elmustafa Sayed Ali, and Rashid Saeed. "Machine Learning-Based Anomaly Detection Using K-Mean Array and Sequential Minimal Optimization." Electronics 11, no. 14 (July 10, 2022): 2158. http://dx.doi.org/10.3390/electronics11142158.

Full text
Abstract:
Recently, artificial intelligence (AI) techniques have been used to describe the characteristics of information, as they help the data mining (DM) process analyze data and reveal rules and patterns. Within DM, anomaly detection is an important area that helps discover hidden behavior in the data that is most vulnerable to attack, and it also helps detect network intrusion. Algorithms such as a hybrid of K-mean clustering and sequential minimal optimization (SMO) classification can be used to improve the accuracy of the anomaly detection rate. This paper presents an anomaly detection model based on machine learning (ML), which improves the detection rate, reduces the false-positive alarm rate, and enhances the accuracy of intrusion classification. The study used the network security-knowledge and data discovery (NSL-KDD) dataset to evaluate the proposed hybrid ML technique, with K-mean clustering and SMO used for classification. In testing, the combination of K-mean and SMO enhanced the positive detection rate, reduced the false alarm rate, and achieved high accuracy at the same time. Moreover, the proposed algorithm outperformed recent, closely related work using similar variables and environment by 14.48%, decreased the false alarm probability (FAP) by 12%, and gave a higher accuracy of 97.4%. These outcomes are attributed to the algorithm generating an appropriate number of detectors with acceptable detection accuracy and a trivial FAP. The proposed hybrid algorithm could be considered for anomaly detection in future data mining systems, where real-time processing is highly likely to be reduced dramatically. Given the low FAP, it is highly expected to reduce preprocessing and processing time compared with other algorithms.
APA, Harvard, Vancouver, ISO, and other styles
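A hedged sketch of the two-stage idea, on synthetic traffic rather than NSL-KDD: K-means cluster labels are appended as an extra feature for an SVM classifier (scikit-learn's SVC is trained with an SMO-type solver). The staging is our reading of the pipeline, not the paper's code, and the data here is a toy stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for NSL-KDD: normal traffic near the origin,
# anomalous traffic shifted away from it.
normal = rng.normal(0.0, 1.0, size=(100, 4))
attack = rng.normal(4.0, 1.0, size=(100, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 100 + [1] * 100)   # 0 = normal, 1 = anomaly

# Stage 1: K-means groups similar traffic records.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Stage 2: the cluster label becomes an extra feature for an
# SMO-trained SVM classifier.
X_aug = np.hstack([X, km.labels_.reshape(-1, 1)])
clf = SVC(kernel="rbf").fit(X_aug, y)
train_acc = clf.score(X_aug, y)
```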
5

Zhao, Zhe, and Xiao Yu Li. "Study of Sequential Minimal Optimization Algorithm Type and Kernel Function Selection for Short-Term Load Forecasting." Applied Mechanics and Materials 329 (June 2013): 472–77. http://dx.doi.org/10.4028/www.scientific.net/amm.329.472.

Full text
Abstract:
Short-term load forecasting is important for power system operation, including preparing plans for generation and supply, arranging generators to start or stop, and coordinating thermal and hydropower units. Support vector machines have the advantage of approximating any nonlinear function with arbitrary precision and of modeling by studying historical data. Based on SVM, this paper selects the sequential minimal optimization (SMO) algorithm to compute load forecasts, because SMO avoids large-scale iterative computation and so shortens the running time. Selecting different kernel functions and SMO types in the computing process yields different results. Through analysis of the results, the paper obtains the optimal choice under different accuracy or time requirements for short-term load forecasting. Using data from a power plant, it carries out an empirical analysis of both weekly and daily load forecasts. It concludes that the ε-SVR type with a linear kernel function is ideal for short-term load forecasting when time limits are not strict; otherwise, other choices should be made according to the requirements.
APA, Harvard, Vancouver, ISO, and other styles
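Comparing SVR types and kernels, as the abstract describes, can be set up with scikit-learn's ε-SVR. The load curve below is synthetic (the paper's plant data is not public), so the outcome only illustrates the comparison procedure, not the paper's conclusion.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic daily load curve: hour of day -> load (arbitrary units).
rng = np.random.default_rng(0)
hours = np.arange(0.0, 24.0, 0.25).reshape(-1, 1)
load = (50 + 20 * np.sin(hours.ravel() * np.pi / 12)
        + rng.normal(0.0, 1.0, hours.shape[0]))

# Fit epsilon-SVR under two candidate kernels and compare R^2 fits;
# a full study would also try nu-SVR and measure training time.
scores = {k: SVR(kernel=k, C=10.0).fit(hours, load).score(hours, load)
          for k in ("linear", "rbf")}
```

On this deliberately nonlinear curve the RBF kernel fits better; the paper's linear-kernel recommendation reflects its own data and time constraints.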
6

Panigrahi, Satya Sobhan, and Ajay Kumar Jena. "Optimization of Test Cases in Object-Oriented Systems Using Fractional-SMO." International Journal of Open Source Software and Processes 12, no. 1 (January 2021): 41–59. http://dx.doi.org/10.4018/ijossp.2021010103.

Full text
Abstract:
This paper introduces a technique to select test cases from the unified modeling language (UML) behavioral diagram. The UML behavioral diagram describes the boundary, structure, and behavior of the system and is fed as input for generating a graph. The graph is constructed by assigning weights, nodes, and edges. Test case sequences are then created from the graph with minimal fitness value, and the optimal sequences are selected by the proposed fractional spider monkey optimization (fractional-SMO), designed by integrating fractional calculus with SMO. Efficient test cases are thus selected based on the optimization algorithm, using fitness parameters such as coverage and faults. Simulations are performed on five synthetic UML diagrams taken from the dataset. The performance of the proposed technique is measured by coverage and the number of test cases; the maximal coverage of 49 and the minimal number of test cases of 2,562 indicate the superiority of the proposed technique.
APA, Harvard, Vancouver, ISO, and other styles
7

Glasmachers, Tobias, and Christian Igel. "Second-Order SMO Improves SVM Online and Active Learning." Neural Computation 20, no. 2 (February 2008): 374–82. http://dx.doi.org/10.1162/neco.2007.10-06-354.

Full text
Abstract:
Iterative learning algorithms that approximate the solution of support vector machines (SVMs) have two potential advantages. First, they allow online and active learning. Second, for large data sets, computing the exact SVM solution may be too time-consuming, and an efficient approximation can be preferable. The powerful LASVM iteratively approaches the exact SVM solution using sequential minimal optimization (SMO). It allows efficient online and active learning. Here, this algorithm is considerably improved in speed and accuracy by replacing the working set selection in the SMO steps. A second-order working set selection strategy, which greedily aims at maximizing the progress in each single step, is incorporated.
APA, Harvard, Vancouver, ISO, and other styles
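The second-order working set selection can be sketched as follows. This is a simplified reconstruction in the style of LIBSVM's second-order rule rather than the LASVM code itself: box constraints are ignored, `G` is the gradient of the dual objective, `K` the kernel matrix, and `y` the labels.

```python
import numpy as np

def select_working_set(G, K, y):
    """Pick (i, j): i by steepest first-order violation, then j by
    maximizing the second-order estimate b**2 / a of the objective
    decrease achievable by optimizing the pair (bounds ignored)."""
    v = -y * G                           # violation measure per example
    i = int(np.argmax(v))
    best_j, best_gain = -1, -np.inf
    for j in range(len(G)):
        if j == i:
            continue
        b = v[i] - v[j]                  # first-order difference
        if b <= 0:
            continue                     # pair (i, j) makes no progress
        a = K[i, i] + K[j, j] - 2.0 * y[i] * y[j] * K[i, j]
        a = max(a, 1e-12)                # guard non-positive curvature
        gain = b * b / a                 # second-order progress estimate
        if gain > best_gain:
            best_j, best_gain = j, gain
    return i, best_j
```

Greedily maximizing this per-step gain is what improves the speed and accuracy of LASVM's online and active learning in the paper.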
8

Wibowo, Agung. "Aplikasi Diagnosis Penyakit Kanker Payudara Menggunakan Algoritma Sequential Minimal Optimization." Jurnal Teknologi dan Sistem Komputer 5, no. 4 (October 29, 2017): 153. http://dx.doi.org/10.14710/jtsiskom.5.4.2017.153-158.

Full text
Abstract:
Various methods for the diagnosis of breast cancer exist, but not many have been implemented as an application. This study aims to develop an application, based on a model calculated with the SMO algorithm in Weka, to diagnose breast cancer. The application is web-based and was developed using Javascript. Testing and model formation used the original Wisconsin Breast Cancer Database (WBCD) data without missing values. The test mode was 10-fold cross-validation. The application can diagnose breast cancer with an accuracy of 97.3645% and shows a significant increase in accuracy for the diagnosis of malignant cancer.
APA, Harvard, Vancouver, ISO, and other styles
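The workflow the abstract describes (an SMO-trained SVM on WBCD with 10-fold cross-validation) can be approximated in scikit-learn, whose SVC is likewise trained by an SMO-type solver. The bundled dataset is scikit-learn's copy of the Wisconsin data, not necessarily the exact file the paper used, so the accuracy will differ somewhat.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Wisconsin breast cancer data as shipped with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)

# Linear-kernel SVC (SMO-type solver), evaluated as in the paper
# with 10-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(clf, X, y, cv=10).mean()
```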
9

Shao, Xigao, Kun Wu, and Bifeng Liao. "Single Directional SMO Algorithm for Least Squares Support Vector Machines." Computational Intelligence and Neuroscience 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/968438.

Full text
Abstract:
Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of working set in sequential minimal optimization- (SMO-) type decomposition methods is proposed. By the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from the existing methods, but the training speed is faster than existing ones.
APA, Harvard, Vancouver, ISO, and other styles
10

Al-Ibrahim, Ali Mohammad H. "Using Sequential Minimal Optimization for Phishing Attack Detection." Modern Applied Science 13, no. 5 (April 30, 2019): 114. http://dx.doi.org/10.5539/mas.v13n5p114.

Full text
Abstract:
With the development of Internet technology and electronic transactions, software security has become a reality that must be confronted and is no longer an option that can be abandoned. For this reason, software must be protected in every available way. Attackers use many methods to penetrate systems, especially those that rely on the Internet; hackers try to identify vulnerabilities in programs and exploit them to enter databases and steal sensitive information. Electronic phishing is a form of illegal access to information such as user names, passwords, and credit card details, in which attackers use different kinds of tricks to reveal confidential user information. Attacks appear as links, and phishing is carried out by clicking on them: false emails redirect the user, without his knowledge, to a site similar to the one he wants to access and capture his information. The main purpose of this paper is to protect users from malicious pages intended to steal personal information. An electronic phishing detection algorithm, the SMO algorithm, which deals only with the properties of links, was therefore used, with Weka handling the classification process. The samples were the characteristics of links from 8,266 sites: 4,116 phishing sites and 4,150 legitimate sites. The results improved on previous algorithms, with a correct classification rate of 99.0202% in a time of 1.68 seconds.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Sequential Minimal Optimization (SMO) algorithm"

1

Zhang, Hang. "Distributed Support Vector Machine With Graphics Processing Units." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/991.

Full text
Abstract:
Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. Sequential Minimal Optimization (SMO) is a decomposition-based algorithm which breaks this large QP problem into a series of smallest possible QP problems. However, it still costs O(n²) computation time. In our SVM implementation, we can train with huge data sets in a distributed manner (by breaking the dataset into chunks, then using the Message Passing Interface (MPI) to distribute each chunk to a different machine and run SVM training within each chunk). In addition, we moved the kernel calculation part of SVM classification to a graphics processing unit (GPU), which has zero scheduling overhead for creating concurrent threads. In this thesis, we take advantage of this GPU architecture to improve the classification performance of SVM.
APA, Harvard, Vancouver, ISO, and other styles
2

Fang, Juing. "Décodage pondère des codes en blocs et quelques sujets sur la complexité du décodage." Paris, ENST, 1987. http://www.theses.fr/1987ENST0005.

Full text
Abstract:
A study of the theoretical complexity of decoding block codes through a family of algorithms based on the principle of combinatorial optimization. A parallel algebraic decoding algorithm whose complexity depends on the channel noise level is then addressed. Finally, a Viterbi algorithm is introduced for chain-processing applications.
APA, Harvard, Vancouver, ISO, and other styles
3

Candra, Henry. "Emotion recognition using facial expression and electroencephalography features with support vector machine classifier." Thesis, 2017. http://hdl.handle.net/10453/116427.

Full text
Abstract:
University of Technology Sydney. Faculty of Engineering and Information Technology.
Recognizing emotions from facial expressions and from electroencephalography (EEG) emotion signals are complicated tasks that require substantial issues to be resolved in order to achieve higher classification performance: facial expression recognition has to deal with features, feature dimensionality, and classification processing time, while EEG emotion recognition is concerned with features, the number of channels and subband frequencies, and the non-stationary behaviour of EEG signals. This thesis addresses the aforementioned challenges. First, a feature for facial expression recognition using a combination of the Viola-Jones algorithm and an improved Histogram of Oriented Gradients (HOG) descriptor, termed Edge-HOG or E–HOG, is proposed, which has the advantage of insensitivity to lighting conditions. The issues of dimensionality and classification processing time were resolved using a combination of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which successfully reduced both the dimension and the classification processing time, resulting in a new low-dimensional feature called Reduced E–HOG (RED E–HOG). In the case of EEG emotion recognition, a method to recognize 4 discrete emotions in the arousal-valence dimensional plane using wavelet energy and entropy features was developed. The effects of EEG channel and subband selection were also addressed, reducing the channels from 32 to 18 and the subbands from 5 to 3. To deal with the non-stationary behaviour of EEG signals, an Optimal Window Selection (OWS) method was proposed as feature-agnostic pre-processing. The main objective of OWS is window segmentation with varying windows; it was applied to 7 different features to improve the classification of 4 dimensional-plane emotions, namely arousal, valence, dominance, and liking, distinguishing between the high and low states of each.
The improvement of accuracy makes the OWS method a potential solution to dealing with the non-stationary behaviour of EEG signals in emotion recognition. The implementation of OWS provides the information that the EEG emotions may be appropriately localized at 4–12 seconds time segments. In addition, a feature concatenating of both Wavelet Entropy and average Wavelet Approximation Coefficients was developed for EEG emotion recognition. The SVM classifier trained using this feature provides a higher classification result consistently compared to various different features such as: simple average, Fast Fourier Transform (FFT), and Wavelet Energy. In all the experiments, the classification was conducted using optimized SVM with a Radial Basis Function (RBF) kernel. The RBF kernel parameters were properly optimized using a particle swarm ensemble clustering algorithm called Ensemble Rapid Centroid Estimation (ERCE). The algorithm estimates the number of clusters directly from the data using swarm intelligence and ensemble aggregation. The SVM is then trained using the optimized RBF kernel parameters and Sequential Minimal Optimization (SMO) algorithm.
APA, Harvard, Vancouver, ISO, and other styles
4

Wei, Chih-Yuan, and 魏志原. "Sequential Minimal Optimization Algorithm For Robust Support Vector Regression." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/79312955586930977629.

Full text
Abstract:
Master's thesis
National Taipei University
Industrial R&D Master Program in Electrical Engineering and Computer Science
ROC academic year 102 (2013-2014)
Based on statistical learning theory, the support vector machine is distinguished by its low model complexity and high generalization ability, and is highly promising for applications in both pattern recognition and function approximation. The quadratic programming in its original model intrinsically carries a high computational complexity of O(n²), which becomes prohibitive as the number of training instances increases. By employing the sequential minimal optimization (SMO) algorithm, which subdivides the large integrated optimization into a series of small two-instance optimizations, the computation of the quadratic programming can be effectively reduced and the optimal solution reached rapidly. With some improved findings, this study extends SMO for SVM classification to SVM regression. The development would be advantageous for applications of function approximation.
APA, Harvard, Vancouver, ISO, and other styles
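The two-instance subproblem that SMO solves analytically, here in its classification form (the thesis extends the same idea to regression), can be sketched as Platt's pair update stripped of the error cache and selection heuristics; the names and the degenerate-case handling are ours.

```python
import numpy as np

def smo_pair_update(i, j, alpha, y, K, E, C):
    """Analytic solution of SMO's two-multiplier subproblem.

    alpha: dual variables, y: labels in {-1, +1}, K: kernel matrix,
    E[k]: current prediction error f(x_k) - y_k, C: box constraint."""
    # Box endpoints for alpha_j, keeping the linear constraint feasible.
    if y[i] == y[j]:
        L = max(0.0, alpha[i] + alpha[j] - C)
        H = min(C, alpha[i] + alpha[j])
    else:
        L = max(0.0, alpha[j] - alpha[i])
        H = min(C, C + alpha[j] - alpha[i])
    eta = K[i, i] + K[j, j] - 2 * K[i, j]   # curvature along the pair
    if eta <= 0 or L == H:
        return alpha[i], alpha[j]           # skip degenerate pairs
    aj = alpha[j] + y[j] * (E[i] - E[j]) / eta
    aj = min(H, max(L, aj))                 # clip to the box constraint
    ai = alpha[i] + y[i] * y[j] * (alpha[j] - aj)
    return ai, aj
```

A full solver loops this update over heuristically chosen pairs until the KKT conditions hold.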
5

Jr-ShiangPeng and 彭志祥. "Hardware and Software Co-design of Silicon Intellectual Property Module Based on Sequential Minimal Optimization algorithm for Speaker Recognition." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/72913970118404970293.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (MS and PhD program)
ROC academic year 98 (2009-2010)
This thesis proposes a hardware/software co-design IP for an embedded text-independent speaker recognition system, bringing convenience to daily life through portable speech applications. On the hardware side, the Sequential Minimal Optimization (SMO) algorithm is adopted to accelerate SVM training for creating speaker models. On the software side, we modify our lab's previous fixed-point arithmetic design for both the Linear Prediction Cepstral Coefficients (LPCC) and the one-vs-one highest-voting analysis algorithm. Two schemes, heuristic selection and an efficient cache utilization method, are proposed for implementing the SMO algorithm in hardware to decrease training time. Moreover, a specific design is proposed to efficiently utilize the bus bandwidth and reduce delivery time between software and hardware communications by about 5%. Finally, our simulation/emulation results show that 90% of the training time is saved while the recognition accuracy reaches 92.7%.
APA, Harvard, Vancouver, ISO, and other styles
6

Singh, Inderjeet 1978. "Risk-averse periodic preventive maintenance optimization." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-08-4203.

Full text
Abstract:
We consider a class of periodic preventive maintenance (PM) optimization problems, for a single piece of equipment that deteriorates with time or use, and can be repaired upon failure, through corrective maintenance (CM). We develop analytical and simulation-based optimization models that seek an optimal periodic PM policy, which minimizes the sum of the expected total cost of PMs and the risk-averse cost of CMs, over a finite planning horizon. In the simulation-based models, we assume that both types of maintenance actions are imperfect, whereas our analytical models consider imperfect PMs with minimal CMs. The effectiveness of maintenance actions is modeled using age reduction factors. For a repairable unit of equipment, its virtual age, and not its calendar age, determines the associated failure rate. Therefore, two sets of parameters, one describing the effectiveness of maintenance actions, and the other that defines the underlying failure rate of a piece of equipment, are critical to our models. Under a given maintenance policy, the two sets of parameters and a virtual-age-based age-reduction model, completely define the failure process of a piece of equipment. In practice, the true failure rate, and exact quality of the maintenance actions, cannot be determined, and are often estimated from the equipment failure history. We use a Bayesian approach to parameter estimation, under which a random-walk-based Gibbs sampler provides posterior estimates for the parameters of interest. Our posterior estimates for a few datasets from the literature, are consistent with published results. Furthermore, our computational results successfully demonstrate that our Gibbs sampler is arguably the obvious choice over a general rejection sampling-based parameter estimation method, for this class of problems. 
We present a general simulation-based periodic PM optimization model, which uses the posterior estimates to simulate the number of operational equipment failures, under a given periodic PM policy. Optimal periodic PM policies, under the classical maximum likelihood (ML) and Bayesian estimates are obtained for a few datasets. Limitations of the ML approach are revealed for a dataset from the literature, in which the use of ML estimates of the parameters, in the maintenance optimization model, fails to capture a trivial optimal PM policy. Finally, we introduce a single-stage and a two-stage formulation of the risk-averse periodic PM optimization model, with imperfect PMs and minimal CMs. Such models apply to a class of complex equipment with many parts, operational failures of which are addressed by replacing or repairing a few parts, thereby not affecting the failure rate of the equipment under consideration. For general values of PM age reduction factors, we provide sufficient conditions to establish the convexity of the first and second moments of the number of failures, and the risk-averse expected total maintenance cost, over a finite planning horizon. For increasing Weibull rates and a general class of increasing and convex failure rates, we show that these convexity results are independent of the PM age reduction factors. In general, the optimal periodic PM policy under the single-stage model is no better than the optimal two-stage policy. But if PMs are assumed perfect, then we establish that the single-stage and the two-stage optimization models are equivalent.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Sequential Minimal Optimization (SMO) algorithm"

1

Hendrix, Eligius M. T., and Ana Maria A. C. Rocha. "On Local Convergence of Stochastic Global Optimization Algorithms." In Computational Science and Its Applications – ICCSA 2021, 456–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86976-2_31.

Full text
Abstract:
In engineering optimization with continuous variables, the use of Stochastic Global Optimization (SGO) algorithms is popular due to the easy availability of codes. All algorithms have a global and local search character, where the global behaviour tries to avoid getting trapped in local optima and the local behaviour intends to reach the lowest objective function values. As the algorithm parameter set includes a final convergence criterion, the algorithm might be running for a while around a reached minimum point. Our question deals with the local search behaviour after the algorithm reached the final stage. How fast do practical SGO algorithms actually converge to the minimum point? To investigate this question, we run implementations of well-known SGO algorithms in a final local phase stage.
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Xinyue, and Jun Guo. "An Algorithm for Parallelizing Sequential Minimal Optimization." In Neural Information Processing, 657–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42042-9_81.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pisinger, David. "A minimal algorithm for the Bounded Knapsack Problem." In Integer Programming and Combinatorial Optimization, 95–109. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-59408-6_44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tang, Maolin. "An Adaptive Genetic Algorithm for the Minimal Switching Graph Problem." In Evolutionary Computation in Combinatorial Optimization, 224–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-31996-2_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Guo, Jun, Norikazu Takahashi, and Tetsuo Nishi. "A Novel Sequential Minimal Optimization Algorithm for Support Vector Regression." In Neural Information Processing, 827–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11893028_92.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

León-Javier, Alejandro, Nareli Cruz-Cortés, Marco A. Moreno-Armendáriz, and Sandra Orantes-Jiménez. "Finding Minimal Addition Chains with a Particle Swarm Optimization Algorithm." In MICAI 2009: Advances in Artificial Intelligence, 680–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-05258-3_60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rodriguez-Cristerna, Arturo, and Jose Torres-Jimenez. "A Genetic Algorithm for the Problem of Minimal Brauer Chains for Large Exponents." In Soft Computing Applications in Optimization, Control, and Recognition, 27–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35323-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Candel, Diego, Ricardo Ñanculef, Carlos Concha, and Héctor Allende. "A Sequential Minimal Optimization Algorithm for the All-Distances Support Vector Machine." In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 484–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16687-7_64.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Matei, Alexander, and Stefan Ulbrich. "Detection of Model Uncertainty in the Dynamic Linear-Elastic Model of Vibrations in a Truss." In Lecture Notes in Mechanical Engineering, 281–95. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77256-7_22.

Full text
Abstract:
Dynamic processes have always been of profound interest for scientists and engineers alike. Often, the mathematical models used to describe and predict time-variant phenomena are uncertain in the sense that governing relations between model parameters, state variables and the time domain are incomplete. In this paper we adopt a recently proposed algorithm for the detection of model uncertainty and apply it to dynamic models. This algorithm combines parameter estimation, optimum experimental design and classical hypothesis testing within a probabilistic frequentist framework. The best setup of an experiment is defined by optimal sensor positions and optimal input configurations which both are the solution of a PDE-constrained optimization problem. The data collected by this optimized experiment then leads to variance-minimal parameter estimates. We develop efficient adjoint-based methods to solve this optimization problem with SQP-type solvers. The crucial test which a model has to pass is conducted over the claimed true values of the model parameters which are estimated from pairwise distinct data sets. For this hypothesis test, we divide the data into k equally-sized parts and follow a k-fold cross-validation procedure. We demonstrate the usefulness of our approach in simulated experiments with a vibrating linear-elastic truss.
APA, Harvard, Vancouver, ISO, and other styles
10

Şahin, Durmuş Özkan, and Erdal Kılıç. "An Extensive Text Mining Study for the Turkish Language." In Research Anthology on Implementing Sentiment Analysis Across Multiple Disciplines, 690–724. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-6303-1.ch037.

Full text
Abstract:
In this study, the authors give both theoretical and experimental information about text mining, one of the natural language processing topics. Three text mining problems, news classification, sentiment analysis, and author recognition, are discussed for Turkish. The authors aim to reduce the running time and increase the performance of machine learning algorithms. Four machine learning algorithms and two feature selection metrics are used to solve these text classification problems. The classification algorithms are random forest (RF), logistic regression (LR), naive Bayes (NB), and sequential minimal optimization (SMO); chi-square and information gain are used as the feature selection metrics. The highest classification performance achieved in this study is 0.895 according to the F-measure, obtained by using the SMO classifier and the information gain metric for news classification. This study is important for comparing the performance of classification algorithms and feature selection methods.
APA, Harvard, Vancouver, ISO, and other styles
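The SMO classifier used in the study above solves the SVM dual problem by optimizing two Lagrange multipliers at a time. A minimal sketch of the *simplified* variant (random choice of the second multiplier, linear kernel, toy data invented for illustration) looks like this:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=10, seed=0):
    """Simplified SMO: toy sketch, not a production SVM trainer."""
    rng = random.Random(seed)
    m = len(X)
    alphas = [0.0] * m
    b = 0.0

    def f(x):  # decision function: sum_i alpha_i y_i <x_i, x> + b
        return sum(alphas[i] * y[i] * dot(X[i], x) for i in range(m)) + b

    passes, sweeps = 0, 0
    while passes < max_passes and sweeps < 1000:  # cap guarantees termination
        sweeps += 1
        changed = 0
        for i in range(m):
            E_i = f(X[i]) - y[i]
            if (y[i] * E_i < -tol and alphas[i] < C) or (y[i] * E_i > tol and alphas[i] > 0):
                j = rng.choice([k for k in range(m) if k != i])
                E_j = f(X[j]) - y[j]
                a_i, a_j = alphas[i], alphas[j]
                # Bounds L, H keep the pair on the equality constraint inside [0, C].
                if y[i] != y[j]:
                    L, H = max(0.0, a_j - a_i), min(C, C + a_j - a_i)
                else:
                    L, H = max(0.0, a_i + a_j - C), min(C, a_i + a_j)
                if L == H:
                    continue
                eta = 2 * dot(X[i], X[j]) - dot(X[i], X[i]) - dot(X[j], X[j])
                if eta >= 0:
                    continue
                a_j_new = min(H, max(L, a_j - y[j] * (E_i - E_j) / eta))
                if abs(a_j_new - a_j) < 1e-5:
                    continue
                a_i_new = a_i + y[i] * y[j] * (a_j - a_j_new)
                # Update the threshold b from the KKT conditions of the two points.
                b1 = b - E_i - y[i] * (a_i_new - a_i) * dot(X[i], X[i]) \
                     - y[j] * (a_j_new - a_j) * dot(X[i], X[j])
                b2 = b - E_j - y[i] * (a_i_new - a_i) * dot(X[i], X[j]) \
                     - y[j] * (a_j_new - a_j) * dot(X[j], X[j])
                alphas[i], alphas[j] = a_i_new, a_j_new
                if 0 < a_i_new < C:
                    b = b1
                elif 0 < a_j_new < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alphas, b, f

# Toy linearly separable data (illustrative only).
X = [(2, 0), (3, 1), (2, 2), (-2, 0), (-3, -1), (-2, -2)]
y = [1, 1, 1, -1, -1, -1]
alphas, b, f = smo_train(X, y)
print([round(a, 3) for a in alphas], round(b, 3))
```

Because each two-multiplier subproblem has a closed-form solution, SMO avoids the large quadratic-programming solves of earlier SVM training methods; Platt's full algorithm adds heuristics for choosing the pair instead of picking the second index at random.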

Conference papers on the topic "Minimal Optimization (SMO) algorithm"

1

Bidar, Mahdi, and Malek Mouhoub. "Discrete Particle Swarm Optimization Algorithm for Dynamic Constraint Satisfaction with Minimal Perturbation." In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 2019. http://dx.doi.org/10.1109/smc.2019.8914496.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wilding, Paul R., Nathan R. Murray, and Matthew J. Memmott. "Design Optimization of PERCS in RELAP5 Using Parallel Processing and a Multi-Objective Non-Dominated Sorting Genetic Algorithm." In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-82389.

Full text
Abstract:
Multi-objective optimization is a powerful tool that has been successfully applied to many fields but has seen minimal use in the design and development of nuclear power plant systems. When applied to design, multi-objective optimization involves the manipulation of key design parameters in order to develop optimal designs. These design parameters include continuous and/or discrete variables and represent the physical design specifications. They are modified across a specific design space to accomplish a number of set objective functions, representing the goals for both system design and performance, which conflict and cannot be combined into a single objective function. In this paper, a non-dominated sorting genetic algorithm (NSGA) and parallel processing in Python 3 were used to optimize the design of the passive endothermic reaction cooling system (PERCS) model developed in RELAP5/MOD 3.3. This system has been proposed as a retrofit to currently-operating light water reactors (LWR) and is designed to remove decay heat from the reactor core via the endothermic decomposition of magnesium carbonate (MgCO3) and natural circulation of the reactor coolant. The PERCS design is currently a shell-and-tube heat exchanger, with the coolant flowing through the tube side and MgCO3 on the shell side. During a station blackout (SBO), the PERCS initially keeps the reactor core outlet temperature from exceeding 635 K and then reduces it to below 620 K for 30 days. The optimization of the PERCS was performed with three different objectives: (1) minimization of equipment costs, (2) minimization of deviation of the core outlet temperature during a SBO from its normal operation steady-state value, and (3) minimization of fractional consumption of MgCO3, a metric that is measurable and directly related to the operating time of the PERCS. 
The manipulated parameters of the optimization include the radius of the PERCS shell, the pitch, hydraulic diameter, thickness and length of the PERCS tubes, and the elevation of the PERCS with respect to the reactor core. The NSGA methodology works by creating a population of PERCS options with varying design parameters. Using the evolutionary concepts of selection, reproduction, mutation, and survival of the fittest, the NSGA method repeatedly generates new PERCS options and gets rid of less fit ones. In the end, the result was a Pareto front of PERCS designs, each thermodynamically viable and optimal with respect to the three objectives. The Pareto front of options as a whole represents the optimized trade-off between the objectives.
APA, Harvard, Vancouver, ISO, and other styles
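The core of an NSGA-style optimizer like the one in the abstract above is non-dominated sorting: partitioning a population into successive Pareto fronts. A minimal sketch (the three objective tuples below are invented minimization targets, not the actual PERCS cost/temperature/consumption values):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition `points` (tuples of objective values to minimize) into Pareto fronts."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # A point is in the current front if nothing still remaining dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Three objectives to minimize, e.g. (cost, temperature deviation, consumption).
designs = [(1.0, 5.0, 0.3), (2.0, 4.0, 0.2), (3.0, 6.0, 0.4), (1.0, 5.0, 0.5)]
fronts = non_dominated_sort(designs)
print(fronts)  # → [[0, 1], [2, 3]]
```

The first front returned is the Pareto frontier of optimized trade-offs; selection pressure in NSGA then favors earlier fronts when breeding the next generation.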
3

Li, C. R., and J. Guo. "An Improved Algorithm for Parallelizing Sequential Minimal Optimization." In 2015 International Conference on Industrial Technology and Management Science. Paris, France: Atlantis Press, 2015. http://dx.doi.org/10.2991/itms-15.2015.331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Ya-Zhou, Hong-Xun Yao, Wen Gao, and De-Bin Zhao. "Single sequential minimal optimization: an improved SVMs training algorithm." In Proceedings of 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Qian, Yong-Jie Zhai, and Pu Han. "Sequential Minimal Optimization Algorithm Applied in Short-Term Load Forecasting." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kuan, Ta-Wen, Jhing-Fa Wang, Jia-Ching Wang, and Gaung-Hui Gu. "VLSI design of sequential minimal optimization algorithm for SVM learning." In 2009 IEEE International Symposium on Circuits and Systems - ISCAS 2009. IEEE, 2009. http://dx.doi.org/10.1109/iscas.2009.5118311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lu, Changhua, Xiaokang Deng, Chun Liu, and Yong Wang. "An Optimization Algorithm for Computing the Minimal Test Set of Circuits." In 2008 International Symposium on Intelligent Information Technology Application Workshops (IITAW). IEEE, 2008. http://dx.doi.org/10.1109/iita.workshops.2008.186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Du, Zhiyong, Zuolin Dong, Peixin Qu, and Xianfang Wang. "Fuzzy support vector machine based on improved sequential minimal optimization algorithm." In 2010 International Conference On Computer and Communication Technologies in Agriculture Engineering (CCTAE). IEEE, 2010. http://dx.doi.org/10.1109/cctae.2010.5543317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Collette, M. D., X. Wang, and J. Li. "Ultimate Strength and Optimization of Aluminum Extrusions." In SNAME Maritime Convention. SNAME, 2009. http://dx.doi.org/10.5957/smc-2009-035.

Full text
Abstract:
Recent large aluminum high-speed vessels have made use of custom extrusions to efficiently construct large flat structures including internal decks, wet decks, and side shell components. In this paper an efficient method for designing and optimizing such extrusions to minimize structural weight is presented. Strength methods for extrusions under in-plane and out-of-plane loads are briefly reviewed and shortcomings in existing aluminum strength prediction methods for marine design are discussed. A multi-objective optimizer using a genetic algorithm approach is presented; this optimizer was designed to quickly generate Pareto frontiers linking designs of minimum weight for a wide range of strength levels. The method was used to develop strength vs. weight Pareto frontiers for extruded panels for a main vehicle deck and a strength deck location on a nominal high-speed vessel.
APA, Harvard, Vancouver, ISO, and other styles
10

Qian, Zhou, Zhai Yong Jie, and Han Pu. "The Application of Sequential Minimal Optimization Algorithm In Short-term Load Forecasting." In 2007 Chinese Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/chicc.2006.4346950.

Full text
APA, Harvard, Vancouver, ISO, and other styles