Scientific literature on the topic "Machine learning. Heuristic programming. Set theory"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Machine learning. Heuristic programming. Set theory".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Machine learning. Heuristic programming. Set theory"

1

Chen, Po-Chi, and Suh-Yin Lee. "An Improvement to Top-Down Clause Specialization." International Journal on Artificial Intelligence Tools 07, no. 01 (March 1998): 71–102. http://dx.doi.org/10.1142/s0218213098000068.

Full text
Abstract:
One remarkable advance in recent machine learning research is inductive logic programming (ILP). In most ILP systems, clause specialization is one of the most important tasks. Usually, clause specialization is performed by adding one literal at a time using hill-climbing heuristics. However, single-literal addition can be trapped in local pits when more than one literal needs to be added at a time to increase accuracy. Several techniques have been proposed for this problem but are restricted to relational domains. In this paper, we propose a technique called structure subtraction to construct a set of candidates for adding literals, single-literal or multiple-literal. This technique can be employed in any ILP system using top-down specialization and is not restricted to relational domains. A theory revision system is described to illustrate the use of structure subtraction.
APA, Harvard, Vancouver, ISO, and other styles
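
The hill-climbing specialization loop this abstract improves upon can be pictured with a minimal sketch: greedily add the single candidate literal that most improves a coverage-based score, stopping when nothing helps. The clause/literal encoding and the scoring function below are illustrative assumptions, not the paper's structure-subtraction method.

```python
# Minimal sketch of greedy top-down clause specialization (hill climbing).
# Clause/literal representation and the scoring function are assumptions.

def score(clause, positives, negatives, covers):
    """Higher is better: covered positives minus covered negatives."""
    return (sum(covers(clause, e) for e in positives)
            - sum(covers(clause, e) for e in negatives))

def specialize(clause, candidate_literals, positives, negatives, covers):
    """Add one literal at a time while the score keeps improving.

    This is the single-literal hill climbing that can get trapped when only
    a conjunction of several literals would improve accuracy.
    """
    best = score(clause, positives, negatives, covers)
    improved = True
    while improved:
        improved = False
        for lit in candidate_literals:
            trial = clause + [lit]
            s = score(trial, positives, negatives, covers)
            if s > best:
                clause, best, improved = trial, s, True
    return clause

# Toy usage: examples are integers, literals are named predicates.
pos, neg = [4, 6, 8], [1, 3, 9]
literals = [("even", lambda x: x % 2 == 0), ("gt2", lambda x: x > 2), ("lt9", lambda x: x < 9)]
covers = lambda clause, e: all(pred(e) for _, pred in clause)
print([name for name, _ in specialize([], literals, pos, neg, covers)])  # ['even']
```
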
2

Wang, Chao, Minghu Ha, Jiqiang Chen, and Hongjie Xing. "Support Vector Machine Based on Random Set Samples." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, supp01 (July 2013): 101–12. http://dx.doi.org/10.1142/s0218488513400084.

Full text
Abstract:
In order to deal with learning problems over random set samples encountered in the real world, a new support vector machine based on random set samples is constructed according to random set theory and convex quadratic programming. Experimental results show that the new support vector machine is feasible and effective.
APA, Harvard, Vancouver, ISO, and other styles
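
The classifier in this entry is built on the standard soft-margin SVM, which is trained by solving a convex quadratic program. The random-set sample construction itself is not reproduced here; the sketch below only shows the underlying QP-based SVM baseline with scikit-learn on synthetic point data.

```python
# Baseline soft-margin SVM (a convex QP under the hood) with scikit-learn.
# Synthetic Gaussian data stands in for the paper's random-set samples (assumption).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # dual QP solved internally
print("test accuracy:", clf.score(X_te, y_te))
```
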
3

Sokolov, I. A. "Theory and practice in artificial intelligence." Вестник Российской академии наук 89, no. 4 (April 24, 2019): 365–70. http://dx.doi.org/10.31857/s0869-5873894365-370.

Full text
Abstract:
Artificial Intelligence is an interdisciplinary field that formed about 60 years ago as an interaction between mathematical methods, computer science, psychology, and linguistics. Artificial Intelligence is an experimental science and today features a number of internally designed theoretical methods: knowledge representation, modeling of reasoning and behavior, textual analysis, and data mining. Within the framework of Artificial Intelligence, novel scientific domains have arisen: non-monotonic logic, description logic, heuristic programming, expert systems, and knowledge-based software engineering. Increasing interest in Artificial Intelligence in recent years is related to the development of promising new technologies based on specific methods such as knowledge discovery (or machine learning), natural language processing, autonomous unmanned intelligent systems, and hybrid human-machine intelligence.
APA, Harvard, Vancouver, ISO, and other styles
4

Masrom, Suraya, Masurah Mohamad, Shahirah Mohamed Hatim, Norhayati Baharun, Nasiroh Omar, and Abdullah Sani Abd. Rahman. "Different mutation and crossover set of genetic programming in an automated machine learning." IAES International Journal of Artificial Intelligence (IJ-AI) 9, no. 3 (September 1, 2020): 402. http://dx.doi.org/10.11591/ijai.v9.i3.pp402-408.

Full text
Abstract:
Automated machine learning is a promising approach widely used to solve classification and prediction problems, and it currently receives much attention for modification and improvement. One line of ongoing work on improving automated machine learning is the inclusion of evolutionary algorithms such as Genetic Programming. The function of Genetic Programming is to optimize the best combination of solutions from the possible pipelines of machine learning modelling, including the selection of algorithms and the optimization of the selected algorithm's parameters. As a family of evolutionary algorithms, the effectiveness of Genetic Programming in providing the best machine learning pipelines for a given problem or dataset depends substantially on the algorithm's parameterization, including the mutation and crossover rates. This paper presents the effect of different pairs of mutation and crossover rates on automated machine learning performance, tested on different types of datasets. The findings support the theory that higher crossover rates tend to improve the algorithm's accuracy score, while lower crossover rates may cause the algorithm to converge at an earlier stage.
APA, Harvard, Vancouver, ISO, and other styles
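
A concrete way to reproduce this kind of experiment is with a GP-based AutoML tool that exposes mutation and crossover rates directly. TPOT (classic 0.x API) is one such tool; the abstract does not say which tool the authors used, so the choice of TPOT and the specific rate pairs below are assumptions (TPOT requires the two rates to sum to at most 1.0).

```python
# Sketch: varying GP mutation/crossover rates in a TPOT AutoML run (assumed tool).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for mut, cross in [(0.9, 0.1), (0.7, 0.3), (0.5, 0.5)]:   # illustrative rate pairs
    tpot = TPOTClassifier(generations=5, population_size=20,
                          mutation_rate=mut, crossover_rate=cross,
                          random_state=0, verbosity=0)
    tpot.fit(X_tr, y_tr)
    print(f"mutation={mut}, crossover={cross}, score={tpot.score(X_te, y_te):.3f}")
```
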
5

Maratea, Marco, Luca Pulina, and Francesco Ricca. "A multi-engine approach to answer-set programming." Theory and Practice of Logic Programming 14, no. 6 (August 15, 2013): 841–68. http://dx.doi.org/10.1017/s1471068413000094.

Full text
Abstract:
Answer-set programming (ASP) is a truly declarative programming paradigm proposed in the area of non-monotonic reasoning and logic programming, which has recently been employed in many applications. The development of efficient ASP systems is, thus, crucial. Having in mind the task of improving the solving methods for ASP, there are two usual ways to reach this goal: (i) extending state-of-the-art techniques and ASP solvers, or (ii) designing a new ASP solver from scratch. An alternative to these trends is to build on top of state-of-the-art solvers and to apply machine learning techniques for automatically choosing the "best" available solver on a per-instance basis. In this paper, we pursue this latter direction. We first define a set of cheap-to-compute syntactic features that characterize several aspects of ASP programs. Then, we apply classification methods that, given the features of the instances in a training set and the solvers' performance on these instances, inductively learn algorithm selection strategies to be applied to a test set. We report the results of a number of experiments considering solvers and different training and test sets of instances taken from the ones submitted to the "System Track" of the Third ASP Competition. Our analysis shows that by applying machine learning techniques to ASP solving, it is possible to obtain very robust performance: our approach can solve more instances than any solver that entered the Third ASP Competition.
APA, Harvard, Vancouver, ISO, and other styles
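
The per-instance solver selection described here reduces to a standard classification pipeline: compute cheap syntactic features of each ASP program, learn a mapping from features to the best-performing solver on training instances, and dispatch unseen instances accordingly. The feature vectors and solver names below are placeholders, not the ones used in the paper.

```python
# Sketch of per-instance algorithm (solver) selection via classification.
# Features and solver labels are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: cheap syntactic features of one ASP program (e.g. #rules, #atoms, ...).
train_features = np.random.rand(200, 6)
best_solver    = np.random.choice(["solverA", "solverB", "solverC"], size=200)

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(train_features, best_solver)

def choose_solver(program_features):
    """Return the solver predicted to perform best on this instance."""
    return selector.predict(np.asarray(program_features).reshape(1, -1))[0]

print(choose_solver(np.random.rand(6)))
```
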
6

Suehiro, Daiki, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, and Akiko Takeda. "Theory and Algorithms for Shapelet-Based Multiple-Instance Learning." Neural Computation 32, no. 8 (August 2020): 1580–613. http://dx.doi.org/10.1162/neco_a_01297.

Full text
Abstract:
We propose a new formulation of multiple-instance learning (MIL), in which a unit of data consists of a set of instances called a bag. The goal is to find a good classifier of bags based on the similarity with a "shapelet" (or pattern), where the similarity of a bag with a shapelet is the maximum similarity of instances in the bag. In previous work, some of the training instances have been chosen as shapelets with no theoretical justification. In our formulation, we use all possible, and thus infinitely many, shapelets, resulting in a richer class of classifiers. We show that the formulation is tractable, that is, it can be reduced through linear programming boosting (LPBoost) to difference of convex (DC) programs of finite (actually polynomial) size. Our theoretical result also gives justification to the heuristics of some previous work. The time complexity of the proposed algorithm highly depends on the size of the set of all instances in the training sample. To apply the method to data containing a large number of instances, we also propose a heuristic option of the algorithm without losing the theoretical guarantee. Our empirical study demonstrates that our algorithm uniformly works for shapelet learning tasks on time-series classification and various MIL tasks with accuracy comparable to the existing methods. Moreover, we show that the proposed heuristics allow us to achieve the result in reasonable computational time.
APA, Harvard, Vancouver, ISO, and other styles
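
The central quantity in this formulation, the similarity of a bag with a shapelet taken as the maximum similarity over the bag's instances, is easy to state in code. The sketch below uses negative squared Euclidean distance as the similarity; the actual kernel, the LPBoost reduction and the DC programs are not reproduced.

```python
# Bag-shapelet similarity for multiple-instance learning:
# similarity(bag, shapelet) = max over instances in the bag (assumed similarity measure).
import numpy as np

def bag_similarity(bag, shapelet):
    """bag: (n_instances, d) array; shapelet: (d,) array."""
    dists = np.sum((bag - shapelet) ** 2, axis=1)
    return -dists.min()          # max similarity = smallest squared distance, negated

def bag_features(bags, shapelets):
    """Map each bag to its vector of similarities with every shapelet."""
    return np.array([[bag_similarity(b, s) for s in shapelets] for b in bags])

bags = [np.random.rand(np.random.randint(3, 8), 4) for _ in range(10)]
shapelets = np.random.rand(5, 4)
print(bag_features(bags, shapelets).shape)   # (10, 5)
```
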
7

Mitra, Arindam, and Chitta Baral. "Incremental and Iterative Learning of Answer Set Programs from Mutually Distinct Examples." Theory and Practice of Logic Programming 18, no. 3-4 (July 2018): 623–37. http://dx.doi.org/10.1017/s1471068418000248.

Full text
Abstract:
Over the years the Artificial Intelligence (AI) community has produced several datasets which have given machine learning algorithms the opportunity to learn various skills across various domains. However, a subclass of these machine learning algorithms aimed at learning logic programs, namely the Inductive Logic Programming algorithms, has often failed at the task due to the vastness of these datasets. This has impacted the usability of knowledge representation and reasoning techniques in the development of AI systems. In this research, we try to address this scalability issue for algorithms that learn answer set programs. We present a sound and complete algorithm which takes the input in a slightly different manner and performs an efficient and more user-controlled search for a solution. We show via experiments that our algorithm can learn from two popular datasets from the machine learning community, namely bAbI (a question answering dataset) and MNIST (a dataset for handwritten digit recognition), which to the best of our knowledge was not previously possible. The system is publicly available at https://goo.gl/KdWAcV.
APA, Harvard, Vancouver, ISO, and other styles
8

El'shin, L. A., A. M. Gil'manov, and V. V. Banderov. "Forecasting trends in the cryptocurrency exchange rate through the machine learning theory." Financial Analytics: Science and Experience 13, no. 1 (February 28, 2020): 97–113. http://dx.doi.org/10.24891/fa.13.1.97.

Full text
Abstract:
Subject. The study discusses methodological approaches to forecasting trends in the development of the cryptocurrency market (bitcoin). Objectives. The study aims to discover and explain tools and mechanisms for predicting how the cryptocurrency market may evolve in the short run through time series modeling methods and machine learning methods based on LSTM artificial neural networks. Methods. Using Python-based programming methods, we constructed and substantiated a neural network model for the analyzed series describing how the exchange rate of bitcoin develops. Results. By matching loss functions, the optimizer and the parameters for constructing a neural network that predicts the BTC/USD exchange rate for the coming day, we proved its applicability and feasibility, which is confirmed by the lowest number of errors on the test and validation sets. Conclusions and Relevance. The findings mainly prove that the above mechanism, based on algorithms for constructing LSTM networks, is feasible for predicting the cryptocurrency market. The approach should be used to analyze and evaluate the current and future parameters of cryptocurrency market development. The tools can be of interest to investors who operate in new e-money markets.
APA, Harvard, Vancouver, ISO, and other styles
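
The pipeline the abstract describes, sliding windows over the BTC/USD series fed to an LSTM that predicts the next day, can be sketched with Keras. The window length, layer size, loss and optimizer below are assumptions for illustration; the authors' exact architecture and hyperparameters are not reproduced.

```python
# Sketch of a next-day LSTM forecaster on a univariate price series (Keras).
# Window size, layer width, loss and optimizer are illustrative assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

prices = np.cumsum(np.random.randn(500)) + 100.0    # stand-in for BTC/USD closes
window = 30

X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., np.newaxis]                               # (samples, timesteps, features)

model = Sequential([LSTM(32, input_shape=(window, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)

next_day = model.predict(prices[-window:].reshape(1, window, 1), verbose=0)
print("predicted next value:", float(next_day[0, 0]))
```
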
9

Katzouris, Nikos, Alexander Artikis, and Georgios Paliouras. "Online learning of event definitions." Theory and Practice of Logic Programming 16, no. 5-6 (September 2016): 817–33. http://dx.doi.org/10.1017/s1471068416000260.

Full text
Abstract:
Systems for symbolic event recognition infer occurrences of events in time using a set of event definitions in the form of first-order rules. The Event Calculus is a temporal logic that has been used as a basis in event recognition applications, providing, among others, direct connections to machine learning via Inductive Logic Programming (ILP). We present an ILP system for online learning of Event Calculus theories. To allow for a single-pass learning strategy, we use the Hoeffding bound for evaluating clauses on a subset of the input stream. We employ a decoupling scheme of the Event Calculus axioms during the learning process that allows each clause to be learned in isolation. Moreover, we use abductive-inductive logic programming techniques to handle unobserved target predicates. We evaluate our approach on an activity recognition application and compare it to a number of batch learning techniques. We obtain results of comparable predictive accuracy with significant speed-ups in training time. We also outperform hand-crafted rules and match the performance of a sound incremental learner that can only operate on noise-free datasets.
APA, Harvard, Vancouver, ISO, and other styles
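
The single-pass strategy rests on the Hoeffding bound: after n observations of a score lying in a range of width R, the observed mean is within ε = sqrt(R² ln(1/δ) / (2n)) of the true mean with probability 1−δ, so two candidate clauses can be separated as soon as their observed gap exceeds ε. A minimal sketch of that decision rule, with the score range assumed to be [0, R]:

```python
# Hoeffding-bound test as used in single-pass (online) clause selection.
import math

def hoeffding_epsilon(value_range, delta, n):
    """Deviation bound after n observations of a score in [0, value_range]."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def can_prefer(mean_best, mean_second, value_range, delta, n):
    """True if the best clause provably beats the runner-up (with prob. 1 - delta)."""
    return (mean_best - mean_second) > hoeffding_epsilon(value_range, delta, n)

print(can_prefer(0.82, 0.74, value_range=1.0, delta=0.05, n=500))   # True
```
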
10

Qu, Qiang, Ming Qi Chang, Lei Xu, Yue Wang, and Shao Hua Lu. "Support Vector Machine-Based Aqueduct Safety Assessment." Advanced Materials Research 368-373 (October 2011): 531–36. http://dx.doi.org/10.4028/www.scientific.net/amr.368-373.531.

Full text
Abstract:
Based on the hydraulic, structural and foundation conditions of aqueducts, an aqueduct safety assessment indicator system and standards have been established. Grounded in statistical learning theory, the support vector machine turns the learning problem into a convex quadratic programming problem under the structural risk minimization criterion, which yields the global optimal solution and is applicable to small-sample, nonlinear classification and regression problems. In order to evaluate the safety condition of aqueducts, an aqueduct safety assessment model based on the support vector machine has been established. The safety standards are divided into normal, basically normal, abnormal and dangerous. According to the aqueduct safety assessment standards and the respective evaluation levels, the sample set is generated randomly and used to build pairwise classifiers with many support vectors. The results show that the method is feasible and has good application prospects in the safety assessment of irrigation district canal structures.
APA, Harvard, Vancouver, ISO, and other styles
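
One way to read the training-set construction described here is: draw random indicator vectors uniformly within the interval assigned to each safety grade, then fit a multi-class (pairwise) SVM. The indicator bounds and the uniform-sampling interpretation below are assumptions for illustration only.

```python
# Sketch: random samples drawn inside each grade's indicator intervals, then SVM.
# Per-grade bounds and the sampling scheme are invented for illustration.
import numpy as np
from sklearn.svm import SVC

grade_bounds = {                                   # (low, high) per indicator
    "normal":           [(0.8, 1.0), (0.8, 1.0), (0.8, 1.0)],
    "basically normal": [(0.6, 0.8), (0.6, 0.8), (0.6, 0.8)],
    "abnormal":         [(0.4, 0.6), (0.4, 0.6), (0.4, 0.6)],
    "dangerous":        [(0.0, 0.4), (0.0, 0.4), (0.0, 0.4)],
}

rng = np.random.default_rng(0)
X, y = [], []
for grade, bounds in grade_bounds.items():
    lows, highs = zip(*bounds)
    X.append(rng.uniform(lows, highs, size=(100, len(bounds))))
    y += [grade] * 100
X = np.vstack(X)

clf = SVC(decision_function_shape="ovo").fit(X, y)   # pairwise (one-vs-one) SVMs
print(clf.predict([[0.75, 0.55, 0.9]]))
```
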
More sources

Theses on the topic "Machine learning. Heuristic programming. Set theory"

1

Perry, Kristine. "Heuristic weighted voting." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd2120.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ciarleglio, Michael Ian. "Modular Abstract Self-learning Tabu Search (MASTS): metaheuristic search theory and practice." 2008. http://hdl.handle.net/2152/18086.

Full text
Abstract:
MASTS is an extensible, feature-rich software architecture based on tabu search (TS), a metaheuristic that relies on memory structures to intelligently organize and navigate the search space. MASTS introduces a new methodology of rule-based objectives (RBOs), in which the search objective is replaced with a binary comparison operator more capable of expressing a variety of preferences. In addition, MASTS supports a new metastrategy, dynamic neighborhood selection (DNS), which "learns" about the search landscape to implement an adaptive intensification-diversification strategy. DNS can improve search performance by directing the search to promising regions and reducing the number of required evaluations. To demonstrate its flexibility and range of capabilities, MASTS is applied to two complex decision problems in conservation planning and groundwater management. As an extension of MASTS, ConsNet addresses the spatial conservation area network design problem (SCANP) in conservation biology. Given a set of possible geographic reserve sites, the goal is to select which sites to place under conservation to preserve unique elements of biodiversity. Structurally, this problem resembles the NP-hard set cover problem, but it also considers additional spatial criteria including compactness, connectivity, and replication. Modeling the conservation network as a graph, ConsNet uses novel techniques to quickly compute these spatial criteria, exceeding the capabilities of classical optimization methods and prior planning software. In the arena of groundwater planning, MASTS demonstrates extraordinary flexibility as both an advanced search engine and a decision aid. In House Bill 1763, the Texas state legislature mandates that individual Groundwater Conservation Districts (GCDs) must work together to set specific management goals for the future condition of regional groundwater resources. This complex multi-agent, multi-criteria decision problem involves finding the best way to meet these goals considering a host of decision variables such as pumping locations, groundwater extraction rates, and drought management policies. In two separate projects, MASTS has shaped planning decisions in the Barton Springs/Edwards Aquifer Conservation District and Groundwater Management Area 9 (GMA9). The software has been an invaluable decision support tool for planners, stakeholders, and scientists alike, allowing users to explore the problem from a multicriteria perspective.
APA, Harvard, Vancouver, ISO, and other styles
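
The memory structure at the heart of tabu search, a short-term tabu list that forbids recently used moves so the search can escape local optima, is small enough to sketch. The neighborhood function and objective below are placeholders; MASTS's rule-based objectives and dynamic neighborhood selection are not reproduced.

```python
# Minimal tabu search sketch: greedy moves gated by a short-term tabu list.
from collections import deque

def tabu_search(initial, neighbors, objective, iters=100, tenure=10):
    """neighbors(s) -> iterable of (move, state); objective(s) -> float (minimized)."""
    current, best = initial, initial
    tabu = deque(maxlen=tenure)                  # recently used moves are forbidden
    for _ in range(iters):
        candidates = [(objective(s), m, s) for m, s in neighbors(current)
                      if m not in tabu or objective(s) < objective(best)]  # aspiration
        if not candidates:
            break
        _, move, current = min(candidates)
        tabu.append(move)
        if objective(current) < objective(best):
            best = current
    return best

# Toy usage: minimize (x - 7)^2 over integers, moves are +1 / -1.
print(tabu_search(0, lambda x: [("+1", x + 1), ("-1", x - 1)], lambda x: (x - 7) ** 2))
```
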

Book chapters on the topic "Machine learning. Heuristic programming. Set theory"

1

Neto, João José. "Adaptive Technology and Its Applications." In Machine Learning, 87–96. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch108.

Full text
Abstract:
Before the advent of software engineering, the lack of memory space in computers and the absence of established programming methodologies led early programmers to use self-modification as a regular coding strategy. Although unavoidable and valuable for that class of software, solutions using self-modification proved inadequate as programs grew in size and complexity and security and reliability became major requirements. Software engineering, in the 70's, almost led to the vanishing of self-modifying software, whose occurrence was afterwards limited to small low-level machine-language programs with very special requirements. Nevertheless, recent research developed in this area, and the modern need for powerful and effective ways to represent and handle complex phenomena in high-technology computers, is leading self-modification to be considered again as an implementation choice in several situations. Artificial intelligence strongly contributed to this scenario by developing and applying non-conventional approaches, e.g. heuristics, knowledge representation and handling, inference methods, evolving software/hardware, genetic algorithms, neural networks, fuzzy systems, expert systems, machine learning, etc. In this publication, another alternative is proposed for developing Artificial Intelligence applications: the use of adaptive devices, a special class of abstractions whose practical application to the solution of current problems is called Adaptive Technology. The behavior of adaptive devices is defined by a dynamic set of rules. In this case, knowledge may be represented, stored and handled within that set of rules: adding or removing rules corresponds to adding or eliminating the information they represent. Because of the explicit way adopted for representing and acquiring knowledge, adaptivity provides a very simple abstraction for the implementation of artificial learning mechanisms: knowledge may be comfortably gathered by inserting and removing rules, and handled by tracking the evolution of the set of rules and by interpreting the collected information as a representation of the knowledge encoded in the rule set.
APA, Harvard, Vancouver, ISO, and other styles
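
The core idea, a device whose behavior is a set of rules and whose rules can themselves add or remove rules, so that learning amounts to editing the rule set, can be sketched in a few lines. The rule encoding below is an invented simplification of the adaptive-device formalism, not the chapter's formal definition.

```python
# Sketch of an adaptive rule-based device: rules may edit the rule set itself.
# The encoding (stimulus -> (action, optional adaptive edit)) is an assumption.

class AdaptiveDevice:
    def __init__(self):
        self.rules = {}                      # stimulus -> (action, adaptive_edit)

    def add_rule(self, stimulus, action, adaptive_edit=None):
        self.rules[stimulus] = (action, adaptive_edit)

    def remove_rule(self, stimulus):
        self.rules.pop(stimulus, None)

    def step(self, stimulus):
        if stimulus not in self.rules:
            return None
        action, edit = self.rules[stimulus]
        if edit is not None:
            edit(self)                       # the rule modifies the rule set itself
        return action

device = AdaptiveDevice()
# Seeing "new_word" once makes the device learn to answer it directly next time.
device.add_rule("new_word", "lookup",
                adaptive_edit=lambda d: d.add_rule("new_word", "answer_from_memory"))
print(device.step("new_word"))   # lookup  (and the rule set is rewritten)
print(device.step("new_word"))   # answer_from_memory
```
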
2

Mohammadian, M. "Designing Unsupervised Hierarchical Fuzzy Logic Systems." In Machine Learning, 253–61. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch210.

Full text
Abstract:
Systems such as robotic systems and systems with large input-output data tend to be difficult to model using mathematical techniques. These systems typically have high dimensionality and degrees of uncertainty in many parameters. Artificial intelligence techniques such as neural networks, fuzzy logic, genetic algorithms and evolutionary algorithms have created new opportunities to solve complex systems. Application of fuzzy logic [Bai, Y., Zhuang H. and Wang, D. (2006)], in particular, to model and solve industrial problems is now widespread and has universal acceptance. Fuzzy modelling or fuzzy identification has numerous practical applications in control, prediction and inference. It has been found useful when the system is either difficult to predict or difficult to model by conventional methods. Fuzzy set theory provides a means for representing uncertainties. The underlying power of fuzzy logic is its ability to represent imprecise values in an understandable form. The majority of fuzzy logic systems to date have been static and based upon knowledge derived from the imprecise heuristic knowledge of experienced operators and, where applicable, also upon the physical laws that govern the dynamics of the process. Although its application to industrial problems has often produced results superior to classical control, the design procedures are limited by the heuristic rules of the system. It is simply assumed that the rules for the system are readily available or can be obtained. This implicit assumption limits the application of fuzzy logic to systems with a few parameters. The number of parameters of a system could be large, and the number of fuzzy rules of a system is directly dependent on these parameters. As the number of parameters increases, the number of fuzzy rules of the system grows exponentially. Genetic algorithms can be used as a tool for the generation of fuzzy rules for a fuzzy logic system. This automatic generation of fuzzy rules, via genetic algorithms, can be categorised into two learning techniques, supervised and unsupervised. In this paper unsupervised learning of fuzzy rules of hierarchical and multi-layer fuzzy logic control systems is considered. In unsupervised learning there is no external teacher or critic to oversee the learning process. In other words, there are no specific examples of the function to be learned by the system. Rather, provision is made for a task-independent measure of the quality of the representation that the system is required to learn. That is, the system learns statistical regularities of the input data, develops the ability to learn the features of the input data and thereby creates new classes automatically [Mohammadian, M., Nainar, I. and Kingham, M. (1997)]. To perform unsupervised learning, a competitive learning strategy may be used. The individual strings of genetic algorithms compete with each other for the "opportunity" to respond to features contained in the input data. In its simplest form, the system operates in accordance with the strategy that 'the fittest wins and survives'. That is, the individual chromosome in a population with the greatest fitness 'wins' the competition and gets selected for the genetic algorithm operations (cross-over and mutation). The other individuals in the population then have to compete with the fit individual to survive. The diversity of the learning tasks shown in this paper indicates the genetic algorithm's universality for concept learning in an unsupervised manner.
A hybrid integrated architecture incorporating fuzzy logic and genetic algorithms can generate fuzzy rules for problems requiring supervised or unsupervised learning. In this paper only unsupervised learning of fuzzy logic systems is considered. The learning of fuzzy rules and internal parameters in an unsupervised manner is performed using genetic algorithms. Simulation results have shown that the proposed system is capable of learning the control rules for hierarchical and multi-layer fuzzy logic systems. Application areas considered are hierarchical control of a network of traffic lights and robotic systems. A first step in the construction of a fuzzy logic system is to determine which variables are fundamentally important. Any number of these decision variables may appear, but the more that are used, the larger the rule set that must be found. It is known [Raju, S., Zhou J. and Kisner, R. A. (1990), Raju G. V. S. and Zhou, J. (1993), Kingham, M., Mohammadian, M, and Stonier, R. J. (1998)] that the total number of rules in a system is an exponential function of the number of system variables. In order to design a fuzzy system with the required accuracy, the number of rules increases exponentially with the number of input variables and their associated fuzzy sets. A way to avoid the explosion of fuzzy rule bases in fuzzy logic systems is to consider Hierarchical Fuzzy Logic Control (HFLC) [Raju G. V. S. and Zhou, J. (1993)]. A learning approach based on genetic algorithms [Goldberg, D. (1989)] is discussed in this paper for the determination of the rule bases of hierarchical fuzzy logic systems.
APA, Harvard, Vancouver, ISO, and other styles
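
The rule-learning loop the chapter relies on, encoding the consequent of every fuzzy rule as one gene and letting a genetic algorithm select, cross over and mutate whole rule bases, can be sketched generically. The fitness function below is a placeholder (in the chapter it would come from simulating the controlled system, e.g. a traffic network or a robot), and the competitive "fittest wins" selection is simplified to tournament selection.

```python
# Sketch: a GA evolving fuzzy rule consequents (one gene per rule in the rule base).
import random

N_RULES, N_CONSEQUENTS, POP, GENS = 25, 5, 40, 30

def fitness(chromosome):
    """Placeholder: would simulate the fuzzy controller and score its behavior."""
    return -sum((g - 2) ** 2 for g in chromosome)

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randrange(N_CONSEQUENTS) for _ in range(N_RULES)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    while len(new_pop) < POP:
        a, b = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_RULES)               # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                        # mutation
            child[random.randrange(N_RULES)] = random.randrange(N_CONSEQUENTS)
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print("best rule base (consequent index per rule):", best)
```
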
3

Peng, Yueqi, Yunqing Liu, and Qi Li. "The Application of Improved Sparrow Search Algorithm in Sensor Networks Coverage Optimization of Bridge Monitoring." In Machine Learning and Artificial Intelligence. IOS Press, 2020. http://dx.doi.org/10.3233/faia200808.

Full text
Abstract:
Aiming at the uneven random coverage distribution of wireless sensor network nodes, the sparrow search algorithm (SSA) is used to optimize node coverage in wireless sensor networks. To improve the global search capability of SSA, the algorithm is improved and applied to wireless sensor networks for bridge monitoring. To enhance network coverage, this article uses two improvements: one uses good-point-set theory to make the initial population evenly distributed; the other introduces a weight factor to speed up convergence. Experiments have proved the reliability and rationality of the algorithm. The improved algorithm is superior to other meta-heuristic algorithms and provides a new idea for optimizing the coverage of bridge-monitoring wireless sensor networks.
APA, Harvard, Vancouver, ISO, and other styles
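
The SSA update equations and the good-point-set initialization are not reproduced here; the sketch below only shows the coverage-rate objective such an algorithm would maximize, i.e. the fraction of monitored points lying within sensing range of at least one node. Grid resolution, field size and sensing radius are illustrative assumptions.

```python
# Coverage-rate objective for WSN node placement (the quantity SSA maximizes here).
# Grid resolution, sensing radius and field size are illustrative assumptions.
import numpy as np

def coverage_rate(nodes, field=100.0, radius=10.0, grid=50):
    """Fraction of grid points within sensing radius of at least one node."""
    xs = np.linspace(0, field, grid)
    gx, gy = np.meshgrid(xs, xs)
    points = np.stack([gx.ravel(), gy.ravel()], axis=1)        # (grid*grid, 2)
    d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    return float((d2.min(axis=1) <= radius ** 2).mean())

nodes = np.random.rand(20, 2) * 100.0          # 20 randomly placed sensor nodes
print("coverage:", coverage_rate(nodes))
```
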
4

Han, Hyoil, and Ramez Elmasri. "Ontology Extraction and Conceptual Modeling for Web Information." In Information Modeling for Internet Applications, 174–88. IGI Global, 2003. http://dx.doi.org/10.4018/978-1-59140-050-9.ch009.

Full text
Abstract:
A lot of work has been done in the area of extracting data content from the Web, but less attention has been given to extracting the conceptual schemas or ontologies of underlying Web pages. The goal of the WebOntEx (Web ontology extraction) project is to make progress toward semiautomatically extracting Web ontologies by analyzing a set of Web pages that are in the same application domain. The ontology is considered a complete schema of the domain concepts. Our ontology metaconcepts are based on the extended entity-relationship (EER) model. The concepts are classified into entity types, relationships, attributes, and superclass/subclass hierarchies. WebOntEx attempts to extract ontology concepts by analyzing the use of HTML tags and by utilizing Part-of-Speech tagging. WebOntEx applies heuristic rules and machine learning techniques, in particular, inductive logic programming (ILP).
APA, Harvard, Vancouver, ISO, and other styles
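
The first stage of the pipeline described here, pulling candidate concept terms out of structurally salient HTML elements (headings, table headers, emphasized text) before filtering them with POS/heuristic rules and ILP, can be roughly sketched. The tag list and the naive filter below are assumptions; the actual WebOntEx rules and the ILP step are not reproduced.

```python
# Sketch of HTML-tag-based candidate concept extraction (first stage only).
# Tag choices and the crude filter are illustrative assumptions.
from bs4 import BeautifulSoup

html = """
<html><body>
  <h2>Apartment Listings</h2>
  <table><tr><th>Rent</th><th>Bedrooms</th><th>Location</th></tr>
         <tr><td>950</td><td>2</td><td>Arlington</td></tr></table>
  <p>Contact the <b>property manager</b> for details.</p>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
candidates = []
for tag in soup.find_all(["h1", "h2", "h3", "th", "b", "strong"]):
    text = tag.get_text(strip=True)
    if text and not text.isdigit():          # crude filter standing in for POS rules
        candidates.append(text)

print(candidates)   # ['Apartment Listings', 'Rent', 'Bedrooms', 'Location', 'property manager']
```
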

Conference papers on the topic "Machine learning. Heuristic programming. Set theory"

1

Khalil, Elias B., Bistra Dilkina, George L. Nemhauser, Shabbir Ahmed, and Yufen Shao. "Learning to Run Heuristics in Tree Search." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/92.

Full text
Abstract:
"Primal heuristics" are a key contributor to the improved performance of exact branch-and-bound solvers for combinatorial optimization and integer programming. Perhaps the most crucial question concerning primal heuristics is at which nodes they should run; the typical answer is via hard-coded rules or fixed solver parameters tuned offline by trial and error. Alternatively, a heuristic should be run when it is most likely to succeed, based on the problem instance's characteristics, the state of the search, etc. In this work, we study the problem of deciding at which node a heuristic should be run, such that the overall (primal) performance of the solver is optimized. To our knowledge, this is the first attempt at formalizing and systematically addressing this problem. Central to our approach is the use of Machine Learning (ML) for predicting whether a heuristic will succeed at a given node. We give a theoretical framework for analyzing this decision-making process in a simplified setting, propose an ML approach for modeling heuristic success likelihood, and design practical rules that leverage the ML models to dynamically decide whether to run a heuristic at each node of the search tree. Experimentally, our approach improves the primal performance of a state-of-the-art Mixed Integer Programming solver by up to 6% on a set of benchmark instances, and by up to 60% on a family of hard Independent Set instances.
APA, Harvard, Vancouver, ISO, and other styles
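
The decision rule studied here, predicting from node features whether a primal heuristic will succeed and running it only when the predicted probability clears a threshold, is a thin wrapper around any probabilistic classifier. The node features, success labels and threshold below are placeholders, not the paper's feature set or decision rules.

```python
# Sketch: ML-gated primal heuristic in branch and bound (placeholder features/labels).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Offline: node features vs. whether the heuristic found an improving solution there.
node_features = np.random.rand(1000, 8)
heuristic_succeeded = np.random.rand(1000) < 0.3

model = LogisticRegression(max_iter=1000).fit(node_features, heuristic_succeeded)

def should_run_heuristic(features, threshold=0.5):
    """Run the (expensive) heuristic only when success looks likely enough."""
    prob = model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return prob >= threshold

print(should_run_heuristic(np.random.rand(8)))
```
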