
Journal articles on the topic 'Graph modification problem'

Consult the top 50 journal articles for your research on the topic 'Graph modification problem'.

1

Sritharan, R. "Graph modification problem for some classes of graphs." Journal of Discrete Algorithms 38-41 (May 2016): 32–37. http://dx.doi.org/10.1016/j.jda.2016.06.003.

2

Asahiro, Yuichi, Jesper Jansson, Eiji Miyano, Hirotaka Ono, and T. P. Sandhya. "Graph Orientation with Edge Modifications." International Journal of Foundations of Computer Science 32, no. 02 (January 30, 2021): 209–33. http://dx.doi.org/10.1142/s012905412150012x.

Abstract:
The goal of an outdegree-constrained edge-modification problem is to find a spanning subgraph or supergraph H of an input undirected graph G such that either: (Type I) the number of edges in H is minimized or maximized and H can be oriented to satisfy some specified constraints on the vertices' resulting outdegrees; or (Type II) among all subgraphs or supergraphs of G that can be constructed by deleting or inserting a fixed number p of edges, H admits an orientation optimizing some objective involving the vertices' outdegrees. This paper introduces eight new outdegree-constrained edge-modification problems related to load balancing, called (Type I) MIN-DEL-MAX, MIN-INS-MIN, MAX-INS-MAX, and MAX-DEL-MIN, and (Type II) p-DEL-MAX, p-INS-MIN, p-INS-MAX, and p-DEL-MIN. In each of the eight problems, the input is a graph and the goal is to delete or insert edges so that the resulting graph has an orientation in which the maximum outdegree (taken over all vertices) is small or the minimum outdegree is large. We first present a framework that yields polynomial-time algorithms for all eight problems on unweighted graphs. Next, we investigate the inapproximability of the edge-weighted versions of the problems and design polynomial-time algorithms for six of the problems on edge-weighted trees.
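For intuition about the quantity these problems optimize, a tiny brute-force checker (my own illustration, not the paper's polynomial-time framework) can compute the minimum achievable maximum outdegree by trying both directions of every edge:

```python
from itertools import product

def min_max_outdegree(n, edges):
    """Exhaustively try both directions for every edge and return the
    smallest achievable maximum outdegree (illustration only: O(2^m))."""
    best = n  # an outdegree can never reach n
    for dirs in product([0, 1], repeat=len(edges)):
        out = [0] * n
        for (u, v), d in zip(edges, dirs):
            out[u if d == 0 else v] += 1
        best = min(best, max(out))
    return best

# A 4-cycle can be oriented as a directed cycle: maximum outdegree 1.
print(min_max_outdegree(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → 1
```

The exponential enumeration is only meant to make the objective concrete; the paper's algorithms avoid it.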
3

IVANOV, DENIS, and ALEXANDER SEMENOV. "TOPOLOGICAL APPROACH TO BLACKHOLES ANOMALY DETECTION IN DIRECTED NETWORKS." Computational nanotechnology 7, no. 2 (June 30, 2020): 49–57. http://dx.doi.org/10.33693/2313-223x-2020-7-2-49-57.

Abstract:
In this paper we consider the problem of finding the blackhole pattern in directed unweighted graphs. The problem statement is the same as in the original paper, published in 2010 by researchers from the University of New Jersey. The paper contributes to special graph pattern matching; the results can be used for anomaly detection in finance, natural disasters, and urban analysis. This paper aims to propose a novel algorithm for blackhole mining that takes the inner structure of the "blackhole" pattern into account and exploits this knowledge for more efficient mining. We review previously published solutions and test them on larger graphs containing up to 1 million nodes. In particular, the iBlackhole algorithm and its divide-and-conquer modification iBlackholeDC are considered, and their weak spots are highlighted and reviewed. Graph condensation is introduced as an efficient preprocessing step for the problem. We provide theorems and definitions describing the inner structure of the blackhole pattern. Based on the new theorems, a new approach to candidate enumeration is introduced, together with rules and heuristics for faster candidate filtering; these utilize a topological sorting of the graph and the notion of a "special" node, which is also introduced in this paper and whose properties are described. We propose a novel TopSortBH algorithm, consisting of graph condensation, candidate enumeration, and heuristics for candidate filtering. The algorithm has a modification called FastSkip, which allows a more aggressive filtering strategy in time-sensitive cases. All mentioned algorithms are implemented and tested on an IBM Power8 based system. Experimental results show the efficiency of condensation as graph preprocessing for the problem. A strong advantage in the number of blackholes found is demonstrated for TopSortBH in comparison to iBlackholeDC on RMAT, SSCA2, and UR graphs containing up to 1 million nodes.
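The graph condensation mentioned above is standard strongly-connected-component contraction. A compact sketch (my own, using Kosaraju's two-pass method rather than whatever implementation the authors used) looks like:

```python
def condense(n, edges):
    """Collapse strongly connected components (Kosaraju) into single
    vertices; returns (component id per vertex, condensed edge set)."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    order, seen = [], [False] * n
    for s in range(n):              # first pass: record finish-time order
        if seen[s]:
            continue
        stack = [(s, iter(adj[s]))]
        seen[s] = True
        while stack:
            v, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(v)
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(adj[nxt])))

    comp, c = [-1] * n, 0           # second pass on the reversed graph
    for s in reversed(order):
        if comp[s] != -1:
            continue
        stack, comp[s] = [s], c
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    cedges = {(comp[u], comp[v]) for u, v in edges if comp[u] != comp[v]}
    return comp, cedges

# 0→1→2→0 form one strongly connected component; 3 hangs off it.
comp, cedges = condense(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
print(len(set(comp)), cedges)  # → 2 {(0, 1)}
```

Running the pattern-mining search on the condensed graph is what makes the preprocessing pay off: the number of candidate node sets shrinks with the number of components.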
4

Skakalina, E. "APPLICATION OF ANT OPTIMIZATION ALGORITHMS IN THE SOLUTION OF THE ROUTING PROBLEM." Системи управління, навігації та зв’язку. Збірник наукових праць 6, no. 58 (December 28, 2019): 75–83. http://dx.doi.org/10.26906/sunz.2019.6.075.

Abstract:
The article discusses current issues in the use of evolutionary algorithms for solving the routing problem. Ant algorithms (MA), like most types of evolutionary algorithms, are based on a population of potential solutions and are designed for combinatorial optimization problems, first of all the search for various paths on graphs. Cooperation between individuals (artificial ants) is implemented on the basis of stigmergy modeling. Each agent, called an artificial ant, searches for a solution to the problem: artificial ants build a solution incrementally by moving around the graph, laying down pheromone and, when choosing the next section of the path, taking the concentration of this pheromone into account. The higher the pheromone concentration on a subsequent section, the greater the likelihood of its being chosen. Since MA are based on the movement of ants along paths, they are effective first of all for problems that can be interpreted in the form of a graph. Computer experiments showed that the efficiency of MA increases with the dimension of the problem, and for tasks on high-dimensional graphs they work faster than other evolutionary algorithms. Good results were also noted in solving non-stationary problems on graphs with a changing environment. Accordingly, an implementation of the metaheuristic method is proposed as a modification of ant optimization algorithms. The scheme of the system is presented, and a software product specification is also provided.
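The pheromone mechanics described above fit in a few lines. This toy implementation (my own, with arbitrary parameter choices, not the article's modified algorithm) searches for a short path on a small directed graph:

```python
import random

def aco_shortest(adj, src, dst, n_ants=20, n_iter=50, rho=0.5, seed=1):
    """Toy ant colony optimisation for a shortest path; `adj` maps a node
    to {neighbour: edge_length}. Illustrative only, not tuned."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in adj for v in adj[u]}   # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            path, node, visited = [src], src, {src}
            while node != dst:
                cand = [v for v in adj[node] if v not in visited]
                if not cand:                  # dead end: abandon this ant
                    path = None
                    break
                # attraction = pheromone / edge length
                w = [tau[(node, v)] / adj[node][v] for v in cand]
                node = rng.choices(cand, weights=w)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(adj[a][b] for a, b in zip(path, path[1:]))
                tours.append((length, path))
                if length < best_len:
                    best_len, best = length, path
        for k in tau:                          # evaporation
            tau[k] *= 1 - rho
        for length, path in tours:             # deposit, stronger on short tours
            for a, b in zip(path, path[1:]):
                tau[(a, b)] += 1.0 / length
    return best, best_len

adj = {0: {1: 1, 2: 5, 3: 3}, 1: {3: 1}, 2: {3: 5}, 3: {}}
print(aco_shortest(adj, 0, 3))
```

On this graph the shortest route 0→1→3 (length 2) dominates the pheromone trail after a few iterations.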
5

NASTOS, JAMES, and YONG GAO. "BOUNDED SEARCH TREE ALGORITHMS FOR PARAMETRIZED COGRAPH DELETION: EFFICIENT BRANCHING RULES BY EXPLOITING STRUCTURES OF SPECIAL GRAPH CLASSES." Discrete Mathematics, Algorithms and Applications 04, no. 01 (March 2012): 1250008. http://dx.doi.org/10.1142/s1793830912500085.

Abstract:
Many fixed-parameter tractable algorithms using a bounded search tree have been repeatedly improved, often by describing a larger number of branching rules involving an increasingly complex case analysis. We introduce a novel and general search strategy that branches on the forbidden subgraphs of a graph class relaxation. By using the class of P4-sparse graphs as the relaxed graph class, we obtain efficient bounded search tree algorithms for several parametrized deletion problems. We give the first non-trivial bounded search tree algorithms for the cograph edge-deletion problem and the trivially perfect edge-deletion problem. For the cograph vertex-deletion problem, a refined analysis of the runtime of our simple bounded search algorithm gives a faster exponential factor than those algorithms designed with the help of complicated case distinctions and non-trivial running time analysis [R. Niedermeier and P. Rossmanith, An efficient fixed-parameter algorithm for 3-hitting set, J. Discrete Algorithms 1(1) (2003) 89–102] and computer-aided branching rules [J. Gramm, J. Guo, F. Hüffner and R. Niedermeier, Automated generation of search tree algorithms for hard graph modification problems, Algorithmica 39(4) (2004) 321–347].
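The branching idea is easy to state in code: cographs are exactly the P4-free graphs, so a bounded search tree for cograph vertex deletion finds an induced P4 and branches on deleting each of its four vertices. This naive sketch (branching factor 4 with brute-force P4 detection; the paper's algorithms branch more cleverly via the P4-sparse relaxation) illustrates the mechanism:

```python
from itertools import combinations, permutations

def find_p4(adj):
    """Return an induced P4 (path on 4 vertices) or None. Brute force,
    fine for the small graphs this sketch is meant for."""
    for quad in combinations(adj, 4):
        for a, b, c, d in permutations(quad):
            if (b in adj[a] and c in adj[b] and d in adj[c]
                    and c not in adj[a] and d not in adj[a]
                    and d not in adj[b]):
                return (a, b, c, d)
    return None

def cograph_vertex_deletion(adj, k):
    """Can deleting at most k vertices make the graph a cograph?
    Bounded search tree: branch on the four vertices of some induced P4."""
    p4 = find_p4(adj)
    if p4 is None:
        return True                     # already P4-free, i.e. a cograph
    if k == 0:
        return False
    for v in p4:
        rest = {u: adj[u] - {v} for u in adj if u != v}
        if cograph_vertex_deletion(rest, k - 1):
            return True
    return False

# The path 0-1-2-3 is itself a P4: one deletion suffices, zero do not.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(cograph_vertex_deletion(path, 0), cograph_vertex_deletion(path, 1))  # → False True
```

The search tree has depth at most k, which is what makes the problem fixed-parameter tractable in k.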
6

Zolotarev, I. A., and V. A. Rasskazova. "Modification of Algorithm of Directed Graph Paths Decomposition with Schedule." Моделирование и анализ данных 11, no. 2 (2021): 51–58. http://dx.doi.org/10.17759/mda.2021110203.

Abstract:
This study aims to refine the oriented graph paths decomposition algorithm. An additional constraint on the balance parameter is considered in order to take the locomotive departure schedule into account. A new rule for computing the Ns parameters is also given, and an example of the operation of the oriented graph paths decomposition algorithm with the schedule is provided. The scientific and practical novelty of the work lies in a significant reduction in the dimension of the original problem, which is especially important for transport networks of complex topology.
7

Albijanic, Miloljub. "Some properties of convex functions." Filomat 31, no. 10 (2017): 3015–21. http://dx.doi.org/10.2298/fil1710015a.

Abstract:
We treat two problems on convex functions of one real variable. The first concerns properties of tangent lines to the graph of a convex function and is essentially related to questions about the first derivative (if it exists). The second problem is related to Schwarz's derivative, in fact its upper-limit modification, which gives an interesting characterization of convex functions. Let us recall the definition of a convex function.
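As a reminder of the tangent-line property the abstract alludes to, a standard textbook statement (not taken from the paper itself) is that a differentiable function is convex exactly when its graph lies above every tangent line:

```latex
% Tangent-line characterization of convexity (standard fact):
% for f differentiable on an interval I,
\[
  f \text{ convex on } I
  \iff
  f(y) \ge f(x) + f'(x)\,(y - x) \quad \text{for all } x, y \in I .
\]
```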
8

Panić, Biljana, Nataša Kontrec, Mirko Vujošević, and Stefan Panić. "A Novel Approach for Determination of Reliability of Covering a Node from K Nodes." Symmetry 12, no. 9 (September 5, 2020): 1461. http://dx.doi.org/10.3390/sym12091461.

Abstract:
In this paper, a stochastic problem of multicenter location on a graph is formulated through a modification of the existing p-center problem: determine the locations of a given number of facilities so as to maximize the reliability of supplying the system. The system is represented by a graph whose nodes are the locations of demand and the potential facilities, while the weights of the arcs represent reliability, i.e., the probability that the corresponding branch is available. First, k facility locations are randomly determined. Using a modified Dijkstra's algorithm, the elementary path of maximal reliability for every demand node is determined. Then a graph of all elementary paths for each demand node is formed. Finally, a new algorithm for calculating the reliability of covering a node from k nodes (k-covering reliability) is formulated.
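The "modified Dijkstra" step, choosing paths that maximize a product of availability probabilities rather than minimize a sum of lengths, can be sketched as follows (an illustrative reconstruction under that reading of the abstract, not the authors' code):

```python
import heapq

def most_reliable_path(adj, src, dst):
    """Dijkstra variant: edge weights are availability probabilities in
    (0, 1]; maximise the product along the path instead of a sum.
    `adj` maps node -> {neighbour: probability}."""
    rel, prev, done = {src: 1.0}, {}, set()
    heap = [(-1.0, src)]                  # max-heap via negated reliability
    while heap:
        r, u = heapq.heappop(heap)
        r = -r
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, p in adj[u].items():
            if rel.get(v, 0.0) < r * p:   # found a more reliable route to v
                rel[v] = r * p
                prev[v] = u
                heapq.heappush(heap, (-rel[v], v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], rel[dst]

adj = {0: {1: 0.9, 2: 0.5}, 1: {3: 0.9}, 2: {3: 0.99}, 3: {}}
print(most_reliable_path(adj, 0, 3))  # → ([0, 1, 3], 0.81...)
```

The greedy argument carries over from ordinary Dijkstra because multiplying by probabilities in (0, 1] can only shrink reliability, just as adding nonnegative lengths can only grow distance.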
9

Karim, A., C. Sueur, and G. Dauphin-Tanguy. "Non-regular static state feedback for linear bond graph models." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 217, no. 2 (March 1, 2003): 61–71. http://dx.doi.org/10.1177/095965180321700201.

Abstract:
In this paper, non-regular static state feedback solutions of the row-by-row decoupling problem (RBRDP) with stability are investigated by the bond graph approach for the class of non-square linear models. More precisely, the aim is to study the design of a non-regular static state feedback, first when the model is not decouplable with a regular static state feedback, and second when the model is decouplable with a regular static state feedback but not with stability. The bond graph methodology is used to characterize the structure of non-square models and the modification of the infinite structure that arises when a regular control law cannot exist. A new graphical methodology is proposed. Finally, when row-by-row decoupling with stability is possible for non-square bond graph models, a formal expression of the control is given using the bond graph and geometric approaches simultaneously.
10

Phillips, Charles, Kai Wang, Erich Baker, Jason Bubier, Elissa Chesler, and Michael Langston. "On Finding and Enumerating Maximal and Maximum k-Partite Cliques in k-Partite Graphs." Algorithms 12, no. 1 (January 15, 2019): 23. http://dx.doi.org/10.3390/a12010023.

Abstract:
Let k denote an integer greater than 2, let G denote a k-partite graph, and let S denote the set of all maximal k-partite cliques in G. Several open questions concerning the computation of S are resolved. A straightforward and highly scalable modification to the classic recursive backtracking approach of Bron and Kerbosch is first described and shown to run in O(3^(n/3)) time. A series of novel graph constructions is then used to prove that this bound is best possible in the sense that it matches an asymptotically tight upper limit on |S|. The task of identifying a vertex-maximum element of S is also considered and, in contrast with the k = 2 case, shown to be NP-hard for every k ≥ 3. A special class of k-partite graphs that arises in the context of functional genomics and other problem domains is studied as well and shown to be more readily solvable via a polynomial-time transformation to bipartite graphs. Applications, limitations, potentials for faster methods, heuristic approaches, and alternate formulations are also addressed.
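For reference, the classic Bron–Kerbosch recursion with pivoting that the paper modifies looks like this (standard textbook version, without the k-partite adaptation):

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate all maximal cliques: R is the current clique, P the
    candidates, X the excluded vertices; `adj` maps vertex -> neighbour set."""
    if not P and not X:
        out.append(set(R))              # R is maximal
        return
    # pivot on a vertex covering many candidates to prune branches
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in list(P - adj[pivot]):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

# Triangle 0-1-2 plus pendant edge 2-3: maximal cliques {0,1,2} and {2,3}.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(map(sorted, cliques)))  # → [[0, 1, 2], [2, 3]]
```

The O(3^(n/3)) worst case of this recursion matches the Moon–Moser bound on the number of maximal cliques, which is why the paper's matching bound for the k-partite variant is tight.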
11

Vatutin, Eduard. "Comparison of Decisions Quality of Heuristic Methods with Limited Depth-First Search Techniques in the Graph Shortest Path Problem." Open Engineering 7, no. 1 (December 29, 2017): 428–34. http://dx.doi.org/10.1515/eng-2017-0041.

Abstract:
The article deals with analyzing the effectiveness of heuristic methods that use limited depth-first search techniques in the test problem of finding the shortest path in a graph. It briefly describes the group of methods based on limiting the number of branches of the combinatorial search tree and limiting the depth of the analyzed subtree. The methodology for comparing experimental data to estimate solution quality is based on computational experiments, using the BOINC platform, with samples of pseudo-random graphs of selected vertex and arc counts. The experimental results identify the areas of preferable usage for the selected subset of heuristic methods, depending on the size of the problem and the strength of the constraints. It is shown that the considered pair of methods is ineffective for the selected problem and significantly inferior, in solution quality, to the ant colony optimization method and its modification with combinatorial returns.
12

WANG, SHYUE-LIANG, YU-CHUAN TSAI, HUNG-YU KAO, I.-HSIEN TING, and TZUNG-PEI HONG. "SHORTEST PATHS ANONYMIZATION ON WEIGHTED GRAPHS." International Journal of Software Engineering and Knowledge Engineering 23, no. 01 (February 2013): 65–79. http://dx.doi.org/10.1142/s0218194013400056.

Abstract:
Due to the proliferation of online social networking, a large amount of personal data is publicly available. As such, personal attacks and reputational, financial, or family losses might occur once this personal and sensitive information falls into the hands of malicious hackers. Research on privacy-preserving network publishing has attracted much attention in recent years, but most work focuses on node de-identification and link protection. In academic social networks, business transaction networks, transportation networks, etc., node identities and link structures are public knowledge, but weights and shortest paths are sensitive. In this work, we study the problem of k-anonymous path privacy: a published network graph with k-anonymous path privacy has at least k indistinguishable shortest paths between the source and destination vertices [21]. To achieve such privacy, three different strategies for modifying the edge weights of directed graphs are proposed. Numerical comparisons show that the weight-proportional-based strategy is more efficient than the PageRank-based and degree-based strategies. In addition, it is also more efficient, and causes less information loss, than running on undirected graphs.
13

Wamiliana, Wamiliana, M. Usman, Warsito Warsito, Warsono Warsono, and Jamal I. Daoud. "USING MODIFICATION OF PRIM’S ALGORITHM AND GNU OCTAVE AND TO SOLVE THE MULTIPERIODS INSTALLATION PROBLEM." IIUM Engineering Journal 21, no. 1 (January 20, 2020): 100–112. http://dx.doi.org/10.31436/iiumej.v21i1.1088.

Abstract:
The Minimum Spanning Tree (MST) is one of the famous problems used as the backbone of many network design problems. Given a graph G(V,E), where V is the set of vertices and E is the set of edges connecting vertices in V, every edge eij has an associated weight cij ≥ 0. The Multi-Period Degree Constrained Minimum Spanning Tree (MPDCMST) is the problem of finding an MST while also respecting the degree constraint on every vertex and satisfying the vertex installation requirements in every period. Two algorithms (WWM1 and WWM2) are proposed for solving this problem. GNU Octave is used for coding and visualization. GNU is a recursive acronym for "GNU's Not Unix!", a name chosen because GNU is like Unix but, unlike Unix, is free and contains no Unix code. The algorithms were tested on 300 randomly generated problems. Moreover, we compare the WWM1 and WWM2 algorithms using existing data from the literature, and the results show that WWM2 is the best.
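The base algorithm being modified here is textbook Prim. A minimal sketch (without the degree and multi-period constraints the paper adds) is:

```python
import heapq

def prim_mst(adj, root=0):
    """Textbook Prim: grow a tree from `root`, always adding the cheapest
    edge leaving the tree. `adj` maps node -> {neighbour: weight}."""
    in_tree, edges, total = {root}, [], 0
    heap = [(w, root, v) for v, w in adj[root].items()]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                      # stale entry: v joined already
        in_tree.add(v)
        edges.append((u, v, w))
        total += w
        for nxt, nw in adj[v].items():
            if nxt not in in_tree:
                heapq.heappush(heap, (nw, v, nxt))
    return edges, total

adj = {0: {1: 1, 2: 4}, 1: {0: 1, 2: 2}, 2: {0: 4, 1: 2}}
print(prim_mst(adj))  # → ([(0, 1, 1), (1, 2, 2)], 3)
```

The degree-constrained multi-period variant would additionally reject a popped edge when it violates a vertex's degree bound or the period's installation requirements; that check is where algorithms like WWM1 and WWM2 differ.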
14

Lu, Chao, Lei-shan Zhou, Yi-xiang Yue, and Ran Chen. "A Branch and Bound Algorithm for the Exact Solution of the Problem of EMU Circulation Scheduling in Railway Network." Mathematical Problems in Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/8537089.

Abstract:
This paper is concerned with the scheduling of Electrical Multiple Units (EMUs) under the condition of their utilization on one sector or within several interacting sectors. Based on the train connection graph, which describes the possible connection relationships between trains, an integer programming model of EMU circulation planning is constructed. To analyze the resolution of the model, a heuristic that shares characteristics with existing methods is introduced first. This method consists of two stages: a greedy strategy constructs a fragment of a feasible circulation plan, and a stochastic disturbance is then applied to it to generate a whole feasible solution or obtain a new one. Then, an exact branch and bound method based on graph design is proposed. Due to the complexity, the lower bound is computed through a polynomial approximation algorithm, a modification of one that solves the degree-constrained minimum 1-tree problem. A branching strategy is then designed to cope with the maintenance constraints. Finally, we report extensive computational results on a railway corridor whose sectors possess the basic features of railway networks.
15

Huang, Qiuping, Liangye He, Derek F. Wong, and Lidia S. Chao. "Chinese Unknown Word Recognition for PCFG-LA Parsing." Scientific World Journal 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/959328.

Abstract:
This paper investigates the recognition of unknown words in Chinese parsing. Two methods are proposed to handle this problem. The first is a modification of a character-based model: we model the emission probability of an unknown word using the first and last characters in the word, aiming to reduce the POS-tag ambiguities of unknown words and thus improve parsing performance. In addition, a novel method using graph-based semisupervised learning (SSL) is proposed to improve the syntactic parsing of unknown words. Its goal is to discover additional lexical knowledge from a large amount of unlabeled data to help the parser. The method propagates lexical emission probabilities to unknown words by building similarity graphs over the words of the labeled and unlabeled data; the derived distributions are incorporated into the parsing process. The proposed methods are effective in dealing with unknown words, and empirical results on the Penn Chinese Treebank and the TCT Treebank reveal their effectiveness.
16

Karsaev, O. V. "Modification of the CGR-Algorithm on Data Routing in a Communication Network of Satellite Constellation." Mekhatronika, Avtomatizatsiya, Upravlenie 21, no. 2 (February 10, 2020): 75–85. http://dx.doi.org/10.17587/mau.21.75-85.

Abstract:
Communication networks in space systems that use satellite constellations are DTN networks (Delay- and Disruption-Tolerant Networks). The establishment of communication channels in space networks has a certain specificity: communication channels can be planned. In this regard, the CGR (Contact Graph Routing) approach is considered the most promising solution to the data routing problem. This approach builds on a computed contact plan: from this plan, the network nodes compute contact graphs, which are used to search for the shortest data transmission routes. The paper proposes two interrelated solutions as a modification of this approach: route search based directly on the contact plan, i.e., without computing and using the contact graph, and an adaptive method of finding the set of shortest routes required for routing. The essence of the first solution is as follows. In the standard CGR approach, the graph vertices correspond to the planned contacts between the network nodes, and the edges correspond to the data storage processes in the nodes. In contrast, in the proposed approach the vertices of the graph correspond to the nodes of the network, and the edges of the graph and their weights are determined dynamically, in the process of finding the shortest routes. The second solution is based on the concept of the planning front, i.e., a list of the contacts closest in time. The required routes are divided into a number of pools, each pool combining the routes that use a specified contact from the planning front. The planning front is updated in two cases. If the network topology changes, completed or unestablished contacts are replaced by subsequent ones, closest in time, with the same network nodes. If message traffic grows, the planning front is extended and additional route pools are used. 
The article concludes with a description and justification of the expected advantages of the proposed approach.
17

Druzhitskiy, I. S., and D. E. Bekasov. "Application of Majorization-Minimization Method to Chan–Vese Algorithm in the Image Segmentation Problem." Herald of the Bauman Moscow State Technical University. Series Instrument Engineering, no. 6 (129) (December 2019): 19–29. http://dx.doi.org/10.18698/0236-3933-2019-6-19-29.

Abstract:
The purpose of the study was to modify the Chan–Vese algorithm in order to overcome its shortcomings, such as high computational complexity and the use of approximations. In the considered modification, optimization is carried out by the majorization-minimization method, whose main idea is to reduce the complexity of the problem using a majorizing surrogate function. The proposed optimization method makes it possible to use the Heaviside step function and the Dirac delta function directly. This enables the same or better saturation levels, with optimization done by the graph cut method, in a smaller number of iterations, which reduces the running time. The proposed algorithm was tested on the Caltech101 dataset. The algorithm is general, does not depend on the subject area, and does not require prior training, which allows it to be used as the basis for a wide range of image segmentation algorithms.
18

Mandal, Ashis Kumar, M. N. M. Kahar, and Graham Kendall. "Addressing Examination Timetabling Problem Using a Partial Exams Approach in Constructive and Improvement." Computation 8, no. 2 (May 17, 2020): 46. http://dx.doi.org/10.3390/computation8020046.

Abstract:
The paper investigates a partial exam assignment approach to solving the examination timetabling problem. Current approaches schedule all of the exams into time slots and rooms (i.e., produce an initial solution) and then improve the initial solution over a predetermined number of iterations. We propose a modification of this process that schedules a partially selected set of exams into time slots and rooms and then improves this partial solution vector; the process continues with the next batch of exams until all exams are scheduled. The approach combines partial graph heuristic orderings with a modified great deluge algorithm (PGH-mGD). The PGH-mGD approach is tested on two benchmark datasets: the capacitated examination dataset from the 2nd International Timetabling Competition (ITC2007) and the uncapacitated Toronto examination dataset. Experimental results show that PGH-mGD produces quality solutions competitive with those of previous approaches reported in the scientific literature.
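The great deluge acceptance rule at the heart of mGD is simple to sketch. This generic version (my own toy on a 1-D cost function, not PGH-mGD itself) accepts any candidate that beats the current solution or stays below a slowly "drying" water level:

```python
import random

def great_deluge(cost, neighbour, x0, decay=0.999, iters=20000, seed=0):
    """Great deluge: accept a candidate if it improves on the current
    solution OR lies below the water level; the level decays each step."""
    rng = random.Random(seed)
    x, best = x0, x0
    level = cost(x0)                   # initial water level
    for _ in range(iters):
        y = neighbour(x, rng)
        if cost(y) <= cost(x) or cost(y) <= level:
            x = y
            if cost(x) < cost(best):
                best = x
        level *= decay                 # lower the water level
    return best

# Toy problem: minimise (x - 3)^2 over the integers with +-1 moves.
best = great_deluge(lambda x: (x - 3) ** 2,
                    lambda x, r: x + r.choice((-1, 1)), x0=40)
print(best)  # converges to 3
```

Unlike simulated annealing there is no acceptance probability to tune; the single decay rate controls how quickly worsening moves stop being accepted.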
19

Sedeño-Noda, Antonio. "Ranking One Million Simple Paths in Road Networks." Asia-Pacific Journal of Operational Research 33, no. 05 (October 2016): 1650042. http://dx.doi.org/10.1142/s0217595916500421.

Abstract:
In this paper, we address the problem of finding the K best paths connecting a given pair of nodes in a directed graph with arbitrary arc lengths. We introduce an algorithm to determine the K best paths in order of length when repeated nodes in the paths are allowed, implicitly determining the K shortest paths in a directed graph with n nodes and m arcs. Empirical results where the algorithm was used to compute one hundred million paths in USA road networks are reported. A non-trivial modification of the previous algorithm yields a method to compute paths without repeated nodes and to answer the following question: how many paths must be computed in practice to identify K simple paths using the previous algorithm? Based on an experiment computing one million paths in USA road networks, we find that the answer is usually O(K). The determination of a theoretical tight bound remains an open problem.
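The setting in which repeated nodes are allowed has a classic heap-based formulation (a generic sketch, not the paper's algorithm): repeatedly pop the cheapest partial walk and extend it, expanding each node at most K times.

```python
import heapq
from collections import defaultdict

def k_shortest_walk_lengths(adj, src, dst, K):
    """Lengths of the K shortest src->dst walks (node repeats allowed).
    Loopy Dijkstra: the i-th pop of a node is the i-th shortest walk to it,
    so each node needs to be expanded at most K times."""
    cnt = defaultdict(int)
    heap, out = [(0, src)], []
    while heap and len(out) < K:
        d, u = heapq.heappop(heap)
        if cnt[u] >= K:                # node already expanded K times
            continue
        cnt[u] += 1
        if u == dst:
            out.append(d)
        for v, w in adj[u].items():
            heapq.heappush(heap, (d + w, v))
    return out

# Two forward routes (lengths 2 and 4) plus a return arc 3->0 that
# allows longer looping walks.
adj = {0: {1: 1, 2: 2}, 1: {3: 1}, 2: {3: 2}, 3: {0: 1}}
print(k_shortest_walk_lengths(adj, 0, 3, 3))  # → [2, 4, 5]
```

Restricting to simple paths is what makes the problem harder; as the abstract notes, how many walks must be enumerated before K simple paths appear is exactly the question the authors probe experimentally.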
20

PERKOVIĆ, LJUBOMIR, and BRUCE REED. "AN IMPROVED ALGORITHM FOR FINDING TREE DECOMPOSITIONS OF SMALL WIDTH." International Journal of Foundations of Computer Science 11, no. 03 (September 2000): 365–71. http://dx.doi.org/10.1142/s0129054100000247.

Abstract:
We present a modification of Bodlaender's linear-time algorithm that, for constant k, determines whether an input graph G has treewidth k and, if so, constructs a tree decomposition of G of width at most k. Our algorithm has the following additional feature: if G has treewidth greater than k, then a subgraph G′ of G of treewidth greater than k is returned along with a tree decomposition of G′ of width at most 2k. A consequence is that the fundamental disjoint rooted paths problem can now be solved in O(n^2) time. This is the primary motivation of this paper.
21

Korotyaev, Evgeny, and Natalia Saburova. "Scattering on periodic metric graphs." Reviews in Mathematical Physics 32, no. 08 (February 13, 2020): 2050024. http://dx.doi.org/10.1142/s0129055x20500245.

Abstract:
We consider the Laplacian on a periodic metric graph and obtain its decomposition into a direct fiber integral in terms of the corresponding discrete Laplacian. Eigenfunctions and eigenvalues of the fiber metric Laplacian are expressed explicitly in terms of eigenfunctions and eigenvalues of the corresponding fiber discrete Laplacian and eigenfunctions of the Dirichlet problem on the unit interval. We show that all these eigenfunctions are uniformly bounded. We apply these results to the periodic metric Laplacian perturbed by real integrable potentials. We prove the following: (a) the wave operators exist and are complete, (b) the standard Fredholm determinant is well-defined and is analytic in the upper half-plane without any modification for any dimension, (c) the determinant and the corresponding S-matrix satisfy the Birman–Krein identity.
22

Faisol, Faisol, and Masdukil Makruf. "Distribusi Batik Madura Melalui Penerapan Generalized Vehicle Routing Problem (GVRP)." Jurnal Matematika "MANTIK" 3, no. 2 (October 28, 2017): 101–4. http://dx.doi.org/10.15642/mantik.2017.3.2.101-104.

Abstract:
The product distribution process is an effort to deliver a product into consumers' hands through a planned and programmed system. The cluster method groups the nearest market locations and then analyzes the locations of potential facilities through the center of gravity. GVRP (Generalized Vehicle Routing Problem) is one of the algorithms in the cluster method [1]; it describes route determination that minimizes the required distribution costs. GVRP is a generalization of VRP in which the vertices of the graph are partitioned into several sets of specific points, called clusters [2]. In this research, a modification of the GVRP model for the multi-capacity vehicle case determines the routes and minimizes the cost of distribution. The case study is UD. Damai Asih's distribution of Madura hand-written (tulis) batik to 25 districts in East Java. Running the model in MATLAB 7.8.0 yields a distribution cost efficiency of 8.71% relative to the initial cost, before clustering based on distance and maximum car capacity, of Rp. 6,969,480.00. After clustering based on distance and maximum car capacity, the obtained cost is Rp. 6,365,500.00. The highest efficiency is obtained in cluster four, while the lowest is obtained in cluster eight. The cost efficiency is due to the different mileage in the clustering process.
APA, Harvard, Vancouver, ISO, and other styles
23

Sharma, Vivek, Rakesh Kumar, and Sanjay Tyagi. "Optimized Solution of TSP (Travelling Salesman Problem) Based on Mendelian Inheritance." Recent Advances in Computer Science and Communications 13, no. 5 (November 5, 2020): 909–16. http://dx.doi.org/10.2174/2213275912666190617155828.

Full text
Abstract:
Background: The TSP has been part of the literature for many decades; it is an important optimization problem in operations research. TSP solutions always leave room for better results, especially when the chosen working field is Genetic Algorithms (GA). Objective: This paper presents a TSP solution that performs modified selection and crossover operations and takes advantage of Mendelian inheritance while producing the generations. Methods: GA has a very broad resolution scope for optimization problems and is capable of generating well-optimized results if the right GA technique is applied to the right point of the issue in a controlled manner. The proposed agenda is to utilize the GA concept for the TSP by applying Mendel's rules, which have never been applied before to this problem. The proposed scheme modifies the traditional Mendel process: in general, the full chromosome window is utilized in the Mendelian inheritance process, but the presented scheme utilizes the Most Significant Bits (MSB) only, which helps to control the convergence aptitude of the process. Results: The scheme uses an advanced modified Mendel operation that helps to control the convergence aptitude of the operation. It efficiently minimizes the total travelled distance of the graph, which was the ultimate objective of the problem and has been successfully achieved. Conclusion: The validation of the scheme is confirmed by the obtained results, which compare favourably with the traditional TSP-GA.
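The paper's MSB-restricted Mendel operator is not reproduced here; the sketch below only illustrates the generic GA-for-TSP loop such schemes build on (truncation selection, order crossover, swap mutation). The city coordinates and parameters are hypothetical.

```python
import math
import random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """Classic OX: copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_tsp(cities, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(range(len(cities)), len(cities)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, cities))
        elite = pop[:pop_size // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            c = order_crossover(*rng.sample(elite, 2), rng)
            i, j = rng.sample(range(len(c)), 2)      # swap mutation
            c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, cities))

# Hypothetical instance: four cities on a square (optimal tour = perimeter 8)
cities = [(0.0, 0.0), (0.0, 2.0), (2.0, 2.0), (2.0, 0.0)]
best = ga_tsp(cities)
```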
APA, Harvard, Vancouver, ISO, and other styles
24

Melo, Andre, Johanna Völker, and Heiko Paulheim. "Type Prediction in Noisy RDF Knowledge Bases Using Hierarchical Multilabel Classification with Graph and Latent Features." International Journal on Artificial Intelligence Tools 26, no. 02 (April 2017): 1760011. http://dx.doi.org/10.1142/s0218213017600119.

Full text
Abstract:
Semantic Web knowledge bases, in particular large cross-domain datasets, are often noisy, incorrect, and incomplete with respect to type information. This incompleteness can be reduced, as previous work shows, with automatic type prediction methods. Most knowledge bases contain an ontology defining a type hierarchy, and, in general, entities are allowed to have multiple types (classes of an instance assigned with the rdf:type relation). In this paper, we exploit these characteristics and formulate the type prediction problem as hierarchical multilabel classification, where the labels are types. We evaluate different sets of features, including entity embeddings, which can be extracted from the knowledge graph exclusively. We propose SLCN, a modification of the local classifier per node approach, which performs feature selection, instance sampling, and class balancing for each local classifier with the objective of improving scalability. Furthermore, we explore different variants of creating features for the classifiers, including both graph and latent features. We compare the performance of our proposed method with a state-of-the-art type prediction approach and popular hierarchical multilabel classifiers, and report on experiments with large-scale cross-domain RDF datasets.
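The local-classifier-per-node strategy that SLCN modifies can be sketched as a top-down traversal of the type hierarchy, where a child type is predicted only if its parent was. The hierarchy and threshold "classifiers" below are hypothetical stand-ins, not the paper's learned models.

```python
def predict_types(entity_features, hierarchy, classifiers, root="Thing"):
    """Top-down traversal: a child type is predicted only if its local
    classifier fires and its parent type was already predicted."""
    predicted = {root}
    frontier = [root]
    while frontier:
        node = frontier.pop()
        for child in hierarchy.get(node, []):
            if classifiers[child](entity_features):
                predicted.add(child)
                frontier.append(child)
    return predicted

# Hypothetical toy hierarchy and stub local classifiers
hierarchy = {"Thing": ["Agent", "Place"], "Agent": ["Person", "Organisation"]}
classifiers = {
    "Agent": lambda f: f["has_name"],
    "Place": lambda f: f["has_coords"],
    "Person": lambda f: f["has_birth_date"],
    "Organisation": lambda f: not f["has_birth_date"],
}
types = predict_types({"has_name": True, "has_coords": False,
                       "has_birth_date": True}, hierarchy, classifiers)
```

The traversal prunes whole subtrees whose parent classifier does not fire, which is what makes the per-node scheme scalable.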
APA, Harvard, Vancouver, ISO, and other styles
25

Madleňák, Radovan, Lucia Madleňáková, Jozef Štefunko, and Reiner Keil. "Multiple Approaches of Solving Allocation Problems on Postal Transportation Network in Conditions of Large Countries." Transport and Telecommunication Journal 17, no. 3 (September 1, 2016): 222–30. http://dx.doi.org/10.1515/ttj-2016-0020.

Full text
Abstract:
The article deals with optimizing the postal transportation network with two different optimization methods. The research adopted in this article uses allocation models from graph theory to obtain results for the addressed optimization problem. The article presents and compares two types of these models: the p-median model and the uncapacitated fixed charge facility location model. The aim of the p-median model is to find the location of P facilities in the network, serving all demands in a way that ensures the average transport cost is minimal. The fixed charge location model approaches the issue of facility location by minimizing the overall costs of implementing the selected variants. These two models are subsequently applied to the postal network to determine the optimal location of postal facilities, in the conditions of a large country with an area above 300,000 km2. Italy was chosen as a typical country that fits this condition. The underlying infrastructure of Italy is represented by a simplified model of a postal network, abstracted as a graph G = (V, E, c, w). The results can serve as a basis for modifying the used models for the simulation of networks in the postal sector, and as a key for comparing the opportunities and results of applying these two models in the conditions of large countries.
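The p-median objective described above can be illustrated by exhaustive search on a tiny instance (practical solvers use heuristics or integer programming). The distance matrix below is hypothetical.

```python
from itertools import combinations

def p_median(dist, p):
    """Exhaustive p-median: choose p facility sites minimising the total
    distance from every demand node to its nearest chosen site."""
    n = len(dist)
    best_sites, best_cost = None, float("inf")
    for sites in combinations(range(n), p):
        cost = sum(min(dist[i][s] for s in sites) for i in range(n))
        if cost < best_cost:
            best_sites, best_cost = sites, cost
    return best_sites, best_cost

# Hypothetical distance matrix for four nodes on a line 0-1-2-3
d = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
sites, cost = p_median(d, 2)
```

Minimising total (equivalently average) transport cost is exactly the p-median criterion the article applies to the postal network.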
APA, Harvard, Vancouver, ISO, and other styles
26

Yu, Cheng-Wei, Ben R. Hodges, and Frank Liu. "Automated Detection of Instability-Inducing Channel Geometry Transitions in Saint-Venant Simulation of Large-Scale River Networks." Water 13, no. 16 (August 17, 2021): 2236. http://dx.doi.org/10.3390/w13162236.

Full text
Abstract:
A new sweep-search algorithm (SSA) is developed and tested to identify the channel geometry transitions responsible for numerical convergence failure in a Saint-Venant equation (SVE) simulation of a large-scale open-channel network. Numerical instabilities are known to occur at “sharp” transitions in discrete geometry, but the identification of problem locations has been a matter of modeler’s art and a roadblock to implementing large-scale SVE simulations. The new method implements techniques from graph theory applied to a steady-state 1D shallow-water equation solver to recursively examine the numerical stability of each flowpath through the channel network. The SSA is validated with a short river reach and tested by the simulation of ten complete river systems of the Texas–Gulf Coast region by using the extreme hydrological conditions recorded during hurricane Harvey. The SSA successfully identified the problematic channel sections in all tested river systems. Subsequent modification of the problem sections allowed stable solution by an unsteady SVE numerical solver. The new SSA approach permits automated and consistent identification of problem channel geometry in large open-channel network data sets, which is necessary to effectively apply the fully dynamic Saint-Venant equations to large-scale river networks or for city-wide stormwater networks.
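The recursive examination of every flowpath through a channel network can be sketched as root-to-leaf path enumeration on a graph. The toy network below is hypothetical and omits the steady-state stability check the SSA applies to each path.

```python
def flowpaths(network, root):
    """Enumerate all root-to-leaf paths in a directed channel network."""
    children = network.get(root, [])
    if not children:
        return [[root]]
    return [[root] + rest for c in children for rest in flowpaths(network, c)]

# Hypothetical branching network: each node maps to its connected reaches
net = {"outlet": ["A", "B"], "A": ["A1", "A2"], "B": []}
paths = flowpaths(net, "outlet")
```

Each enumerated path could then be handed to a steady-state 1D solver to test numerical stability, flagging the geometry transitions where convergence fails.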
APA, Harvard, Vancouver, ISO, and other styles
27

Castiglioni, Matteo, Diodato Ferraioli, Nicola Gatti, and Giulia Landriani. "Election Manipulation on Social Networks: Seeding, Edge Removal, Edge Addition." Journal of Artificial Intelligence Research 71 (August 27, 2021): 1049–90. http://dx.doi.org/10.1613/jair.1.12826.

Full text
Abstract:
We focus on the election manipulation problem through social influence, where a manipulator exploits a social network to make her most preferred candidate win an election. Influence is due to information in favor of and/or against one or multiple candidates, sent by seeds and spreading through the network according to the independent cascade model. We provide a comprehensive theoretical study of the election control problem, investigating two forms of manipulation: seeding to buy influencers given a social network, and removing or adding edges in the social network given the set of seeds and the information sent. In particular, we study a wide range of cases distinguished by the number of candidates and the kind of information spread over the network. Our main result shows that the election manipulation problem is intractable in the worst case, even when one accepts an approximation of the optimal margin of victory, except for the case of seeding when the number of hard-to-manipulate voters is not too large and the number of uncertain voters is not too small, where we say that a voter who does not vote for the manipulator's candidate is hard-to-manipulate if there is no way to make her vote for this candidate, and uncertain otherwise. We also provide some results showing the hardness of the problems in special cases. More precisely, in the case of seeding, we show that the manipulation is hard even if the graph is a line, and that a large class of algorithms, including most of the approaches recently adopted for social-influence problems (e.g., greedy, degree centrality, PageRank, VoteRank), fails to compute a bounded approximation even on elementary networks, such as undirected graphs with every node having degree at most two, or directed trees.
In the case of edge removal or addition, our hardness results also apply to election manipulation when the manipulator has an unlimited budget, being allowed to remove or add an arbitrary number of edges, and to the basic case of social influence maximization/minimization in the restricted case of finite budget. Interestingly, our hardness results for seeding and edge removal/addition still hold in a re-optimization variant, where the manipulator already knows an optimal solution to the problem and computes a new solution once a local modification occurs, e.g., the removal/addition of a single edge.
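The independent cascade model through which influence spreads in this study can be sketched as follows; the adjacency lists are hypothetical, and the activation probability p is a free parameter.

```python
import random

def independent_cascade(graph, seeds, p, rng=None):
    """One stochastic spread under the independent cascade model: each
    newly activated node gets a single chance to activate each inactive
    neighbour with probability p."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

# Hypothetical line network stored as adjacency lists
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
spread = independent_cascade(g, seeds={0}, p=1.0)
```

Seeding changes the `seeds` set, while edge removal/addition changes `graph`, which is precisely the pair of manipulations the paper studies.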
APA, Harvard, Vancouver, ISO, and other styles
28

Chew Hernandez, M. L., E. K. Velazquez Hernandez, and S. Leon Dominguez. "A Decision-Analytic Feasibility Study of Upgrading Machinery at a Tools Workshop." Engineering, Technology & Applied Science Research 2, no. 2 (April 11, 2012): 182–89. http://dx.doi.org/10.48084/etasr.139.

Full text
Abstract:
This paper presents the evaluation, from a Decision Analysis point of view, of the feasibility of upgrading machinery at an existing metal-forming workshop. The Integral Decision Analysis (IDA) methodology is applied to clarify the decision and develop a decision model. One of the key advantages of the IDA is its careful selection of the problem frame, allowing a correct problem definition. While following most of the original IDA methodology, an addition to this methodology is proposed in this work, that of using the strategic Means-Ends Objective Network as a backbone for the development of the decision model. The constructed decision model uses influence diagrams to include factual operator and vendor expertise, simulation to evaluate the alternatives and a utility function to take into account the risk attitude of the decision maker. Three alternatives are considered: Base (no modification), CNC (installing an automatic lathe) and CF (installation of an automatic milling machine). The results are presented as a graph showing zones in which a particular alternative should be selected. The results show the potential of IDA to tackle technical decisions that are otherwise approached without the due care.
APA, Harvard, Vancouver, ISO, and other styles
29

Shamir, Ron, Roded Sharan, and Dekel Tsur. "Cluster graph modification problems." Discrete Applied Mathematics 144, no. 1-2 (November 2004): 173–82. http://dx.doi.org/10.1016/j.dam.2004.01.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Jiang, Chenhan, Shaoju Wang, Xiaodan Liang, Hang Xu, and Nong Xiao. "ElixirNet: Relation-Aware Network Architecture Adaptation for Medical Lesion Detection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11093–100. http://dx.doi.org/10.1609/aaai.v34i07.6765.

Full text
Abstract:
Most advances in medical lesion detection networks are limited to subtle modifications of conventional detection networks designed for natural images. However, there is a vast domain gap between medical images and natural images, and medical image detection often suffers from several domain-specific challenges, such as high lesion/background similarity, dominant tiny lesions, and severe class imbalance. Is a hand-crafted detection network tailored for natural images really good enough for a discrepant medical lesion domain? Are there more powerful operations, filters, and sub-networks that better fit the medical lesion detection problem waiting to be discovered? In this paper, we introduce a novel ElixirNet that includes three components: 1) TruncatedRPN balances positive and negative data for false-positive reduction; 2) Auto-lesion Block is automatically customized for medical images to incorporate relation-aware operations among region proposals, leading to more suitable and efficient classification and localization; 3) Relation transfer module incorporates the semantic relationship and transfers the relevant contextual information with an interpretable graph, thus alleviating the lack of annotations for all types of lesions. Experiments on DeepLesion and Kits19 prove the effectiveness of ElixirNet, achieving improvements in both sensitivity and precision over FPN with fewer parameters.
APA, Harvard, Vancouver, ISO, and other styles
31

Phetkaew, Thimaporn, Wanchai Rivepiboon, and Boonserm Kijsirikul. "Reordering Adaptive Directed Acyclic Graphs for Multiclass Support Vector Machines." Journal of Advanced Computational Intelligence and Intelligent Informatics 7, no. 3 (October 20, 2003): 315–21. http://dx.doi.org/10.20965/jaciii.2003.p0315.

Full text
Abstract:
The problem of extending binary support vector machines (SVMs) to multiclass classification is still an ongoing research issue. Ussivakul and Kijsirikul proposed the Adaptive Directed Acyclic Graph (ADAG) approach, which provides accuracy comparable to that of the standard algorithm, Max Wins, while requiring low computation. However, different sequences of nodes in the ADAG may provide different accuracy. In this paper we present a new method for multiclass classification, Reordering ADAG, which is a modification of the original ADAG method. We show examples demonstrating that the margin (the 2/|w| value) between the two classes of each binary SVM classifier affects the accuracy of classification, and that this margin indicates the magnitude of confusion between the two classes. We propose an algorithm to choose an optimal sequence of nodes in the ADAG by considering the |w| values of all classifiers to be used in data classification. We then compare our performance with previous methods, including the ADAG and the Max Wins algorithm. Experimental results demonstrate that our method gives higher accuracy. Moreover, it runs faster than Max Wins, especially when the number of classes and/or the number of dimensions is relatively large.
APA, Harvard, Vancouver, ISO, and other styles
32

Fahrenthold, E. P., and J. D. Wargo. "Lagrangian Bond Graphs for Solid Continuum Dynamics Modeling." Journal of Dynamic Systems, Measurement, and Control 116, no. 2 (June 1, 1994): 178–92. http://dx.doi.org/10.1115/1.2899209.

Full text
Abstract:
The limitations of existing continuum bond graph modeling techniques have effectively precluded their use in large order problems, where nonrepetitive graph structures and causal patterns are normally present. As a result, despite extensive publication of bond graph models for continuous systems simulations, bond graph methods have not offered a viable alternative to finite element analysis for the vast majority of practical problems. However, a new modeling approach combining Lagrangian (mass fixed) bond graphs with a selected finite element discretization scheme allows for direct simulation of a wide range of large order solid continuum dynamics problems. With appropriate modifications, including the use of Eulerian (space fixed) bond graphs, the method may be extended to include fluid dynamics modeling.
APA, Harvard, Vancouver, ISO, and other styles
33

Garcia-Hernandez, Carlos, Alberto Fernández, and Francesc Serratosa. "Learning the Edit Costs of Graph Edit Distance Applied to Ligand-Based Virtual Screening." Current Topics in Medicinal Chemistry 20, no. 18 (August 24, 2020): 1582–92. http://dx.doi.org/10.2174/1568026620666200603122000.

Full text
Abstract:
Background: Graph edit distance is a methodology used to solve error-tolerant graph matching. This methodology estimates a distance between two graphs by determining the minimum number of modifications required to transform one graph into the other. These modifications, known as edit operations, have an edit cost associated that has to be determined depending on the problem. Objective: This study focuses on the use of optimization techniques in order to learn the edit costs used when comparing graphs by means of the graph edit distance. Methods: Graphs represent reduced structural representations of molecules using pharmacophore-type node descriptions to encode the relevant molecular properties. This reduction technique is known as extended reduced graphs. The screening and statistical tools available on the ligand-based virtual screening benchmarking platform and the RDKit were used. Results: In the experiments, the graph edit distance using learned costs performed better or equally good than using predefined costs. This is exemplified with six publicly available datasets: DUD-E, MUV, GLL&GDD, CAPST, NRLiSt BDB, and ULS-UDS. Conclusion: This study shows that the graph edit distance along with learned edit costs is useful to identify bioactivity similarities in a structurally diverse group of molecules. Furthermore, the target-specific edit costs might provide useful structure-activity information for future drug-design efforts.
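A minimal illustration of graph edit distance, restricted to equally sized graphs and substitution-type edits only (the full formulation also allows node and edge insertions and deletions): the labelled toy graphs and unit edit costs below are hypothetical.

```python
from itertools import permutations

def ged_same_size(labels1, edges1, labels2, edges2, c_node=1.0, c_edge=1.0):
    """Edit cost between two equally sized labelled graphs, minimised over
    all node alignments; only substitution-type edits are costed."""
    best = float("inf")
    for perm in permutations(list(labels2)):
        mapping = dict(zip(labels1, perm))
        cost = sum(c_node for u in labels1
                   if labels1[u] != labels2[mapping[u]])
        mapped = {frozenset((mapping[u], mapping[v])) for u, v in edges1}
        real = {frozenset(e) for e in edges2}
        cost += c_edge * len(mapped ^ real)  # mismatched edges
        best = min(best, cost)
    return best

# Hypothetical molecule-like graphs: a labelled triangle vs. the same
# graph with one edge removed
tri_labels, tri_edges = {0: "C", 1: "C", 2: "O"}, [(0, 1), (1, 2), (2, 0)]
pat_labels, pat_edges = {0: "C", 1: "C", 2: "O"}, [(0, 1), (1, 2)]
d = ged_same_size(tri_labels, tri_edges, pat_labels, pat_edges)
```

Learning the edit costs, as the paper does, amounts to tuning `c_node` and `c_edge` (per label pair, in general) so that distances correlate with bioactivity similarity.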
APA, Harvard, Vancouver, ISO, and other styles
34

Chepurko, V. A., and A. N. Chernyaev. "Method of estimating the size of an SPTA with a safety stock." Dependability 21, no. 3 (September 21, 2021): 13–19. http://dx.doi.org/10.21683/1729-2646-2021-21-3-13-19.

Full text
Abstract:
Aim. To modify the classical method [1, 4] that causes incorrect estimation of the required size of SPTA in cases when the replacement rate of failed parts is comparable to the SPTA replenishment rate. The modification is based on the model of SPTA target level replenishment. The model considers two situations: with and without the capability to correct requests in case of required increase of the size of replenishment. The paper also aims to compare the conventional and adjusted solution and to develop recommendations for the practical application of the method of SPTA target level replenishment. Methods. Markovian models [2, 3, 5] are used for describing the system. The flows of events are simple. The final probabilities were obtained using the Kolmogorov equation. The Kolmogorov system of equations has a stationary solution. Classical methods of the probability theory and mathematical theory of dependability [6] were used. Conclusions. The paper improves upon the known method of estimating the required size of the SPTA with a safety stock. The paper theoretically substantiates the dependence of the rate of backward transitions on the graph state index. It is shown that in situations when the application is not adjusted, the rates of backward transitions from states in which the SPTA safety stock has been reached and exceeded should gradually increase as the stock continues to decrease. The multiplier will have a power-law dependence on the transition rate index. It was theoretically and experimentally proven that the classical method causes SPTA overestimation. Constraint (3) was theoretically derived, under which the problem is solved sufficiently simply using the classical methods. It was shown that if constraint (3) is not observed, mathematically, the value of the backward transition rate becomes uncertain. 
In this case, correct problem definition results in graphs with a linearly increasing number of states; thus, by default, the problem falls into the category of labour-intensive problems. If the limits are not observed, a simplifying assumption is made, under which a stationary solution of the problem has been obtained. It is shown that, under that assumption, the solution of the problem is conservative. It was shown that, if the application is adjusted, the rate of backward transition from the same states should gradually decrease as the stock diminishes. The multiplier will have a hyperbolic dependence on the transition rate index. This dependence results in a conservative solution of the problem of replenishment of SPTA with application adjustment. The paper defines the ratio that regulates the degree of conservatism. It is theoretically and experimentally proven that in such cases the classical method causes SPTA underestimation. A stationary solution of the problem of SPTA replenishment with application adjustment has been obtained. In both cases of application adjustment reporting, a criterion has been formulated for SPTA replenishment to a specified level. A comparative analysis of the methods was carried out.
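The stationary solution of such Markovian replenishment models follows from the balance form of the stationary Kolmogorov equations. For a generic birth-death chain the final probabilities can be computed as below; the demand and replenishment rates are hypothetical, not the paper's.

```python
def stationary_birth_death(lam, mu):
    """Final probabilities of a birth-death chain with forward rates
    lam[k] (state k -> k+1) and backward rates mu[k] (state k+1 -> k),
    using the balance form of the stationary Kolmogorov equations:
    p[k+1] = p[k] * lam[k] / mu[k]."""
    p = [1.0]
    for l, m in zip(lam, mu):
        p.append(p[-1] * l / m)
    total = sum(p)
    return [x / total for x in p]

# Hypothetical SPTA-like chain: demand rate 2, replenishment rate 3
probs = stationary_birth_death([2.0, 2.0, 2.0], [3.0, 3.0, 3.0])
```

Making the backward rates grow (or shrink) with the state index, as the paper prescribes, only changes the entries of `mu`; the same recursion still yields the final probabilities.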
APA, Harvard, Vancouver, ISO, and other styles
35

Baiche, Karim, Yassine Meraihi, Manolo Dulva Hina, Amar Ramdane-Cherif, and Mohammed Mahseur. "Solving Graph Coloring Problem Using an Enhanced Binary Dragonfly Algorithm." International Journal of Swarm Intelligence Research 10, no. 3 (July 2019): 23–45. http://dx.doi.org/10.4018/ijsir.2019070102.

Full text
Abstract:
The graph coloring problem (GCP) is one of the most interesting classical combinatorial optimization problems in graph theory. It is known to be an NP-Hard problem, so many heuristic algorithms have been employed to solve this problem. In this article, the authors propose a new enhanced binary dragonfly algorithm to solve the graph coloring problem. The binary dragonfly algorithm has been enhanced by introducing two modifications. First, the authors use the Gaussian distribution random selection method for choosing the right value of the inertia weight w used to update the step vector (∆X). Second, the authors adopt chaotic maps to determine the random parameters s, a, c, f, and e. The aim of these modifications is to improve the performance and the efficiency of the binary dragonfly algorithm and ensure the diversity of solutions. The authors consider the well-known DIMACS benchmark graph coloring instances to evaluate the performance of their algorithm. The simulation results reveal the effectiveness and success of the proposed algorithm in comparison with some well-known algorithms in the literature.
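The dragonfly metaheuristic itself is not reproduced here; the sketch below only shows two ingredients any GCP metaheuristic needs: a baseline colouring and a conflict count used as the penalty to minimise. The 4-cycle instance is hypothetical.

```python
def greedy_coloring(adj):
    """First-fit colouring: give each vertex the smallest colour not
    already used by one of its coloured neighbours."""
    colors = {}
    for v in adj:
        taken = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in taken)
    return colors

def conflicts(adj, colors):
    """Number of monochromatic edges -- the penalty a metaheuristic
    would drive to zero."""
    return sum(colors[u] == colors[v]
               for u in adj for v in adj[u] if u < v)

# Hypothetical 4-cycle, which is 2-colourable
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
cols = greedy_coloring(adj)
```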
APA, Harvard, Vancouver, ISO, and other styles
36

Jan, Naeem, Kifayat Ullah, Tahir Mahmood, Harish Garg, Bijan Davvaz, Arsham Saeid, and Said Broumi. "Some Root Level Modifications in Interval Valued Fuzzy Graphs and Their Generalizations Including Neutrosophic Graphs." Mathematics 7, no. 1 (January 10, 2019): 72. http://dx.doi.org/10.3390/math7010072.

Full text
Abstract:
Fuzzy graphs (FGs) and their generalizations have played an essential role in dealing with real-life problems involving uncertainties. The goal of this article is to show some serious flaws in the existing definitions of several root-level generalized FG structures with the help of some counterexamples. To achieve this, first, we aim to improve the existing definition for interval-valued FG, interval-valued intuitionistic FG and their complements, as these existing definitions are not well-defined; i.e., one can obtain some senseless intervals using the existing definitions. The limitations of the existing definitions and the validity of the new definitions are supported with some examples. It is also observed that the notion of a single-valued neutrosophic graph (SVNG) is not well-defined either. The consequences of the existing definition of SVNG are discussed with the help of examples. A new definition of SVNG is developed, and its improvement is demonstrated with some examples. The definition of an interval-valued neutrosophic graph is also modified due to the shortcomings in the current definition, and the validity of the new definition is proved. An application of proposed work is illustrated through a decision-making problem under the framework of SVNG, and its performance is compared with existing work.
APA, Harvard, Vancouver, ISO, and other styles
37

Yunlong Liu, Jianxin Wang, and Jiong Guo. "An overview of kernelization algorithms for graph modification problems." Tsinghua Science and Technology 19, no. 4 (August 2014): 346–57. http://dx.doi.org/10.1109/tst.2014.6867517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Bessy, Stéphane, Christophe Paul, and Anthony Perez. "Polynomial kernels for 3-leaf power graph modification problems." Discrete Applied Mathematics 158, no. 16 (August 2010): 1732–44. http://dx.doi.org/10.1016/j.dam.2010.07.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

CEDERBAUM, I. "SPECTRAL PROPERTIES OF ADMITTANCE MATRICES OF RESISTIVE TREE NETWORKS." Journal of Circuits, Systems and Computers 04, no. 01 (March 1994): 53–70. http://dx.doi.org/10.1142/s0218126694000041.

Full text
Abstract:
In this paper spectral properties of the admittance matrix of a resistive network whose underlying graph forms a general tree are studied. The algebraic presentation of the network is provided by its real node admittance matrix with respect to one of its terminal vertices, considered to be the root of the tree. The spectral properties of this matrix are studied by application of the theory of two-element-kind (R, C) networks. A mechanical analogue of a particular case of a similar problem, corresponding to a linear tree has been studied in the classical work of Gantmacher and Krein.7 Generalization of the study to networks based on trees of arbitrary structure calls for a modification of the mathematical approach. Instead of polynomial Sturm sequences applied in Ref. 7 the paper applies sequences of rational functions obeying the two basic Sturm conditions. In the special case of a linear tree these rational functions turn out to be polynomials, and the results are equivalent to those in Ref. 7. For a general tree the paper takes into consideration any root—leaf path of the tree. It is shown that the conditions on such a path are similar to those taking place on a linear tree. Some difference occurs in the number of sign reversals in the sequence of coordinates of characteristic vectors. In the case of a linear tree this number depends only on the position of the corresponding characteristic frequency in the spectrum of the matrix. In the case of a root-leaf path of a general tree, this number has to be normally decreased. The correction (which might be zero) is equal to the number of poles of the determinant of the reduced admittance matrix corresponding to the path considered, which does not exceed the characteristic frequency.
APA, Harvard, Vancouver, ISO, and other styles
40

Cai, Leizhen. "Fixed-parameter tractability of graph modification problems for hereditary properties." Information Processing Letters 58, no. 4 (May 1996): 171–76. http://dx.doi.org/10.1016/0020-0190(96)00050-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Schaller, David, Peter F. Stadler, and Marc Hellmuth. "Complexity of modification problems for best match graphs." Theoretical Computer Science 865 (April 2021): 63–84. http://dx.doi.org/10.1016/j.tcs.2021.02.037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Akram, Muhammad, Jawaria Mohsan Dar, and Sundas Shahzadi. "Decision Making Approach under Pythagorean Dombi Fuzzy Graphs for Selection of Leading Textile Industry." Mathematical and Computational Applications 24, no. 4 (December 13, 2019): 102. http://dx.doi.org/10.3390/mca24040102.

Full text
Abstract:
Graphs play a pivotal role in structuring real-world scenarios such as network security and expert systems. Numerous extensions of graph theoretical conceptions have been established for modeling uncertainty in graphical network situations. The Pythagorean Dombi fuzzy graph (PDFG), a generalization of the fuzzy Dombi graph (FDG), is very useful in representing vague relations between several objects, whereas the operational parameter has a flexible nature in decision-making problems. The main objective of this research study is to expand the area of discussion on PDFGs by establishing fruitful results and notions related to operations such as the direct product, Cartesian product, semi-strong product, strong product, and composition on PDFGs. Certain concepts, including the degree of vertices and total degree, are discussed as its modifications. Meanwhile, these outcomes are considered on PDFGs maintaining the strongness property. At the end, an algorithm for Pythagorean Dombi fuzzy multi-criteria decision-making is given, and a numerical example based on the selection of a leading textile industry is put forward to clarify the suitability of the proposed approach.
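The Dombi operator underlying PDFGs can be illustrated with the standard Dombi t-norm; the parameter k is the flexible operational parameter the abstract mentions. A minimal sketch:

```python
def dombi_tnorm(a, b, k=1.0):
    """Dombi t-norm: T(a, b) = 1 / (1 + (((1-a)/a)**k + ((1-b)/b)**k)**(1/k))
    for membership grades in (0, 1]; k >= 1 tunes the operator's strictness."""
    if a == 0.0 or b == 0.0:
        return 0.0
    s = ((1.0 - a) / a) ** k + ((1.0 - b) / b) ** k
    return 1.0 / (1.0 + s ** (1.0 / k))

t = dombi_tnorm(0.5, 0.5)  # two grades of 0.5 combine to 1/3
```

Note the t-norm boundary property T(a, 1) = a, which products of Dombi fuzzy graphs rely on when combining membership grades.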
APA, Harvard, Vancouver, ISO, and other styles
43

Michalak, Łukasz Patryk. "Combinatorial Modifications of Reeb Graphs and the Realization Problem." Discrete & Computational Geometry 65, no. 4 (January 13, 2021): 1038–60. http://dx.doi.org/10.1007/s00454-020-00260-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Gramm, Jens, Jiong Guo, Falk Hüffner, and Rolf Niedermeier. "Automated Generation of Search Tree Algorithms for Hard Graph Modification Problems." Algorithmica 39, no. 4 (March 19, 2004): 321–47. http://dx.doi.org/10.1007/s00453-004-1090-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hellmuth, Marc, Manuela Geiß, and Peter F. Stadler. "Complexity of modification problems for reciprocal best match graphs." Theoretical Computer Science 809 (February 2020): 384–93. http://dx.doi.org/10.1016/j.tcs.2019.12.033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Olevska, Yu B., V. I. Olevskyi, K. I. Timchy, and О. V. Olevskyi. "The method of fuzzy determination of the concentration of heavy metals in the atomic absorption spectral analysis of bottom sediments." Computer Modeling: Analysis, Control, Optimization 7, no. 1 (2020): 29–36. http://dx.doi.org/10.32434/2521-6406-2020-1-7-29-36.

Full text
Abstract:
Due to the technogenic impact on the biosphere and its components, significant amounts of heavy metals and radionuclides end up in the environment. One of the main directions for improving environmental safety is the biotransformation, by vermiculture, of heavy-metal-containing bottom sediments of reservoirs into biologically safe organic fertilizer. Assessing the concentration of heavy metals in bottom sediments is therefore an urgent task whose solution will help preserve the natural environment, improve the condition of soils and, as a result, human health. The obstacle to using bottom sediments in this way is the accuracy with which the content of the various heavy metals affecting the vital activity of earthworms can be determined. The gross and mobile forms of heavy metals in experimental substrates are most accurately determined by atomic absorption spectral analysis, a method of analytical chemistry based on the selective absorption of electromagnetic radiation of a certain wavelength by neutral atoms of the element being determined, free of all molecular bonds. During absorption, an electron moves from the ground energy level to a higher one as a result of photon excitation, and the intensity of the exciting light at the given frequency decreases. Accurate quantification is often hampered by significant matrix interference and non-uniform analyte distribution. To achieve the accuracy and reliability required for vermicultivation, this work modifies the analysis method by applying fuzzy modeling to the experimental results. Mathematically, the calibration graph is constructed using the fuzzy-scale procedure employed for decoding protein weights during electrophoresis.
An algorithm is described for determining the fuzzy concentration of a metal from the atomic absorption signal, followed by defuzzification of the obtained fuzzy concentration for analysis and practical use. Keywords: fuzzy modeling, spectral analysis, heavy metals.
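The calibrate-then-defuzzify idea the abstract describes can be sketched roughly as follows. This is not the authors' implementation: the linear calibration, the triangular membership shape, the `rel_spread` parameter, and all numbers are hypothetical, for illustration only.

```python
# Illustrative sketch: represent a calibration reading as a triangular fuzzy
# number (low, peak, high) and recover a crisp concentration by centroid
# defuzzification. Calibration constants and spread are made-up values.

def fuzzy_concentration(absorbance, slope, intercept, rel_spread=0.05):
    """Map an absorbance signal to a triangular fuzzy concentration."""
    peak = (absorbance - intercept) / slope   # crisp value from a linear calibration
    spread = rel_spread * abs(peak)           # symmetric uncertainty band
    return (peak - spread, peak, peak + spread)

def defuzzify_centroid(tri):
    """Centroid of a triangular membership function (a, b, c)."""
    a, b, c = tri
    return (a + b + c) / 3.0

fuzzy_c = fuzzy_concentration(absorbance=0.42, slope=0.021, intercept=0.003)
crisp_c = defuzzify_centroid(fuzzy_c)
```

For a symmetric triangle the centroid coincides with the peak, so the crisp value equals the calibration estimate; an asymmetric membership function would shift it.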
APA, Harvard, Vancouver, ISO, and other styles
47

Holczinger, Tibor, Olivér Ősz, and Máté Hegyháti. "Scheduling approach for on-site jobs of service providers." Flexible Services and Manufacturing Journal 32, no. 4 (May 25, 2019): 913–48. http://dx.doi.org/10.1007/s10696-019-09359-2.

Full text
Abstract:
Nowadays the successful operation of a company is unimaginable without fast and reliable communication. As a result, so-called Communication Service Providers play an important role in today’s business life. Their orders have to be carried out promptly and dependably, whether they are requests for new installations, modifications, or maintenance tasks. These orders have to be performed at different locations and often have deadlines or strict starting times; violating such a timing requirement usually incurs penalties. In this paper, scheduling problems arising at a Hungarian service provider are examined. At this company, orders are decomposed into smaller tasks that can be performed by specially trained personnel. Transporting these specialists contributes significantly to the costs and to the complexity of their scheduling. The goal is to minimize the overall cost of satisfying all orders within the given time horizon with the company’s available assets. The proposed approach relies on the S-graph framework, which has been applied to various production scheduling problems in the literature. In addition to an unambiguous and sound S-graph model of the examined problem, slight modifications of the scheduling algorithms for cost minimization and new bounding methods have been developed. Several such bounds have been provided and tested for performance and scalability over a large number of generated examples. The sensitivity of the approach to certain problem features has also been examined.
APA, Harvard, Vancouver, ISO, and other styles
48

Peng, Yun, James A. Reggia, and Tao Li. "A CONNECTIONIST APPROACH TO VERTEX COVERING PROBLEMS." International Journal of Neural Systems 03, no. 01 (January 1992): 43–56. http://dx.doi.org/10.1142/s012906579200005x.

Full text
Abstract:
This paper describes a connectionist approach to solving computationally difficult minimum vertex covering problems. This approach uses the graph representing the vertex covering problem as the connectionist network without any modifications (nodes of the connectionist network represent vertices and links represent edges of the given graph). The activation rule governing node behavior is derived by breaking down the global constraints on a solution into local constraints on individual nodes. The resulting model uses a competitive activation mechanism to carry out the computation where vertices compete not by explicit inhibitory links but through common resources (edges). Convergence and other properties of this model are formally established by introducing a monotonically non-increasing global energy function. Simulation results show that this model yields very high accuracy, significantly outperforming a well-known sequential approximation algorithm.
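For contrast with the connectionist model described above, the "well-known sequential approximation algorithm" family it is compared against can be illustrated by a simple greedy heuristic for minimum vertex cover. This is not the paper's network; it is a baseline sketch, and the example graph is made up.

```python
# Greedy baseline for minimum vertex cover: repeatedly select the vertex
# incident to the most still-uncovered edges until every edge is covered.

def greedy_vertex_cover(edges):
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        # count uncovered edges incident to each vertex
        counts = {}
        for e in uncovered:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)   # vertex covering the most edges
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

# A 4-cycle 0-1-2-3 plus the chord {0, 2}: two vertices suffice
print(greedy_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # → {0, 2}
```

The connectionist approach replaces this explicit sequential loop with parallel activation updates in which vertices compete for shared edges; the greedy loop is only the sequential yardstick.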
APA, Harvard, Vancouver, ISO, and other styles
49

Javidian, Mohammad Ali, Marco Valtorta, and Pooyan Jamshidi. "AMP Chain Graphs: Minimal Separators and Structure Learning Algorithms." Journal of Artificial Intelligence Research 69 (October 7, 2020): 419–70. http://dx.doi.org/10.1613/jair.1.12101.

Full text
Abstract:
This paper deals with chain graphs (CGs) under the Andersson–Madigan–Perlman (AMP) interpretation. We address the problem of finding a minimal separator in an AMP CG, namely, finding a set Z of nodes that separates a given non-adjacent pair of nodes such that no proper subset of Z separates that pair. We analyze several versions of this problem and offer polynomial time algorithms for each. These include finding a minimal separator from a restricted set of nodes, finding a minimal separator for two given disjoint sets, and testing whether a given separator is minimal. To address the problem of learning the structure of AMP CGs from data, we show that the PC-like algorithm is order dependent, in the sense that the output can depend on the order in which the variables are given. We propose several modifications of the PC-like algorithm that remove part or all of this order-dependence. We also extend the decomposition-based approach for learning Bayesian networks (BNs) to learn AMP CGs, which include BNs as a special case, under the faithfulness assumption. We prove the correctness of our extension using the minimal separator results. Experiments using standard benchmarks and synthetically generated models and data demonstrate the competitive performance of our decomposition-based method, called LCD-AMP, in comparison with (modified versions of) the PC-like algorithm. The LCD-AMP algorithm usually outperforms the PC-like algorithm, and our modifications of the PC-like algorithm learn structures that are more similar to the underlying ground-truth graphs than the original PC-like algorithm does, especially in high-dimensional settings. In particular, we empirically show that the results of both algorithms are more accurate and more stable when the sample size is reasonably large and the underlying graph is sparse.
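The "testing whether a given separator is minimal" subproblem has a simple core that can be sketched on a plain undirected graph: Z is a minimal u–v separator iff Z separates u and v but no Z\{z} does. For AMP CGs the same check runs on a suitably augmented undirected graph, which this toy version does not construct; the example graph is made up.

```python
from collections import deque

def separates(adj, u, v, Z):
    """BFS from u avoiding Z; Z separates u and v iff v is unreachable."""
    if u in Z or v in Z:
        raise ValueError("endpoints may not lie in the separator")
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return False
        for y in adj.get(x, ()):
            if y not in Z and y not in seen:
                seen.add(y)
                queue.append(y)
    return True

def is_minimal_separator(adj, u, v, Z):
    """Z is minimal iff it separates u and v and no Z minus one element does."""
    return separates(adj, u, v, Z) and all(
        not separates(adj, u, v, Z - {z}) for z in Z)

adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}    # a 4-cycle
print(is_minimal_separator(adj, 0, 3, {1, 2}))         # → True
```

Each call to `separates` is a linear-time BFS, so the whole minimality test costs O(|Z| (n + m)), consistent with the polynomial bounds the abstract claims for the richer CG variants.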
APA, Harvard, Vancouver, ISO, and other styles
50

Egorov, V. V. "Multi-criteria path rationalization in the conditions of multi-type passenger transport systems." Vestnik Universiteta, no. 5 (July 6, 2021): 109–16. http://dx.doi.org/10.26425/1816-4277-2021-5-109-116.

Full text
Abstract:
The article proposes methods for finding passenger travel routes when one or more optimization criteria must be taken into account, in the presence of a pedestrian system and multi-type transport systems with their own topologies, parameter sets, and tariff plans. The research was carried out by mathematical modeling of the transport system as a deterministic graph model. Dijkstra's algorithm was chosen as the base algorithm; it was then modified and used to construct a new search technique. As a result, the study obtained algorithms for solving single-criteria and multi-criteria problems on graphs. For multi-criteria problems, the author used the convolution method and the method of ordering criteria by decreasing significance. The developed algorithms are intended for end-user information systems and for the organizations that design and manage transport networks.
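The convolution method mentioned here scalarizes a vector of edge costs into a single weighted sum and then runs ordinary Dijkstra. A minimal sketch under that assumption (the network, cost vectors, and weights are invented; this is not the article's implementation):

```python
import heapq

def dijkstra_convolution(graph, source, target, weights):
    """graph[u] = [(v, (cost1, cost2, ...)), ...]; minimize sum(w_i * cost_i)."""
    scalar = lambda costs: sum(w * c for w, c in zip(weights, costs))
    dist = {source: 0.0}
    heap = [(0.0, source, [source])]
    while heap:
        d, u, path = heapq.heappop(heap)
        if u == target:
            return d, path
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, costs in graph.get(u, ()):
            nd = d + scalar(costs)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v, path + [v]))
    return float("inf"), None

# criteria: (travel time, fare); weights favour time 2:1
net = {"A": [("B", (10, 2)), ("C", (4, 1))],
       "B": [("D", (3, 1))],
       "C": [("B", (3, 0)), ("D", (12, 0))]}
print(dijkstra_convolution(net, "A", "D", weights=(2.0, 1.0)))
```

The lexicographic alternative (ordering criteria by decreasing significance) would instead compare full cost tuples criterion by criterion rather than collapsing them to one number.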
APA, Harvard, Vancouver, ISO, and other styles