Academic literature on the topic 'Complete Call Graph'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Complete Call Graph.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Complete Call Graph"

1

Kaliraj, K., V. Kowsalya, and Vernold Vivin. "On star coloring of Mycielskians." Indonesian Journal of Combinatorics 2, no. 2 (December 21, 2018): 82. http://dx.doi.org/10.19184/ijc.2018.2.2.3.

Full text
Abstract:
In a search for triangle-free graphs with arbitrarily large chromatic numbers, Mycielski developed a graph transformation that turns a graph G into a new graph μ(G), now called the Mycielskian of G, which has the same clique number as G and whose chromatic number equals χ(G) + 1. In this paper, we find the star chromatic number for the Mycielskian of complete graphs, paths, cycles, and complete bipartite graphs.
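The Mycielskian construction mentioned in this abstract is easy to reproduce. The following sketch (our own illustration, not code from the paper) builds μ(G) from an adjacency dict: keep the original edges, join each shadow vertex u_i to the neighbours of v_i, and join a new vertex w to every shadow.

```python
def mycielskian(adj):
    """Return the Mycielskian of a graph given as {vertex: set-of-neighbours}."""
    mu = {v: set(ns) for v, ns in adj.items()}
    shadows = {v: ("u", v) for v in adj}   # one shadow copy per vertex
    w = "w"
    for v in adj:
        mu[shadows[v]] = set()
    mu[w] = set()
    for v, ns in adj.items():
        for n in ns:                       # shadow u_v sees the neighbours of v
            mu[shadows[v]].add(n)
            mu[n].add(shadows[v])
        mu[shadows[v]].add(w)              # w sees every shadow
        mu[w].add(shadows[v])
    return mu

# mu(C5) is the Groetzsch graph: 11 vertices and 20 edges.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
groetzsch = mycielskian(c5)
```

In general μ(G) has 2n + 1 vertices and 3m + n edges for an n-vertex, m-edge graph G.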
APA, Harvard, Vancouver, ISO, and other styles
2

Badawi, Ayman, and Roswitha Rissner. "Ramsey numbers of partial order graphs (comparability graphs) and implications in ring theory." Open Mathematics 18, no. 1 (December 31, 2020): 1645–57. http://dx.doi.org/10.1515/math-2020-0085.

Full text
Abstract:
For a partially ordered set (A, ≤), let G_A be the simple, undirected graph with vertex set A such that two vertices a ≠ b ∈ A are adjacent if either a ≤ b or b ≤ a. We call G_A the partial order graph or comparability graph of A. Furthermore, we say that a graph G is a partial order graph if there exists a partially ordered set A such that G = G_A. For a class C of simple, undirected graphs and n, m ≥ 1, we define the Ramsey number R_C(n, m) with respect to C to be the minimal number of vertices r such that every induced subgraph of an arbitrary graph in C consisting of r vertices contains either a complete n-clique K_n or an independent set consisting of m vertices. In this paper, we determine the Ramsey number with respect to some classes of partial order graphs. Furthermore, some implications of Ramsey numbers in ring theory are discussed.
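The comparability graph G_A defined in this abstract is straightforward to materialize; a small sketch (our own illustration, using the divisibility order on {1, ..., 12} as the example poset):

```python
from itertools import combinations

def comparability_graph(elements, leq):
    """Edges of G_A: a != b are adjacent iff a <= b or b <= a in the poset."""
    return {(a, b) for a, b in combinations(elements, 2)
            if leq(a, b) or leq(b, a)}

# Divisibility poset on {1, ..., 12}: a <= b iff a divides b.
A = range(1, 13)
G_A = comparability_graph(A, lambda a, b: b % a == 0)
```

Here 2 and 4 are comparable (2 | 4) and hence adjacent, while 2 and 3 are incomparable and hence not adjacent.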
3

Voorhees, Burton, and Bergerud Ryder. "Simple graph models of information spread in finite populations." Royal Society Open Science 2, no. 5 (May 2015): 150028. http://dx.doi.org/10.1098/rsos.150028.

Full text
Abstract:
We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call ‘single-link’ graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian and another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components and these are compared to the partial bipartite graphs.
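The "Moran probability for a complete graph" used as a reference point above is the classical fixation probability of a single mutant of relative fitness r in a well-mixed population of size N, ρ = (1 − 1/r)/(1 − 1/r^N); a minimal sketch:

```python
def moran_fixation(r, N):
    """Fixation probability of one mutant with relative fitness r in a
    well-mixed (complete-graph) Moran process on N individuals."""
    if r == 1:
        return 1.0 / N            # neutral mutant: pure drift
    return (1 - 1 / r) / (1 - 1 / r ** N)
```

For example, a mutant twice as fit as the residents (r = 2) in a population of N = 10 fixates with probability 512/1023 ≈ 0.5005.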
4

Corò, Federico, Gianlorenzo D'Angelo, and Cristina M. Pinotti. "Adding Edges for Maximizing Weighted Reachability." Algorithms 13, no. 3 (March 18, 2020): 68. http://dx.doi.org/10.3390/a13030068.

Full text
Abstract:
In this paper, we consider the problem of improving the reachability of a graph. We approach the problem from a graph augmentation perspective, in which a limited-size set of edges is added to the graph to increase the overall number of reachable nodes. We call this new problem the Maximum Connectivity Improvement (MCI) problem. We first show that, for the purpose of solving MCI, we can focus on Directed Acyclic Graphs (DAGs) only. We show that approximating the MCI problem on DAGs to within any constant factor greater than 1 − 1/e is NP-hard even if we restrict to graphs with a single source or a single sink, and the problem remains NP-complete if we further restrict to unitary weights. This paper also presents a dynamic programming algorithm for the MCI problem on trees with a single source that produces optimal solutions in polynomial time. Finally, we propose two polynomial-time greedy algorithms that guarantee a (1 − 1/e)-approximation ratio on DAGs with a single source, a single sink, or two sources.
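The greedy idea behind such (1 − 1/e) guarantees can be sketched as follows: repeatedly add the candidate edge whose insertion most increases the number of nodes reachable from the source. This is our own illustration of the generic greedy scheme, not the paper's exact algorithm; the candidate set (edges out of the source) is an assumption for the example.

```python
def reachable(adj, s):
    """Set of nodes reachable from s in a directed graph {u: set-of-succs}."""
    seen, stack = {s}, [s]
    while stack:
        for v in adj.get(stack.pop(), ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def greedy_mci(adj, source, candidates, k):
    """Greedily add k edges from `candidates`, each time picking the edge
    that maximizes the gain in nodes reachable from `source`."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    chosen = []
    for _ in range(k):
        base = len(reachable(adj, source))
        def gain(e):
            u, v = e
            adj.setdefault(u, set()).add(v)       # tentatively insert edge
            g = len(reachable(adj, source)) - base
            adj[u].discard(v)                     # undo
            return g
        best = max(candidates, key=gain)
        adj.setdefault(best[0], set()).add(best[1])
        chosen.append(best)
        candidates = [e for e in candidates if e != best]
    return chosen, len(reachable(adj, source))

# Toy DAG: adding s->b connects a whole chain, adding s->e only one node.
dag = {"s": {"a"}, "b": {"c"}, "c": {"d"}}
chosen, total = greedy_mci(dag, "s", [("s", "b"), ("s", "e")], k=1)
```

With one edge allowed, the greedy picks ("s", "b") (gain 3) over ("s", "e") (gain 1), reaching 5 nodes in total.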
5

ORENSHTEIN, TAL, and IGOR SHINKAR. "Greedy Random Walk." Combinatorics, Probability and Computing 23, no. 2 (November 20, 2013): 269–89. http://dx.doi.org/10.1017/s0963548313000552.

Full text
Abstract:
We study a discrete-time self-interacting random process on graphs, which we call the greedy random walk. The walker is located initially at some vertex. As time evolves, each vertex maintains the set of adjacent edges touching it that have not yet been crossed by the walker. At each step, the walker, being at some vertex, picks an adjacent edge among the edges that have not been traversed thus far according to some (deterministic or randomized) rule. If all the adjacent edges have already been traversed, then an adjacent edge is chosen uniformly at random. After picking an edge, the walker jumps along it to the neighbouring vertex. We show that the expected edge cover time of the greedy random walk is linear in the number of edges for certain natural families of graphs. Examples of such graphs include the complete graph, even-degree expanders of logarithmic girth, and the hypercube graph. We also show that the greedy random walk is transient in $\mathbb{Z}^d$ for all d ≥ 3.
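The process is simple to simulate; a sketch (our own illustration) using the randomized rule "pick a uniformly random uncrossed incident edge, falling back to any incident edge", run on the complete graph K5:

```python
import random

def greedy_random_walk_cover(adj, start, rng, max_steps=100_000):
    """Run a greedy random walk until every edge is crossed (or max_steps).
    Returns (steps taken, whether all edges were covered)."""
    uncrossed = {frozenset((u, v)) for u in adj for v in adj[u]}
    v, steps = start, 0
    while uncrossed and steps < max_steps:
        fresh = [w for w in adj[v] if frozenset((v, w)) in uncrossed]
        w = rng.choice(fresh) if fresh else rng.choice(adj[v])
        uncrossed.discard(frozenset((v, w)))
        v, steps = w, steps + 1
    return steps, not uncrossed

# Complete graph K5: 10 edges, so the edge cover time is at least 10.
k5 = {i: [j for j in range(5) if j != i] for i in range(5)}
steps, covered = greedy_random_walk_cover(k5, 0, random.Random(0))
```

On small graphs like K5, the walk covers all edges in a number of steps close to the number of edges, in line with the linear edge-cover-time results quoted above.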
6

Simonyi, Gábor. "On Colorful Edge Triples in Edge-Colored Complete Graphs." Graphs and Combinatorics 36, no. 6 (September 9, 2020): 1623–37. http://dx.doi.org/10.1007/s00373-020-02214-4.

Full text
Abstract:
An edge-coloring of the complete graph K_n we call F-caring if it leaves no F-subgraph of K_n monochromatic and at the same time every subset of |V(F)| vertices contains in it at least one completely multicolored version of F. For the first two meaningful cases, when F = K_{1,3} and F = P_4, we determine for infinitely many n the minimum number of colors needed for an F-caring edge-coloring of K_n. An explicit family of 2⌈log_2 n⌉ 3-edge-colorings of K_n such that every quadruple of its vertices contains a totally multicolored P_4 in at least one of them is also presented. Investigating related Ramsey-type problems, we also show that the Shannon (OR-)capacity of the Grötzsch graph is strictly larger than that of the cycle of length 5.
7

Górska, Joanna, and Zdzisław Skupień. "A partial refining of the Erdős-Kelly regulation." Opuscula Mathematica 39, no. 3 (2019): 355–60. http://dx.doi.org/10.7494/opmath.2019.39.3.355.

Full text
Abstract:
The aim of this note is to advance the refining of the Erdős-Kelly result on graphical inducing regularization. The operation of inducing regulation (on graphs or multigraphs) with prescribed maximum vertex degree was originated by D. König in 1916. As is shown by Chartrand and Lesniak in their textbook Graphs & Digraphs (1996), an iterated construction for graphs can result in a regularization with many new vertices. Erdős and Kelly presented (1963, 1967) a simple and elegant numerical method of determining, for any simple \(n\)-vertex graph \(G\) with maximum vertex degree \(\Delta\), the exact minimum number, say \(\theta = \theta(G)\), of new vertices in a \(\Delta\)-regular graph \(H\) which includes \(G\) as an induced subgraph. The number \(\theta(G)\), which we call the cost of regulation of \(G\), has been upper-bounded by the order of \(G\), the bound being attained for each \(n\ge4\); e.g., then the edge-deleted complete graph \(K_n-e\) has \(\theta=n\). For \(n\ge 4\), we present all factors of \(K_n\) with \(\theta=n\) and next with \(\theta=n-1\). Therein, in the case \(\theta=n-1\) with \(n\) odd only, we show that a specific extra structure, a non-matching, is required.
8

Garamvölgyi, Dániel, and Tibor Jordán. "Graph Reconstruction from Unlabeled Edge Lengths." Discrete & Computational Geometry 66, no. 1 (February 26, 2021): 344–85. http://dx.doi.org/10.1007/s00454-021-00275-7.

Full text
Abstract:
A d-dimensional framework is a pair (G, p), where G = (V, E) is a graph and p is a map from V to R^d. The length of an edge uv ∈ E in (G, p) is the distance between p(u) and p(v). The framework is said to be globally rigid in R^d if every other d-dimensional framework (G, q), in which the corresponding edge lengths are the same, is congruent to (G, p). In a recent paper Gortler, Theran, and Thurston proved that if every generic framework (G, p) in R^d is globally rigid for some graph G on n ≥ d + 2 vertices (where d ≥ 2), then already the set of (unlabeled) edge lengths of a generic framework (G, p), together with n, determines the framework up to congruence. In this paper we investigate the corresponding unlabeled reconstruction problem in the case when the above generic global rigidity property does not hold for the graph. We provide families of graphs G for which the set of (unlabeled) edge lengths of any generic framework (G, p) in d-space, along with the number of vertices, uniquely determines the graph, up to isomorphism. We call these graphs weakly reconstructible. We also introduce the concept of strong reconstructibility; in this case the labeling of the edges is also determined by the set of edge lengths of any generic framework. For d = 1, 2 we give a partial characterization of weak reconstructibility as well as a complete characterization of strong reconstructibility of graphs. In particular, in the low-dimensional cases we describe the family of weakly reconstructible graphs that are rigid but not redundantly rigid.
9

Manuel, Paul, Sandi Klavžar, Antony Xavier, Andrew Arokiaraj, and Elizabeth Thomas. "Strong edge geodetic problem in networks." Open Mathematics 15, no. 1 (October 3, 2017): 1225–35. http://dx.doi.org/10.1515/math-2017-0101.

Full text
Abstract:
Geodesic covering problems form a widely researched topic in graph theory. One such problem is the geodetic problem introduced by Harary et al. [Math. Comput. Modelling, 1993, 17, 89-95]. Here we introduce a variation of the geodetic problem and call it the strong edge geodetic problem. We illustrate how this problem evolved from social transport networks. It is shown that the strong edge geodetic problem is NP-complete. We derive lower and upper bounds for the strong edge geodetic number and demonstrate that these bounds are sharp. We produce exact solutions for trees, block graphs, silicate networks and glued binary trees without randomization.
10

Zou, Deqing, Yueming Wu, Siru Yang, Anki Chauhan, Wei Yang, Jiangying Zhong, Shihan Dou, and Hai Jin. "IntDroid." ACM Transactions on Software Engineering and Methodology 30, no. 3 (May 2021): 1–32. http://dx.doi.org/10.1145/3442588.

Full text
Abstract:
Android, the most popular mobile operating system, has attracted millions of users around the world. Meanwhile, the number of new Android malware instances has grown exponentially in recent years. On the one hand, existing Android malware detection systems have shown that distilling the program semantics into a graph representation and detecting malicious programs by conducting graph matching can achieve high accuracy on detecting Android malware. However, these traditional graph-based approaches always perform expensive program analysis and suffer from low scalability on malware detection. On the other hand, because of the high scalability of social network analysis, it has been applied to large-scale malware detection. However, the social-network-analysis-based method only considers simple semantic information (i.e., centrality) for achieving market-wide mobile malware scanning, which may limit the detection effectiveness when benign apps show some similar behaviors as malware. In this article, we aim to combine the high accuracy of traditional graph-based methods with the high scalability of social-network-analysis-based methods for Android malware detection. Instead of using traditional heavyweight static analysis, we treat function call graphs of apps as complex social networks and apply social-network-based centrality analysis to unearth the central nodes within call graphs. After obtaining the central nodes, the average intimacies between sensitive API calls and central nodes are computed to represent the semantic features of the graphs. We implement our approach in a tool called IntDroid and evaluate it on a dataset of 3,988 benign samples and 4,265 malicious samples. Experimental results show that IntDroid is capable of detecting Android malware with an F-measure of 97.1% while maintaining a true-positive rate of 99.1%.
Although it does not scale as well as a social-network-analysis-based method (i.e., MalScan), IntDroid is more than six times faster than a traditional graph-based method, MaMaDroid. Moreover, in a corpus of apps collected from the Google Play market, IntDroid is able to identify 28 zero-day malware samples that evade detection by existing tools, one of which has been downloaded and installed by more than ten million users. This app has also been flagged as malware by six anti-virus scanners in VirusTotal, one of which is Symantec Mobile Insight.
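The pipeline described in this abstract (central nodes in a call graph, then intimacy to sensitive APIs) can be sketched in a few lines. This is a hedged toy illustration: the call graph, the degree-based centrality choice, and the inverse-shortest-path "intimacy" formula are our own assumptions, not IntDroid's exact definitions.

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def intimacy_features(calls, sensitive, top_k=1):
    """Pick the top_k highest-degree ("central") nodes of the call graph,
    then score each sensitive API by its average inverse distance to them."""
    und = {}
    for u, vs in calls.items():            # treat calls as undirected ties
        for v in vs:
            und.setdefault(u, set()).add(v)
            und.setdefault(v, set()).add(u)
    central = sorted(und, key=lambda n: len(und[n]), reverse=True)[:top_k]
    feats = {}
    for api in sensitive:
        d = bfs_dist(und, api)
        vals = [1.0 / d[c] if d.get(c) else 0.0 for c in central]
        feats[api] = sum(vals) / len(vals)
    return central, feats

# Hypothetical call graph: "main" fans out, "a" reaches a sensitive API.
calls = {"main": ["a", "b", "c"], "a": ["sendTextMessage"], "b": [], "c": []}
central, feats = intimacy_features(calls, ["sendTextMessage"])
```

Here "main" is the central node, and the sensitive API two hops away gets intimacy 0.5; a vector of such scores would then feed a classifier.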
More sources

Dissertations / Theses on the topic "Complete Call Graph"

1

Knüpfer, Andreas. "Advanced Memory Data Structures for Scalable Event Trace Analysis." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1239979718089-56362.

Full text
Abstract:
The thesis presents a contribution to the analysis and visualization of computational performance based on event traces with a particular focus on parallel programs and High Performance Computing (HPC). Event traces contain detailed information about specified incidents (events) during run-time of programs and allow minute investigation of dynamic program behavior, various performance metrics, and possible causes of performance flaws. Due to long running and highly parallel programs and very fine detail resolutions, event traces can accumulate huge amounts of data which become a challenge for interactive as well as automatic analysis and visualization tools. The thesis proposes a method of exploiting redundancy in the event traces in order to reduce the memory requirements and the computational complexity of event trace analysis. The sources of redundancy are repeated segments of the original program, either through iterative or recursive algorithms or through SPMD-style parallel programs, which produce equal or similar repeated event sequences. The data reduction technique is based on the novel Complete Call Graph (CCG) data structure which allows domain specific data compression for event traces in a combination of lossless and lossy methods. All deviations due to lossy data compression can be controlled by constant bounds. The compression of the CCG data structure is incorporated in the construction process, such that at no point substantial uncompressed parts have to be stored. Experiments with real-world example traces reveal the potential for very high data compression. The results range from factors of 3 to 15 for small scale compression with minimum deviation of the data to factors > 100 for large scale compression with moderate deviation. Based on the CCG data structure, new algorithms for the most common evaluation and analysis methods for event traces are presented, which require no explicit decompression.
By avoiding repeated evaluation of formerly redundant event sequences, the computational effort of the new algorithms can be reduced to the same extent as memory consumption. The thesis includes a comprehensive discussion of the state of the art and related work, a detailed presentation of the design of the CCG data structure, an elaborate description of algorithms for construction, compression, and analysis of CCGs, and an extensive experimental validation of all components.
2

Knüpfer, Andreas. "Advanced Memory Data Structures for Scalable Event Trace Analysis." Doctoral thesis, Technische Universität Dresden, 2008. https://tud.qucosa.de/id/qucosa%3A23611.

Full text
Abstract:
The thesis presents a contribution to the analysis and visualization of computational performance based on event traces with a particular focus on parallel programs and High Performance Computing (HPC). Event traces contain detailed information about specified incidents (events) during run-time of programs and allow minute investigation of dynamic program behavior, various performance metrics, and possible causes of performance flaws. Due to long running and highly parallel programs and very fine detail resolutions, event traces can accumulate huge amounts of data which become a challenge for interactive as well as automatic analysis and visualization tools. The thesis proposes a method of exploiting redundancy in the event traces in order to reduce the memory requirements and the computational complexity of event trace analysis. The sources of redundancy are repeated segments of the original program, either through iterative or recursive algorithms or through SPMD-style parallel programs, which produce equal or similar repeated event sequences. The data reduction technique is based on the novel Complete Call Graph (CCG) data structure which allows domain specific data compression for event traces in a combination of lossless and lossy methods. All deviations due to lossy data compression can be controlled by constant bounds. The compression of the CCG data structure is incorporated in the construction process, such that at no point substantial uncompressed parts have to be stored. Experiments with real-world example traces reveal the potential for very high data compression. The results range from factors of 3 to 15 for small scale compression with minimum deviation of the data to factors > 100 for large scale compression with moderate deviation. Based on the CCG data structure, new algorithms for the most common evaluation and analysis methods for event traces are presented, which require no explicit decompression.
By avoiding repeated evaluation of formerly redundant event sequences, the computational effort of the new algorithms can be reduced to the same extent as memory consumption. The thesis includes a comprehensive discussion of the state of the art and related work, a detailed presentation of the design of the CCG data structure, an elaborate description of algorithms for construction, compression, and analysis of CCGs, and an extensive experimental validation of all components.

Book chapters on the topic "Complete Call Graph"

1

Lepiller, Julien, Ruzica Piskac, Martin Schäf, and Mark Santolucito. "Analyzing Infrastructure as Code to Prevent Intra-update Sniping Vulnerabilities." In Tools and Algorithms for the Construction and Analysis of Systems, 105–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72013-1_6.

Full text
Abstract:
Infrastructure as Code is a new approach to computing infrastructure management that allows users to leverage tools such as version control, automatic deployments, and program analysis for infrastructure configurations. This approach allows for faster and more homogeneous configuration of a complete infrastructure. Infrastructure as Code languages, such as CloudFormation or Terraform, use a declarative model so that users only need to describe the desired state of the infrastructure. However, in practice, these languages are not processed atomically. During an upgrade, the infrastructure goes through a series of intermediate states. We identify a security vulnerability that occurs during an upgrade even when the initial and final states of the infrastructure are secure, and we show that such vulnerabilities are possible in Amazon's AWS and Google Cloud. We call such attacks intra-update sniping vulnerabilities. In order to mitigate this shortcoming, we present a technique that detects such vulnerabilities and pinpoints the root causes of insecure deployment migrations. We implement this technique in a tool, Häyhä, that uses dataflow graph analysis. We evaluate our tool on a set of open-source CloudFormation templates and find that it is scalable and could be used as part of a deployment workflow.
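The intra-update sniping hazard described here can be illustrated with a toy model: both the old and the new configuration satisfy a security invariant, but a non-atomic update that removes rules before adding their replacements passes through an insecure intermediate state. The rule names, the "removals first" update order, and the invariant are all our own illustrative assumptions, not Häyhä's model.

```python
def intermediate_states(old_rules, new_rules):
    """All states a non-atomic upgrade passes through, assuming removals
    are applied before additions (an illustrative update model)."""
    state = set(old_rules)
    states = [frozenset(state)]
    for r in old_rules - new_rules:     # removals applied first
        state.discard(r)
        states.append(frozenset(state))
    for r in new_rules - old_rules:     # then additions
        state.add(r)
        states.append(frozenset(state))
    return states

def is_secure(rules):
    # toy invariant: some deny rule must guard the database
    return any(r.startswith("deny:db") for r in rules)

old = {"deny:db-from-internet", "allow:web"}
new = {"deny:db-except-app", "allow:web"}
vulnerable = [s for s in intermediate_states(old, new) if not is_secure(s)]
```

Both endpoints are secure, yet exactly one intermediate state leaves the database unguarded; a detector in the spirit of the paper would flag that state and its cause.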
2

Knüpfer, Andreas, and Wolfgang E. Nagel. "New Algorithms for Performance Trace Analysis Based on Compressed Complete Call Graphs." In Lecture Notes in Computer Science, 116–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11428848_15.

Full text
3

Robinson, Marin S., Fredricka L. Stoller, Molly Constanza-Robinson, and James K. Jones. "Formatting Figures, Tables, and Schemes." In Write Like a Chemist. Oxford University Press, 2008. http://dx.doi.org/10.1093/oso/9780195367423.003.0025.

Full text
Abstract:
This chapter focuses on general formatting guidelines for three commonly used graphics in chemistry writing: figures, tables, and schemes. The major purposes and uses for each graphic are described, and common formatting expectations are shared. Before-correction and after-correction examples are used to identify common formatting errors and ways to correct them. Each section of the chapter ends with a table of useful guidelines. By the end of the chapter, you will be able to do the following: ■ Know when it is appropriate to include a figure, table, or scheme ■ Recognize common formatting mistakes in figures, tables, and schemes ■ Format figures, tables, and schemes in appropriate and conventional ways As you work through the chapter, you will format your own graphic, guided by the Formatting on Your Own task at the end of the chapter. Graphics, in combination with the text, allow authors to communicate complex information efficiently. When done properly, text and graphics work together, reinforcing each other without duplicating information. Like the text, graphics must follow formatting conventions. In this chapter, we call your attention to some common formatting practices. Of course, we cannot address all of the formatting practices in chemistry, nor can we anticipate how these conventions will change over time. Thus, use this chapter for basic formatting information and for insights into the many details involved in a properly formatted graphic. As always, consult The ACS Style Guide and your targeted journal’s Information for Authors for more detailed and current information. Authors use figures (e.g., graphs, illustrations, photographs) to display scientific information. Examples of figures are included throughout the textbook, for instance, an ion source (excerpt 3S), a comet assay (excerpt 4E), a chromatogram (excerpt 9F), and an illustration of hydrogel adsorption (excerpt 131). 
Figures are numbered consecutively throughout a paper (Figure 1, Figure 2, etc.) and mentioned by name and number in text preceding the figure. Although many figure types exist, by far the most common is the graph. Because of their frequency, we devote this section of the chapter solely to formatting graphs; however, the guidelines presented are applicable to many other figure types as well.
4

Saglietto, Laurence, Delphine David, and Cécile Cezanne. "Rethinking Social Capital Measurement." In Advances in Finance, Accounting, and Economics, 248–68. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-0959-2.ch012.

Full text
Abstract:
Social capital is decisive to many strategic objectives of organizations. Although it has been extensively defined in the literature, social capital continues to be debated, in particular concerning its measurement. What is the usability of the existing indicators of social capital? How do managers decide which one they would prefer? An overview of the literature reveals that there are very few measurements of social capital, and those which have already been developed are very complex. Direct measurements appear to provide a better understanding of the complexity of relationships than aggregated measurements. Yet, we show that they are of unsatisfactory quality. Using simple counter-examples, we show that they give rise to contradictions. From this discussion, and using graph theory, we propose two complementary indicators of social capital, which we call “relational strength” and “relational potential”. These operational indicators can be handled by any actor to position themselves within their social sphere.
5

Lovejoy, Shaun. "New worlds versus scaling: From van Leeuwenhoek to Mandelbrot." In Weather, Macroweather, and the Climate. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190864217.003.0006.

Full text
Abstract:
We just took a voyage through scales, noticing structures in cloud photographs and wiggles on graphs. Collectively, they spanned ranges of scale over factors of billions in space and billions of billions in time. We are immediately confronted with the question: How can we conceptualize and model such fantastic variation? Two extreme approaches have developed. For the moment, I call the dominant one the new worlds view, after Antoni van Leeuwenhoek (1632–1723), who developed a powerful early microscope. The other is the self-similar (scaling) view by Benoit Mandelbrot, which I discuss in the next section. My own view—scaling but with the notion of scale itself an emergent property—is discussed in Chapter 3. When van Leeuwenhoek peered through his microscope, in his amazement he is said to have discovered a “new world in a drop of water”: “animalcules,” the first microorganisms (Fig. 2.1). Since then, the idea that zooming reveals something completely new has become second nature. In the twenty-first century, atom-imaging microscopes are developed precisely because of the promise of such new worlds. The scale-by-scale “newness” idea was graphically illustrated by K. Boeke’s highly influential book Cosmic View, which starts with a photograph of a girl holding a cat, first zooming away to show the surrounding vast reaches of outer space, and then zooming in until reaching the nucleus of an atom. The book was incredibly successful. It was included in Hutchins and Adler’s Gateway to the Great Books, a ten-volume series featuring works by Aristotle, Shakespeare, Einstein, and others. In 1968, two films were based on Boeke’s book—Cosmic Zoom and Powers of Ten (1968, re-released in 1977)—encouraging the idea that nearly every power of ten in scale hosts different phenomena. More recently (2012), there’s even the interactive Cosmic Eye app for the iPad, iPhone, or iPod, not to mention a lavish update: the “Zoomable Universe.”
6

Nowak, Martin A., and Karl Sigmund. "How populations cohere: five rules for cooperation." In Theoretical Ecology. Oxford University Press, 2007. http://dx.doi.org/10.1093/oso/9780199209989.003.0005.

Full text
Abstract:
Subsequent chapters in this volume deal with populations as dynamic entities in time and space. Populations are, of course, made up of individuals, and the parameters which characterize aggregate behavior—population growth rate and so on—ultimately derive from the behavioral ecology and life-history strategies of these constituent individuals. In evolutionary terms, the properties of populations can only be understood in terms of individuals, which comes down to studying how life-history choices (and consequent gene-frequency distributions) are shaped by environmental forces. Many important aspects of group behavior—from alarm calls of birds and mammals to the complex institutions that have enabled human societies to flourish—pose problems of how cooperative behavior can evolve and be maintained. The puzzle was emphasized by Darwin, and remains the subject of active research today. In this book, we leave the large subject of individual organisms’ behavioral ecology and life-history choices to texts in that field (e.g. Krebs and Davies, 1997). Instead, we lead with a survey of work, much of it very recent, on five different kinds of mechanism whereby cooperative behavior may be maintained in a population, despite the inherent difficulty that cheats may prosper by enjoying the benefits of cooperation without paying the associated costs. Cooperation means that a donor pays a cost, c, for a recipient to get a benefit, b. In evolutionary biology, cost and benefit are measured in terms of fitness. While mutation and selection represent the main forces of evolutionary dynamics, cooperation is a fundamental principle that is required for every level of biological organization. Individual cells rely on cooperation among their components. Multicellular organisms exist because of cooperation among their cells. Social insects are masters of cooperation. Most aspects of human society are based on mechanisms that promote cooperation.
Whenever evolution constructs something entirely new (such as multicellularity or human language), cooperation is needed. Evolutionary construction is based on cooperation. The five rules for cooperation which we examine in this chapter are: kin selection, direct reciprocity, indirect reciprocity, graph selection, and group selection. Each of these can promote cooperation if specific conditions are fulfilled.

Conference papers on the topic "Complete Call Graph"

1

Al-Ghafees, Mohammed, and James Whittaker. "Markov Chain-based Test Data Adequacy Criteria: a Complete Family." In 2002 Informing Science + IT Education Conference. Informing Science Institute, 2002. http://dx.doi.org/10.28945/2435.

Full text
Abstract:
The idea of using white box data flow information to select test cases is well established and has proven an effective testing strategy. This paper extends the concept of data flow testing to the case in which the source code is unavailable and only black box information can be used to make test selection decisions. In such cases, data flow testing is performed by constructing a behavior model of the software under test to act as a surrogate for the program flow graph upon which white box data flow testing is based. The behavior model is a graph representation of externally visible software state and input-induced state transitions. We first summarize the modeling technique, then define the new data flow selection rules and describe how they are used to generate test cases. Theoretical proof of concept is provided based on a characteristic we call transition variation. Finally, we present results from laboratory experiments in which we compare the fault detection capability of black box data flow tests to other common techniques of test generation from graphs, including simple random sampling, operational profile sampling, and state transition coverage.
2

Park, Hogun, and Jennifer Neville. "Exploiting Interaction Links for Node Classification with Deep Graph Neural Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/447.

Full text
Abstract:
Node classification is an important problem in relational machine learning. However, in scenarios where graph edges represent interactions among the entities (e.g., over time), the majority of current methods either summarize the interaction information into link weights or aggregate the links to produce a static graph. In this paper, we propose a neural network architecture that jointly captures both temporal and static interaction patterns, which we call Temporal-Static-Graph-Net (TSGNet). Our key insight is that leveraging both a static neighbor encoder, which can learn aggregate neighbor patterns, and a graph neural network-based recurrent unit, which can capture complex interaction patterns, improves the performance of node classification. In our experiments on node classification tasks, TSGNet produces significant gains compared to state-of-the-art methods—reducing classification error by up to 24%, and by an average of 10% compared to the best competitor, on four real-world networks and one synthetic dataset.
3

Orru, Matteo, Simone Porru, Roberto Tonelli, and Michele Marchesi. "A Preliminary Study on Mobile Apps Call Graphs through a Complex Network Approach." In 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). IEEE, 2015. http://dx.doi.org/10.1109/sitis.2015.95.

Full text
4

Bohnet, Johannes, and Jürgen Döllner. "Visual exploration of function call graphs for feature location in complex software systems." In the 2006 ACM symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1148493.1148508.

Full text
5

Sharma, Govind, Prasanna Patil, and M. Narasimha Murty. "C3MM: Clique-Closure based Hyperlink Prediction." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/465.

Full text
Abstract:
Usual networks lossily (if not incorrectly) represent higher-order relations, i.e. those between multiple entities instead of a pair. This calls for complex structures such as hypergraphs to be used instead. Akin to the link prediction problem in graphs, we deal with hyperlink (higher-order link) prediction in hypergraphs. With a handful of solutions in the literature that seem to have merely scratched the surface, we provide improvements for the same. Motivated by observations in recent literature, we first formulate a "clique-closure" hypothesis (viz., hyperlinks are more likely to be formed from near-cliques rather than from non-cliques), test it on real hypergraphs, and then exploit it for our very problem. In the process, we generalize hyperlink prediction on two fronts: (1) from small-sized to arbitrary-sized hyperlinks, and (2) from a couple of domains to a handful. We perform experiments (both the hypothesis-test as well as the hyperlink prediction) on multiple real datasets, report results, and provide both quantitative and qualitative arguments favoring better performances w.r.t. the state-of-the-art.