
Dissertations / Theses on the topic 'Maximum Number of Nodes'


Consult the top 50 dissertations / theses for your research on the topic 'Maximum Number of Nodes.'


1

Bekos, Michael A., Michael Kaufmann, Stephen G. Kobourov, Konstantinos Stavropoulos, and Sankar Veeramoni. "The maximum k-differential coloring problem." ELSEVIER SCIENCE BV, 2017. http://hdl.handle.net/10150/626126.

Full text
Abstract:
Given an n-vertex graph G and two positive integers d, k ∈ N, the (d, kn)-differential coloring problem asks for a coloring of the vertices of G (if one exists) with distinct numbers from 1 to kn (treated as colors), such that the minimum difference between the two colors of any adjacent vertices is at least d. While it was known that the problem of determining whether a general graph is (2, n)-differential colorable is NP-complete, our main contribution is a complete characterization of bipartite, planar and outerplanar graphs that admit (2, n)-differential colorings. For practical reasons, we also consider color ranges larger than n, i.e., k > 1. We show that it is NP-complete to determine whether a graph admits a (3, 2n)-differential coloring. The same negative result holds for the (⌊2n/3⌋, 2n)-differential coloring problem, even in the case where the input graph is planar.
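To make the decision problem concrete, here is a minimal brute-force Python sketch (not the authors' algorithm, and practical only for very small graphs) that searches for a (d, kn)-differential coloring; the 4-vertex path used as an example is hypothetical.

from itertools import permutations

def is_differential_coloring(adj, colors, d):
    # Every edge must receive colors differing by at least d.
    return all(abs(colors[u] - colors[v]) >= d for u in adj for v in adj[u])

def find_differential_coloring(adj, d, k=1):
    # Brute-force search over all assignments of distinct colors 1..k*n.
    n = len(adj)
    vertices = list(adj)
    for perm in permutations(range(1, k * n + 1), n):
        colors = dict(zip(vertices, perm))
        if is_differential_coloring(adj, colors, d):
            return colors
    return None

# The path on 4 vertices admits a (2, n)-differential coloring (the search prints one).
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(find_differential_coloring(p4, d=2))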
APA, Harvard, Vancouver, ISO, and other styles
2

Nieuwoudt, Isabelle. "On the maximum degree chromatic number of a graph." Thesis, Stellenbosch : Stellenbosch University, 2007. http://hdl.handle.net/10019.1/46214.

Full text
Abstract:
Determining the (classical) chromatic number of a graph (i.e. finding the smallest number of colours with which the vertices of a graph may be coloured so that no two adjacent vertices receive the same colour) is a well-known combinatorial optimization problem and is widely encountered in scheduling problems. Since the late 1960s the notion of the chromatic number has been generalized in several ways by relaxing the restriction of independence of the colour classes. Thesis (DPhil)--Stellenbosch University, 2007.
APA, Harvard, Vancouver, ISO, and other styles
3

GALATI, CONCETTINA. "Number of moduli of families of plane curves with nodes and cusps." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2006. http://hdl.handle.net/2108/210.

Full text
Abstract:
In my Ph.D. thesis I computed the number of moduli of certain families of plane curves with nodes and cusps. Let Σ^n_{k,d} ⊂ P(H^0(P^2, O_{P^2}(n))) := P^N, with N = n(n+3)/2, be the closure, in the Zariski topology, of the locally closed set of reduced and irreducible plane curves of degree n with k cusps and d nodes. We recall that, if k = 0, the varieties V_{n,g} = Σ^n_{0,d} are called the Severi varieties of irreducible plane curves of degree n and geometric genus g = binom(n−1, 2) − d. Let Σ ⊂ Σ^n_{k,d} be an irreducible component of Σ^n_{k,d} and let g = binom(n−1, 2) − d − k be the geometric genus of the plane curve corresponding to the general point of Σ. A rational map Π_Σ : Σ ⇢ M_g is naturally defined, sending the general point [Γ] ∈ Σ to the isomorphism class of the normalization of the plane curve Γ corresponding to the point [Γ]. We set number of moduli of Σ := dim(Π_Σ(Σ)). If k < 3n, then (1) dim(Π_Σ(Σ)) ≤ min(dim(M_g), dim(M_g) + ρ − k), where ρ := ρ(2, g, n) = 3n − 2g − 6 is the Brill-Noether number of the linear series of degree n and dimension 2 on a smooth curve of genus g. We say that Σ has the expected number of moduli if equality holds in (1). By classical Brill-Noether theory when ρ is positive, and by a well-known result of Sernesi when ρ ≤ 0, we have that Σ^n_{0,d} (which is irreducible) has the expected number of moduli for every d ≤ binom(n−1, 2). Working out the main ideas and techniques that Sernesi uses in [1], under the hypothesis k > 0, in my Ph.D. thesis I find sufficient conditions for an irreducible component Σ ⊂ Σ^n_{k,d} to have the expected number of moduli. If Σ satisfies these properties, then ρ ≤ 0. By using induction on the degree n and on the genus g of the general curve of the family, I prove that, if ρ ≤ 0 and k ≤ 6, then there exists at least one irreducible component of Σ^n_{k,d} with expected number of moduli equal to 3g − 3 + ρ − k. By using this result and a result of Eisenbud and Harris, from which it follows that, if ρ is positive enough and k ≤ 3, then dim(Π_Σ(Σ)) = 3g − 3, I prove that Σ^n_{1,d} (which is irreducible) has the expected number of moduli for every d ≤ binom(n−1, 2), i.e. for every ρ. I am extending this result to the case k ≤ 3. Finally, I consider the case of irreducible sextics with six cusps. It is classically known that Σ^6_{6,0} contains at least two irreducible components Σ_1 and Σ_2. The general point of Σ_1 parametrizes a sextic with six cusps on a conic, whereas the general element of Σ_2 corresponds to a sextic with six cusps not on a conic. I prove that Σ_1 and Σ_2 have the expected number of moduli. I do not yet know an example of an irreducible complete family of plane curves with nodes and cusps having number of moduli smaller than the expected one. Finally, in the first sections of my thesis, following essentially Zariski's papers, I introduce classical techniques used to study and describe the geometry of a family of plane curves with assigned singularities. Then I briefly summarize the more modern results by Wahl on families of plane curves with nodes and cusps. I also give some applications of Horikawa deformation theory to the study of deformations of plane curves. Finally, I devote a section of my thesis to the versal deformation family of a plane curve singularity. In particular, by using the results of [3] and [2] and a simple argument of projective geometry, I prove that in the equigeneric locus of the étale versal deformation space B of an ordinary plane curve singularity there are only points corresponding to plane curves with only ordinary multiple points. This result seems to be known, but I have not found a proof of it in the literature.
References: [1] E. Sernesi, On the existence of certain families of curves, Invent. Math. 75 (1984). [2] A. Morelli, Un'osservazione sulle singolarità delle trasformate birazionali di una curva algebrica, Rend. Acc. Sci. Napoli, serie 4, vol. 29 (1962), p. 59-64. [3] A. Franchetta, Osservazioni sui punti doppi delle superfici algebriche, Rend. Acc. dei Lincei, gennaio 1946.
APA, Harvard, Vancouver, ISO, and other styles
4

Farzad, Babak. "When the chromatic number is close to the maximum degree." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58773.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Owens, Kayla Denise. "Properties of the Zero Forcing Number." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2216.

Full text
Abstract:
The zero forcing number is a graph parameter first introduced as a tool for solving the minimum rank problem, which is: Given a simple, undirected graph G and a field F, let S(F,G) denote the set of all symmetric matrices A=[a_{ij}] with entries in F such that a_{ij} does not equal 0 if and only if ij is an edge in G. Find the minimum possible rank of a matrix in S(F,G). It is known that the zero forcing number Z(G) provides an upper bound for the maximum nullity of a graph. I investigate properties of the zero forcing number, including its behavior under various graph operations.
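As a quick illustration of the parameter itself (not of the matrix-theoretic results in the thesis), the following Python sketch applies the standard colour-change rule and brute-forces Z(G) on tiny example graphs.

from itertools import combinations

def forces_all(adj, start):
    # Colour-change rule: a filled vertex with exactly one unfilled
    # neighbour forces that neighbour to become filled.
    filled = set(start)
    changed = True
    while changed:
        changed = False
        for u in list(filled):
            unfilled = [v for v in adj[u] if v not in filled]
            if len(unfilled) == 1:
                filled.add(unfilled[0])
                changed = True
    return len(filled) == len(adj)

def zero_forcing_number(adj):
    # Smallest cardinality of a set that forces the whole graph (tiny graphs only).
    vertices = list(adj)
    for size in range(1, len(vertices) + 1):
        for subset in combinations(vertices, size):
            if forces_all(adj, subset):
                return size

# The path P4 has Z = 1 (an endpoint forces everything); the 4-cycle has Z = 2.
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(zero_forcing_number(p4), zero_forcing_number(c4))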
APA, Harvard, Vancouver, ISO, and other styles
6

Katzenbeisser, Walter, and Wolfgang Panny. "On the Number of Times where a simple Random Walk reaches its Maximum." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1990. http://epub.wu.ac.at/834/1/document.pdf.

Full text
Abstract:
Let Q_n denote the number of times where a simple random walk reaches its maximum, where the random walk starts at the origin and returns to the origin after 2n steps. Such random walks play an important role in probability and statistics. In this paper the distribution and the moments of Q_n are considered and their asymptotic behavior is studied. (author's abstract) Series: Forschungsberichte / Institut für Statistik
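Under one common reading of Q_n (the number of time points at which the 2n-step bridge sits at its maximum level), the distribution can be approximated by simulation. The following Python sketch is illustrative only and is not taken from the paper.

import random
from collections import Counter

def sample_Q(n):
    # One 2n-step simple random walk bridge: n up-steps and n down-steps in
    # uniformly random order; count the time points at the maximum level.
    steps = [1] * n + [-1] * n
    random.shuffle(steps)
    pos, path = 0, [0]
    for s in steps:
        pos += s
        path.append(pos)
    m = max(path)
    return sum(1 for x in path if x == m)

random.seed(0)
n, trials = 20, 50_000
dist = Counter(sample_Q(n) for _ in range(trials))
for q in sorted(dist):
    print(q, dist[q] / trials)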
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Jinhua. "A Wide Input Power Line Energy Harvesting Circuit For Wireless Sensor Nodes." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103426.

Full text
Abstract:
Massive deployment of wireless IoT (Internet of Things) devices makes replacement or recharging of batteries expensive and impractical for some applications. Energy harvesting is a promising solution, and various designs have been proposed to harvest power from ambient resources including thermal, vibrational, solar, wind, and RF sources. Among these ambient resources, AC power lines are a stable energy source in an urban environment. Many researchers have investigated methods to exploit this stable source of energy to power wireless IoT devices. The proposed circuit aims to harvest energy from AC power lines with a wide input range of 10 to 50 A. The proposed system includes a wake-up circuit and is capable of cold start. A buck-boost converter operating in DCM is adopted for impedance matching, where the impedance is largely independent of the operating conditions. Hence, the proposed system can be applied to various types of wireless sensor nodes with different internal impedances. Experimental results show that the proposed system achieves an efficiency of 80.99% under a power-line current of 50 A. Nowadays, with the magnificent growth of IoT devices, a reliable and efficient energy supply system becomes more and more important, because, for some applications, battery replacement is very expensive and sometimes even impossible. In this case, a well-designed self-contained energy harvesting system is a good solution. The energy harvesting system can extend the service life of the IoT devices and reduce the frequency of charging or checking the device. In this work, the proposed circuit aims to harvest energy from the AC power lines, and the harvested power is intended to power wireless sensor nodes (WSNs). By utilizing the efficient and self-contained EH system, WSNs can be used to monitor temperature, pressure, noise level, humidity, etc. The proposed energy harvesting circuit was implemented with discrete components on a printed circuit board (PCB). Under a power line current of 50 A @ 50 Hz, the proposed energy harvesting circuit can harvest 156.6 mW, with a peak efficiency of 80.99%.
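The impedance-matching property mentioned above reflects a standard result for an ideal buck-boost converter in discontinuous conduction mode: the averaged input port behaves like a resistor R_in = 2·L·f_sw/D², independent of the load. The sketch below uses illustrative component values, not the design values of the thesis.

def dcm_buck_boost_input_resistance(L, f_sw, duty):
    # Emulated input resistance of an ideal buck-boost converter in DCM:
    # R_in = 2 * L * f_sw / D**2, independent of the load.
    return 2.0 * L * f_sw / duty ** 2

# Illustrative component values only (not the values used in the thesis):
L, f_sw, D = 47e-6, 100e3, 0.35
print("R_in ~ %.1f ohm" % dcm_buck_boost_input_resistance(L, f_sw, D))

# Efficiency relation behind the reported figure:
p_out = 156.6e-3            # harvested power quoted in the abstract (W)
p_in = p_out / 0.8099       # implied input power at 80.99 % efficiency
print("P_in ~ %.1f mW" % (1e3 * p_in))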
APA, Harvard, Vancouver, ISO, and other styles
8

Hernandez, Baez Diana Margarita. "Establishing the maximum carbon number for reliable quantitative gas chromatographic analysis of heavy ends hydrocarbons." Thesis, Heriot-Watt University, 2013. http://hdl.handle.net/10399/2674.

Full text
Abstract:
This Thesis investigates the two main limitations of high temperature gas chromatography (HTGC) in the analysis of heavy n-alkanes: pyrolysis inside the GC column and incomplete elution. The former is studied by developing and reducing a radical pyrolysis model (7055 reactions) into a molecular pyrolysis model (127 reactions) capable of predicting low conversions of (nC14H30-nC80H162) at temperatures up to 430°C. Validation of predicted conversion with literature data for nC14H30, nC16H34 and nC25H52 yielded an error lower than 5.4%. The latter is addressed by developing an analytical model which solves recursively the diffusion and convection phenomena separately. The model is capable of predicting the position and molar distribution of components, using as main input the analytes’ distribution factors and yielded an error lower than 4.4% in the prediction of retention times. This thesis provides an extension of the data set of distribution factors of (nC12H26– nC98H198) in a SGE HT5 GC capillary column, based on isothermal GC measurements at both constant inlet pressure and flow rate. Finally, the above two models were coupled, yielding a maximum mass lost of 1.3 % in the case of nC80H162 due to pyrolysis and complete elution up to nC70H142, in a 12 m HT5 column.
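For orientation only, a single-lump first-order Arrhenius sketch of thermal cracking is shown below; the thesis uses a 127-reaction molecular model, and the pre-exponential factor and activation energy here are placeholders, not fitted values from the thesis.

import math

R = 8.314  # J/(mol K)

def first_order_conversion(T_kelvin, residence_time_s, A=1.0e14, Ea=250e3):
    # Single-lump first-order cracking: X = 1 - exp(-k t), with an Arrhenius
    # rate constant k = A * exp(-Ea / (R T)).  A and Ea are placeholders.
    k = A * math.exp(-Ea / (R * T_kelvin))
    return 1.0 - math.exp(-k * residence_time_s)

for T_c in (350, 400, 430):
    X = first_order_conversion(T_c + 273.15, 60.0)
    print(T_c, "degC ->", round(100 * X, 4), "% conversion in 60 s")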
APA, Harvard, Vancouver, ISO, and other styles
9

Ozisik, Sevtap. "Fully Computable Convergence Analysis Of Discontinous Galerkin Finite Element Approximation With An Arbitrary Number Of Levels Of Hanging Nodes." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614345/index.pdf.

Full text
Abstract:
In this thesis, we analyze an adaptive discontinuous finite element method for symmetric second order linear elliptic operators. Moreover, we obtain a fully computable convergence analysis of the broken energy seminorm in first order symmetric interior penalty discontinuous Galerkin finite element approximations of this problem. The method is formulated on nonconforming meshes made of triangular elements with first order polynomials in two dimensions. We use an estimator which is completely free of unknown constants and provides a guaranteed numerical bound on the broken energy norm of the error. This estimator is also shown to provide a lower bound for the broken energy seminorm of the error up to a constant and higher order data oscillation terms. Consequently, the estimator yields fully reliable, quantitative error control along with efficiency. As a second problem, explicit expressions for the constants of the inverse inequality are given in 1D, 2D and 3D. Increasing mathematical analysis of finite element methods is motivating the inclusion of mesh-dependent terms in new classes of methods for a variety of applications. Several inequalities of functional analysis are often employed in convergence proofs. Inverse estimates have been used extensively in the analysis of finite element methods. They are characterized as tools for the error analysis and practical design of finite element methods with terms that depend on the mesh parameter. Sharp estimates of the constants of this inequality are provided in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
10

Marrikie, Rami, and Waled Rached. "A comparative study on Tor’s client compromise rates when changing the number of guard nodes and their rotation time." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302417.

Full text
Abstract:
The Tor Project is a non-profit organization with the belief that Internet users should have private access to an uncensored Web. Tor accomplishes this through onion routing, which uses multi-layered encryption. Tor's design is not flawless, and different attacks are being performed to deanonymize users. In this report, we studied one of the key strategies that Tor implemented to reduce the number of such attacks. This strategy is about assigning a list containing three guard nodes to each client. To find a lower client compromise rate, software called COGS was used for simulation purposes. In our simulations, we changed the time interval in which guard nodes are assigned to a user, either by decreasing it to 15 - 30 days or increasing it to 60 - 90 days. Another parameter we changed was the number of guard nodes in the client's guard node list, either by decreasing it to one guard node or increasing it to five guard nodes. After plotting the output data from our simulations, we conclude that decreasing the number of guard nodes in a client's guard list while increasing the guard rotation time yields the lowest client compromise rate possible. This setup could harm the performance of the Tor network, since guard nodes would be accumulating clients over time; therefore, a bigger study that includes other factors like performance should be conducted to find a better balance between anonymity and performance.
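A toy Monte Carlo model reproduces the qualitative trade-off described above: fewer guards and slower rotation lower the chance of ever picking a malicious guard. This is an illustrative sketch, not the COGS simulator; the adversary fraction, time horizon and rotation periods are made-up parameters.

import random

def compromise_rate(num_guards, rotation_days, horizon_days=360,
                    adversary_fraction=0.05, clients=5_000, seed=1):
    # Fraction of clients that pick at least one adversary-controlled guard
    # during the horizon, with the whole guard list resampled every rotation.
    rng = random.Random(seed)
    rotations = max(1, horizon_days // rotation_days)
    compromised = 0
    for _ in range(clients):
        hit = any(rng.random() < adversary_fraction
                  for _ in range(rotations * num_guards))
        compromised += hit
    return compromised / clients

for g in (1, 3, 5):
    for rot in (22, 45, 75):   # illustrative midpoints of the rotation settings
        print(g, rot, round(compromise_rate(g, rot), 3))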
APA, Harvard, Vancouver, ISO, and other styles
11

Short, Taylor. "KE Theory & the Number of Vertices Belonging to All Maximum Independent Sets in a Graph." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/2353.

Full text
Abstract:
For a graph $G$, let $\alpha (G)$ be the cardinality of a maximum independent set, let $\mu (G)$ be the cardinality of a maximum matching and let $\xi (G)$ be the number of vertices belonging to all maximum independent sets. Boros, Golumbic and Levit showed that in connected graphs where the independence number $\alpha (G)$ is greater than the matching number $\mu (G)$, $\xi (G) \geq 1 + \alpha(G) - \mu (G)$. For any graph $G$, we will show there is a distinguished induced subgraph $G[X]$ such that, under weaker assumptions, $\xi (G) \geq 1 + \alpha (G[X]) - \mu (G[X])$. Furthermore $1 + \alpha (G[X]) - \mu (G[X]) \geq 1 + \alpha (G) - \mu (G)$ and the difference between these bounds can be arbitrarily large. Lastly, some results toward a characterization of graphs with equal independence and matching numbers are given.
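The quantities involved are easy to compute exhaustively on small graphs. The following Python sketch checks the bound ξ(G) ≥ 1 + α(G) − μ(G) on the star K_{1,3}; it is illustrative only and not part of the thesis.

from itertools import combinations

def alpha_mu_xi(vertices, edges):
    # Brute force: independence number, matching number, and the number of
    # vertices lying in every maximum independent set (tiny graphs only).
    edge_set = {frozenset(e) for e in edges}

    def independent(S):
        return not any(frozenset(p) in edge_set for p in combinations(S, 2))

    alpha = max(len(S) for r in range(len(vertices) + 1)
                for S in combinations(vertices, r) if independent(S))
    max_ind_sets = [set(S) for S in combinations(vertices, alpha) if independent(S)]
    xi = len(set.intersection(*max_ind_sets))

    def is_matching(M):
        used = [v for e in M for v in e]
        return len(used) == len(set(used))

    mu = max(len(M) for r in range(len(edges) + 1)
             for M in combinations(list(edge_set), r) if is_matching(M))
    return alpha, mu, xi

# Star K_{1,3}: alpha = 3, mu = 1, xi = 3 >= 1 + alpha - mu = 3.
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (0, 3)]
a, m, x = alpha_mu_xi(V, E)
print(a, m, x, x >= 1 + a - m)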
APA, Harvard, Vancouver, ISO, and other styles
12

Eranki, Anitha. "A model to create bus timetables to attain maximum synchronization considering waiting times at transfer stops." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Poliquit, Elmer S. "A method for solving the minimization of the maximum number of open stacks problem within a cutting process." View electronic thesis, 2008. http://dl.uncw.edu/etd/2008-1/r1/poliquite/elmerpoliquit.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Lê, Ngoc C. "Algorithms for the Maximum Independent Set Problem." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2015. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-172639.

Full text
Abstract:
This thesis focuses mainly on the Maximum Independent Set (MIS) problem. Some related graph theoretical combinatorial problems are also considered. As these problems are generally NP-hard, we study their complexity in hereditary graph classes, i.e. graph classes defined by a set F of forbidden induced subgraphs. We review the literature on the issue, for example complexity results, applications, and techniques tackling the problem. By considering some general approaches, we exhibit several cases where the problem admits a polynomial-time solution. More specifically, we present polynomial-time algorithms for the MIS problem in: + some subclasses of $S_{2;j;k}$-free graphs (thus generalizing the classical result for $S_{1;2;k}$-free graphs); + some subclasses of $tree_{k}$-free graphs (thus generalizing the classical results for subclasses of P5-free graphs); + some subclasses of $P_{7}$-free graphs and $S_{2;2;2}$-free graphs; and various subclasses of graphs of bounded maximum degree, for example subcubic graphs. Our algorithms are based on various approaches. In particular, we characterize augmenting graphs in a subclass of $S_{2;k;k}$-free graphs and a subclass of $S_{2;2;5}$-free graphs. These characterizations are partly based on extensions of the concept of a redundant set [125]. We also propose methods for finding augmenting chains, an extension of the method in [99], and for finding augmenting trees, an extension of the methods in [125]. We apply the augmenting vertex technique, originally used for $P_{5}$-free graphs or banner-free graphs, to some more general graph classes. We consider a general graph theoretical combinatorial problem, the so-called Maximum $\Pi$-Set problem. Two special cases of this problem, the so-called Maximum F-(Strongly) Independent Subgraph and Maximum F-Induced Subgraph, where F is a connected graph set, are considered. The complexity of the Maximum F-(Strongly) Independent Subgraph problem is reviewed and the NP-hardness of the Maximum F-Induced Subgraph problem is proved. We also extend the augmenting approach to apply it to the general Maximum $\Pi$-Set problem. We review classical graph transformations and give two unified views based on pseudo-boolean functions and $\alpha$-redundant vertices. We also make extensive use of $\alpha$-redundant vertices, originally mainly used for $P_{5}$-free graphs, to give polynomial solutions for some subclasses of $S_{2;2;2}$-free graphs and $tree_{k}$-free graphs. We consider some classical sequential greedy heuristic methods. We also combine classical algorithms with $\alpha$-redundant vertices to obtain new strategies for choosing the next vertex in greedy methods. Some aspects of the algorithms, for example forbidden induced subgraph sets and worst-case results, are also considered. Finally, we restrict our attention to graphs of bounded maximum degree and subcubic graphs. Then, by using some techniques, for example $\alpha$-redundant vertices, clique separators, and arguments based on distance, we generalize these results to some subclasses of $S_{i;j;k}$-free subcubic graphs.
APA, Harvard, Vancouver, ISO, and other styles
15

Thornton, Victoria Claire. "In search of a system which acquires the maximum number of organs and is consistent with a society's values." Thesis, Keele University, 2015. http://eprints.keele.ac.uk/2346/.

Full text
Abstract:
In 2008, the Organ Donation Taskforce was asked to consider the impact of introducing an opt-out system for organ donation in the United Kingdom. The Taskforce conducted a thorough investigation, which included information gathering from both the public and experts in the field of healthcare, ethics and law and a thorough appraisal of the countries currently operating an opt-out system. Having reviewed this evidence the ODT conceded that whilst the numbers of organs generated may increase under an opt-out system, conversely, because of the way the system actually works, they felt there was a risk that its introduction may cause a backlash amongst the general public resulting in a decrease in organ donations. They based their concerns around fears that such a system would remove the potential for spontaneous acts of goodwill, denying people the opportunity to give a gift, and may deny the opportunity for individuals to determine whether their organs should be donated, thereby precluding choice and the right to self-determination. This might ultimately compromise public trust in the system. This thesis challenges the assumptions made by the Organ Donation Taskforce in respect of introducing an opt-out system. It casts doubt on their claims about compromising privacy interests and then looks to reconcile the potential issues which may arise under an opt-out system; these are preventing the choice to act altruistically and acting in such a way as to undermine public trust. Both of these may result in policy failure. It will advocate a system which addresses the issues raised by the ODT and acts to provide respect for self-determination; this is a soft opt-out system with a combined registry. Such a system would increase the supply of organs for those in need of a transplant, and remain consistent with a society's values in terms of demonstrating respect for individual choice regarding donation.
APA, Harvard, Vancouver, ISO, and other styles
16

McPhillips, Kenneth J. "Far field shallow water horizontal wave number estimation given a linear towed array using fast maximum likelihood, matrix pencil, and subspace fitting techniques /." View online ; access limited to URI, 2007. http://0-digitalcommons.uri.edu.helin.uri.edu/dissertations/AAI3276997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Rix, James Gregory. "Hypercube coloring and the structure of binary codes." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2809.

Full text
Abstract:
A coloring of a graph is an assignment of colors to its vertices so that no two adjacent vertices are given the same color. The chromatic number of a graph is the least number of colors needed to color all of its vertices. Graph coloring problems can be applied to many real world applications, such as scheduling and register allocation. Computationally, the decision problem of whether a general graph is m-colorable is NP-complete for m ≥ 3. The graph studied in this thesis is a well-known combinatorial object, the k-dimensional hypercube, Q_k. The hypercube itself is 2-colorable for all k; however, coloring the square of the cube is a much more interesting problem. This is the graph in which the vertices are binary vectors of length k, and two vertices are adjacent if and only if the Hamming distance between the two vectors is at most 2. Any color class in a coloring of Q_k^2 is a binary (k, M, 3) code. This thesis will begin with an introduction to binary codes and their structure. One of the most fundamental combinatorial problems is finding optimal binary codes, that is, binary codes with the maximum cardinality satisfying a specified length and minimum distance. Many upper and lower bounds have been produced, and we will analyze and apply several of these. This leads to many interesting results about the chromatic number of the square of the cube. The smallest k for which the chromatic number of Q_k^2 is unknown is k = 8; however, it can be determined that this value is either 13 or 14. Computational approaches to determine the chromatic number of Q_8^2 were performed. We were unable to determine whether 13 or 14 is the true value; however, much valuable insight was learned about the structure of this graph and the computational difficulty that lies within. Since a 13-coloring of Q_8^2 must have between 9 and 12 color classes being (8, 20, 3) binary codes, this led to a thorough investigation of the structure of such binary codes.
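The code-coloring connection invites a quick sanity check: the 16 codewords of a [7,4,3] Hamming code have pairwise Hamming distance at least 3, so they form a single colour class of Q_7^2; since A(7,3) = 16 this yields chi(Q_7^2) ≥ 128/16 = 8 (and the 8 cosets of the code in fact give a proper 8-colouring). The generator matrix below is one standard choice, not taken from the thesis.

from itertools import combinations

def hamming(u, v):
    return bin(u ^ v).count("1")

def is_distance3_code(words):
    # A colour class in a proper colouring of Q_k^2 must be a set of binary
    # k-vectors with pairwise Hamming distance >= 3, i.e. a (k, M, 3) code.
    return all(hamming(u, v) >= 3 for u, v in combinations(words, 2))

# All 16 codewords of a [7,4,3] Hamming code, generated from 4 generator rows.
G = [0b1000011, 0b0100101, 0b0010110, 0b0001111]
code = []
for mask in range(16):
    w = 0
    for i in range(4):
        if mask >> i & 1:
            w ^= G[i]
    code.append(w)
print(len(code), is_distance3_code(code))   # 16 True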
APA, Harvard, Vancouver, ISO, and other styles
18

Sinkovic, John Henry. "The Minimum Rank Problem for Outerplanar Graphs." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3722.

Full text
Abstract:
Given a simple graph G with vertex set V(G)={1,2,...,n}, define S(G) to be the set of all real symmetric matrices A such that for all i not equal to j, the ijth entry of A is nonzero if and only if ij is in E(G). The range of the ranks of matrices in S(G) is of interest and can be determined by finding the minimum rank. The minimum rank of a graph, denoted mr(G), is the minimum rank achieved by a matrix in S(G). The maximum nullity of a graph, denoted M(G), is the maximum nullity achieved by a matrix in S(G). Note that mr(G)+M(G)=|V(G)|, and so in finding the maximum nullity of a graph, the minimum rank of a graph is also determined. The minimum rank problem for a graph G asks us to determine mr(G), which in general is very difficult. A simple graph is planar if there exists a drawing of G in the plane such that any two line segments representing edges of G intersect only at a point which represents a vertex of G. A planar drawing partitions the rest of the plane into open regions called faces. A graph is outerplanar if there exists a planar drawing of G such that every vertex lies on the outer face. We consider the class of outerplanar graphs and summarize some of the recent results concerning the minimum rank problem for this class. The path cover number of a graph, denoted P(G), is the minimum number of vertex-disjoint paths needed to cover all the vertices of G. We show that for all outerplanar graphs G, P(G) is greater than or equal to M(G). We identify a subclass of outerplanar graphs, called partial 2-paths, for which P(G)=M(G). We give a different characterization for another subset of outerplanar graphs, unicyclic graphs, which determines whether M(G)=P(G) or M(G)=P(G)-1. We give an example of a 2-connected outerplanar graph for which P(G) > M(G). A cover of a graph G is a collection of subgraphs of G such that the union of the edge sets of the subgraphs is equal to E(G). The rank-sum of a cover C of G is denoted rs(C) and is equal to the sum of the minimum ranks of the subgraphs in C. We show that for an outerplanar graph G, there exists an edge-disjoint cover of G consisting of cliques, stars, cycles, and double cycles such that the rank-sum of the cover is equal to the minimum rank of G. Using the fact that such a cover exists allows us to show that the minimum rank of a weighted outerplanar graph is equal to the minimum rank of its underlying simple graph.
APA, Harvard, Vancouver, ISO, and other styles
19

Johannsen, Fabian, and Mattias Hellsing. "Hadoop Read Performance During Datanode Crashes." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130466.

Full text
Abstract:
This bachelor thesis evaluates the impact of datanode crashes on the performance of the read operations of a Hadoop Distributed File System, HDFS. The goal is to better understand how datanode crashes, as well as certain parameters, affect the performance of the read operation by looking at the execution time of the get command. The parameters used are the number of crashed nodes, block size and file size. By setting up a Linux test environment with ten virtual machines with Hadoop installed on them and running tests on it, data has been collected in order to answer these questions. From this data the average execution time and standard deviation of the get command were calculated. The network activity during the tests was also measured. The results showed that neither the number of crashed nodes nor the block size had any significant effect on the execution time. They also demonstrated that the execution time of the get command was not directly proportional to the size of the fetched file: a four times larger file sometimes resulted in a more than four times longer execution time (up to 4.5 times as long). However, the consequences of a datanode crash while fetching a small file appear to be much greater than with a large file. The average execution time increased by up to 36% when a large file was fetched, but it increased by as much as 85% when fetching a small file.
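The measurement itself amounts to timing the HDFS get command; a minimal sketch is shown below. It assumes a running cluster with the hdfs client on PATH, and the file paths are placeholders, not the ones used in the thesis.

import subprocess, time

def time_hdfs_get(hdfs_path, local_path):
    # Time one read of a file from HDFS using the standard command-line client.
    start = time.monotonic()
    subprocess.run(["hdfs", "dfs", "-get", hdfs_path, local_path], check=True)
    return time.monotonic() - start

# Placeholder paths; repeat the measurement and average, as in the thesis setup.
# times = [time_hdfs_get("/benchmark/file_1GB", "/tmp/file_1GB_%d" % i) for i in range(10)]
# print(sum(times) / len(times), max(times))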
APA, Harvard, Vancouver, ISO, and other styles
20

Kreacic, Eleonora. "Some problems related to the Karp-Sipser algorithm on random graphs." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:3b2eb52a-98f5-4af8-9614-e4909b8b9ffa.

Full text
Abstract:
We study certain questions related to the performance of the Karp-Sipser algorithm on the sparse Erdős-Rényi random graph. The Karp-Sipser algorithm, introduced by Karp and Sipser [34], is a greedy algorithm which aims to obtain a near-maximum matching on a given graph. The algorithm evolves through a sequence of steps. In each step, it picks an edge according to a certain rule, adds it to the matching and removes it from the remaining graph. The algorithm stops when the remaining graph is empty. In [34], the performance of the Karp-Sipser algorithm on the Erdős-Rényi random graphs G(n, M = ⌊cn/2⌋) and G(n, p = c/n), c > 0, is studied. It is proved there that the algorithm behaves near-optimally, in the sense that the difference between the size of a matching obtained by the algorithm and a maximum matching is at most o(n), with high probability as n → ∞. The main result of [34] is a law of large numbers for the size of a maximum matching in G(n, M = cn/2) and G(n, p = c/n), c > 0. Aronson, Frieze and Pittel [2] further refine these results. In particular, they prove that for c < e, the Karp-Sipser algorithm obtains a maximum matching, with high probability as n → ∞; for c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching of G(n, M = cn/2) is of order Θ_{log n}(n^{1/5}), with high probability as n → ∞. They further conjecture a central limit theorem for the size of a maximum matching of G(n, M = cn/2) and G(n, p = c/n) for all c > 0. As noted in [2], the central limit theorem for c < 1 is a consequence of the result of Pittel [45]. In this thesis, we prove a central limit theorem for the size of a maximum matching of both G(n, M = cn/2) and G(n, p = c/n) for c > e. (We do not analyse the case 1 ≤ c ≤ e.) Our approach is based on further analysis of the Karp-Sipser algorithm. We use the results from [2] and refine them. For c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching is of order Θ_{log n}(n^{1/5}), with high probability as n → ∞, and the study [2] suggests that this difference is accumulated at the very end of the process. The question of how the Karp-Sipser algorithm evolves in its final stages for c > e motivated us to consider the following problem in this thesis. We study a model for the destruction of a random network by fire. Let us assume that we have a multigraph with minimum degree at least 2 with real-valued edge-lengths. We first choose a uniform random point from along the length and set it alight. The edges burn at speed 1. If the fire reaches a node of degree 2, it is passed on to the neighbouring edge. On the other hand, a node of degree at least 3 passes the fire either to all its neighbours or none, each with probability 1/2. If the fire extinguishes before the graph is burnt, we again pick a uniform point and set it alight. We study this model in the setting of a random multigraph with N nodes of degree 3 and α(N) nodes of degree 4, where α(N)/N → 0 as N → ∞. We assume the edges to have i.i.d. standard exponential lengths.
We are interested in the asymptotic behaviour of the number of fires we must set alight in order to burn the whole graph, and the number of points which are burnt from two different directions. Depending on whether α(N) ≫ √N or not, we prove that after suitable rescaling these quantities converge jointly in distribution to either a pair of constants or to (complicated) functionals of Brownian motion. Our analysis supports the conjecture that the difference between the size of a matching obtained by the Karp-Sipser algorithm and the size of a maximum matching of the Erdős-Rényi random graph G(n, M = cn/2) for c > e, rescaled by n^{1/5}, converges in distribution.
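The greedy rule described above is simple to state in code. The sketch below is a straightforward, non-optimized Python rendition of the Karp-Sipser rule run on a sampled G(n, p = c/n) instance; it illustrates the algorithm only, not any of the thesis's probabilistic analysis.

import random

def karp_sipser(adj, seed=0):
    # Karp-Sipser rule: prefer an edge at a degree-1 (pendant) vertex;
    # otherwise take a uniformly random edge.  Matched endpoints are removed
    # and the rule is repeated until no edges remain.
    rng = random.Random(seed)
    adj = {u: set(vs) for u, vs in adj.items()}
    matching = []
    while True:
        degree_one = [u for u, vs in adj.items() if len(vs) == 1]
        if degree_one:
            u = rng.choice(degree_one)
            v = next(iter(adj[u]))
        else:
            edges = [(u, v) for u, vs in adj.items() for v in vs if u < v]
            if not edges:
                break
            u, v = rng.choice(edges)
        matching.append((u, v))
        for w in (u, v):
            for x in adj[w]:
                adj[x].discard(w)
            adj[w] = set()
    return matching

# Sample an Erdos-Renyi graph G(n, p = c/n) and run the algorithm on it.
n, c = 1000, 3.0
rng = random.Random(42)
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < c / n:
            adj[i].add(j)
            adj[j].add(i)
print(len(karp_sipser(adj)), "matched edges out of at most", n // 2)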
APA, Harvard, Vancouver, ISO, and other styles
21

Albishi, Njwd. "Three-and four-derivative Hermite-Birkhoff-Obrechkoff solvers for stiff ODE." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34332.

Full text
Abstract:
Three- and four-derivative k-step Hermite-Birkhoff-Obrechkoff (HBO) methods are constructed for solving stiff systems of first-order differential equations of the form y' = f(t, y), y(t_0) = y_0. These methods use higher derivatives of the solution y as in Obrechkoff methods. We compute their regions of absolute stability and show the three- and four-derivative HBO are A(α)-stable with α > 71° and α > 78° respectively. We conduct numerical tests and show that our new methods are more efficient than several existing well-known methods.
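Given the stability function R(z) of a one-step method, the A(α) angle can be estimated numerically by sampling rays in the left half-plane. The HBO stability functions are not reproduced in the abstract, so the sketch below uses backward Euler as a stand-in; it illustrates the generic check, not the thesis's stability analysis.

import cmath, math

def a_alpha_angle(R, radii=(0.01, 0.1, 1.0, 10.0, 100.0), n_angles=900):
    # Estimate the A(alpha) angle from a stability function R(z): the largest
    # alpha such that |R(z)| <= 1 whenever |arg(-z)| <= alpha, sampled on rays.
    best = 0.0
    for i in range(n_angles):
        alpha = (i + 1) * (math.pi / 2) / n_angles
        on_rays = all(abs(R(-r * cmath.exp(1j * s * alpha))) <= 1.0 + 1e-12
                      for r in radii for s in (1, -1))
        if not on_rays:
            break
        best = alpha
    return math.degrees(best)

# Backward Euler, R(z) = 1/(1 - z), is A-stable, so the estimate is ~90 degrees.
print(round(a_alpha_angle(lambda z: 1.0 / (1.0 - z)), 1))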
APA, Harvard, Vancouver, ISO, and other styles
22

MORETTI, RICCARDO. "Digital Nonlinear Oscillators: A Novel Class of Circuits for the Design of Entropy Sources in Programmable Logic Devices." Doctoral thesis, Università di Siena, 2021. http://hdl.handle.net/11365/1144376.

Full text
Abstract:
In recent years, cybersecurity is gaining more and more importance. Cryptography is used in numerous applications, such as authentication and encryption of data in communications, access control to restricted or protected areas, electronic payments. It is safe to assume that the presence of cryptographic systems in future technologies will become increasingly pervasive, leading to a greater demand for energy efficiency, hardware reliability, integration, portability, and security. However, this pervasiveness introduces new challenges: the implementation of conventional cryptographic standards approved by NIST requires the achievement of performance in terms of timing, chip area, power and resource consumption that are not compatible with reduced complexity hardware devices, such as IoT systems. In response to this limitation, lightweight cryptography comes into play - a branch of cryptography that provides tailor-made solutions for resource-limited devices. One of the fundamental classes of cryptographic hardware primitives is represented by Random Number Generators (RNGs), that is, systems that provide sequences of integers that are supposed to be unpredictable. The circuits and systems that implement RNGs can be divided into two categories, namely Pseudo Random Number Generators (PRNGs) and True Random Number Generators (TRNGs). PRNGs are deterministic and possibly periodic finite state machines, capable of generating sequences that appear to be random. In other words, a PRNG is a device that generates and repeats a finite random sequence, saved in memory, or generated by calculation. A TRNG, on the other hand, is a device that generates random numbers based on real stochastic physical processes. Typically, a hardware TRNG consists of a mixed-signal circuit that is classified according to the stochastic process on which it is based. Specifically, the most used sources of randomness are chaotic circuits, high jitter oscillators, circuits that measure other stochastic processes. A chaotic circuit is an analog or mixed-signal circuit in which currents and voltages vary over time based on certain mathematical properties. The evolution over time of these currents and voltages can be interpreted as the evolution of the state of a chaotic nonlinear dynamical system. Jitter noise can instead be defined as the deviation of the output signal of an oscillator from its true periodicity, which causes uncertainty in its low-high and high-low transition times. Other possible stochastic processes that a TRNG can use may involve radioactive decay, photon detection, or electronic noise in semiconductor devices. TRNG proposals presented in the literature are typically designed in the form of Application Specific Integrated Circuits (ASICs). On the other hand, in recent years more and more researchers are exploring the possibility of designing TRNGs in Programmable Logic Devices (PLDs). A PLD offers, compared to an ASIC, clear advantages in terms of cost and versatility. At the same time, however, there is currently a widespread lack of trust in these PLD-based architectures, particularly due to strong cryptographic weaknesses found in Ring Oscillator-based solutions. The goal of this thesis is to show how this mistrust does not depend on poor performance in cryptographic terms of solutions for the generation of random numbers based on programmable digital technologies, but rather on a still immature approach in the study of TRNG architectures designed on PLDs. 
During the thesis chapters a new class of nonlinear circuits based on digital hardware is introduced that can be used as entropy sources for TRNGs implemented in PLDs, identified by the denomination of Digital Nonlinear Oscillators (DNOs). In Chapter 2 a novel class of circuits that can be used to design entropy sources for True Random Number Generation, called Digital Nonlinear Oscillators (DNOs), is introduced. DNOs constitute nonlinear dynamical systems capable of supporting complex dynamics in the time-continuous domain, although they are based on purely digital hardware. By virtue of this characteristic, these circuits are suitable for their implementation on Programmable Logic Devices. By focusing the analysis on Digital Nonlinear Oscillators implemented in FPGAs, a preliminary comparison is proposed between three different circuit topologies referable to the introduced class, to demonstrate how circuits of this type can have different characteristics, depending on their dynamical behavior and the hardware implementation. In Chapter 3 a methodology for the analysis and design of Digital Nonlinear Oscillators based on the evaluation of their electronics aspects, their dynamical behavior, and the information they can generate is formalized. The presented methodology makes use of different tools, such as figures of merit, simplified dynamical models, advanced numerical simulations and experimental tests carried out through implementation on FPGA. Each of these tools is analyzed both in its theoretical premises and through explanatory examples. In Chapter 4 the analysis and design methodologies of Digital Nonlinear Oscillators formalized in Chapter 3 are used to describe the complete workflow followed for the design of a novel DNO topology. This DNO is characterized by chaotic dynamical behaviors and can achieve high performance in terms of generated entropy, downstream of a reduced hardware complexity and high sampling frequencies. By exploiting the simplified dynamical model, the advanced numerical simulations in Cadence Virtuoso and the FPGA implementation, the presented topology is extensively analyzed both from a theoretical point of view (notable circuit sub-elements that make up the topology, bifurcation diagrams, internal periodicities) and from an experimental point of view (generated entropy, source autocorrelation, sensitivity to routing, application of standard statistical tests). In Chapter 5 an algorithm, called Maximum Worst-Case Entropy Selector (MWCES), that aims to identify, within a set of entropy sources, which offers the best performance in terms of worst-case entropy, also known in literature as "min-entropy", is presented. This algorithm is designed to be implemented in low-complexity digital architectures, suitable for lightweight cryptographic applications, thus allowing online maximization of the performance of a random number generation system based on Digital Nonlinear Oscillators. This chapter presents the theoretical premises underlying the algorithm formulation, some notable examples of its generic application and, finally, considerations related to its hardware implementation in FPGA.
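The selection criterion behind MWCES can be illustrated offline: estimate the min-entropy of each source from its empirical symbol distribution and keep the source with the largest value. The bit streams below are synthetic stand-ins for sampled DNO outputs, and this sketch is not the low-complexity hardware algorithm described in the thesis.

import math, random
from collections import Counter

def min_entropy(samples):
    # Worst-case (min-)entropy per symbol, estimated from the empirical
    # distribution: H_inf = -log2(max_x Pr[X = x]).
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

def select_best_source(sources):
    # MWCES-style criterion: keep the source whose output currently shows the
    # highest estimated min-entropy (offline sketch only).
    return max(sources, key=lambda name: min_entropy(sources[name]))

rng = random.Random(7)
sources = {                     # toy bit streams standing in for DNO outputs
    "dno_a": [rng.random() < 0.5 for _ in range(10000)],   # nearly unbiased
    "dno_b": [rng.random() < 0.7 for _ in range(10000)],   # biased
}
for name, bits in sources.items():
    print(name, round(min_entropy(bits), 3))
print("selected:", select_best_source(sources))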
APA, Harvard, Vancouver, ISO, and other styles
23

Akyurek, Alper Sinan. "Swim: A New Multicast Routing Algorithm For Wireless Networks." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613348/index.pdf.

Full text
Abstract:
In this work, a new multicast routing algorithm for wireless networks is presented. The algorithm, called SWIM (Source-initiated WIreless Multicast), is a depth-optimal multicast tree formation algorithm. SWIM is fully distributed and has an average computational complexity of O(N^2). SWIM forms a shared tree from the source(s) to the destinations; yet, as a by-product, it creates a multicast mesh structure by maintaining alternative paths at every tree node. This makes SWIM suitable for both ad hoc networks and access networks with multiple gateways. An extension to the main algorithm is presented for use in dynamic networks with mobility and/or a dynamic destination group. The performance of SWIM is studied with simulations and compared to other algorithms in the literature. Due to depth optimality, SWIM achieves a lower average and maximum delay than the compared algorithms. The throughput performance is found to be high. The ability to work with rateless codes is also studied.
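Depth optimality means the shared tree is a shortest-path (BFS) tree rooted at the source. The sketch below builds such a tree centrally on a toy topology; the distributed operation and the mesh of alternative parents that SWIM additionally maintains are omitted.

from collections import deque

def bfs_multicast_tree(adj, source, destinations):
    # Centralised sketch of a depth-optimal multicast tree: run BFS from the
    # source and keep the parent pointers along the paths to the destinations.
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    tree_edges = set()
    for d in destinations:
        while parent.get(d) is not None:
            tree_edges.add((parent[d], d))
            d = parent[d]
    return tree_edges

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
print(bfs_multicast_tree(adj, source=0, destinations=[4, 5]))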
APA, Harvard, Vancouver, ISO, and other styles
24

Gbenga, Abiodun J. "Mathematical modeling and analysis of HIV/AIDS control measures." Thesis, University of the Western Cape, 2012. http://hdl.handle.net/11394/4016.

Full text
Abstract:
In this thesis, we investigate the HIV/AIDS epidemic in a population which experiences a significant flow of immigrants. We derive and analyse a mathematical model that describes the dynamics of HIV infection among immigrant youths and interventions that can minimize or prevent the spread of the disease in the population. In particular, we are interested in the effects of public-health education and of parental care. We consider existing models of public-health education in HIV/AIDS epidemiology, and provide some new insights on these. In this regard we focus attention on the papers [b] and [c], expanding that research by adding sensitivity analysis and optimal control problems with their solutions. Our main emphasis is on the effect of parental care on HIV/AIDS epidemiology. In this regard we introduce a new model. Firstly, we analyse the model without parental care and investigate its stability and sensitivity behaviour. We conduct both qualitative and quantitative analyses. It is observed that in the absence of infected youths, a disease-free equilibrium is achievable and is asymptotically stable. Further, we use optimal control methods to determine the necessary conditions for the optimality of intervention, and for disease eradication or control. Using Pontryagin's Maximum Principle to check the effects of screening control and parental care on the spread of HIV/AIDS, we observe that parental care is more effective than screening control. However, the most efficient control strategy is in fact a combination of parental care and screening control. The results form the central theme of this thesis, and are included in the manuscript [a] which is now being reviewed for publication. Finally, numerical simulations are performed to illustrate the analytical results.
APA, Harvard, Vancouver, ISO, and other styles
25

Rêgo, Thiago Luiz de Oliveira do. "Sobre o número máximo de retas em superfícies não singular de grau 4 em P3." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9302.

Full text
Abstract:
In 1943, Beniamino Segre believed he had shown that the maximum number of lines contained in a smooth quartic surface in P3 is 64 ([16]). Recently, however, there was a major overturn on that theme, when the mathematicians Rams and Schütt found that Segre had made a mistake in his work by overlooking the family Z of quartics ([14]), which essentially corresponds to the quartics containing a line that can be incident to more than 18 lines contained in the surface. In this work, based on [14], we show that every smooth quartic surface which does not belong to the family Z contains at most 64 lines. One of the most important tools used to show this result is the study of the fibrations induced by a line l contained in the surface, and the relationship between the Euler characteristics of the base (P1 in our case), of the fibers and of the surface concerned.
APA, Harvard, Vancouver, ISO, and other styles
26

Silva, Sally Andria Vieira da. "Sobre o número máximo de retas em superfícies de grau d em P3." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9272.

Full text
Abstract:
It is well known that planes and quadric surfaces in projective space contain infinitely many lines. For a smooth cubic surface, Cayley and Salmon (1847), and later Clebsch, proved that it contains exactly 27 lines. For degree 4, Segre proved in 1943 that the maximum number of lines contained in a smooth quartic surface is 64. For surfaces of degree greater than 4 this number is unknown. In this work we explore the maximum number of lines that a smooth complex surface of degree d in the family F_d may contain. In this way we obtain a lower bound on the maximum number of lines that non-singular surfaces of degree d in P3 may contain. We emphasize that the determination of these numbers is based on Klein's classification theorem for finite subgroups of Aut(P1) and on the study of the subgroups Γ_C of Aut(P1) whose elements leave invariant a finite subset C of P1.
APA, Harvard, Vancouver, ISO, and other styles
27

Sekhi, Ikram. "Développement d'un alphabet structural intégrant la flexibilité des structures protéiques." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC084/document.

Full text
Abstract:
The purpose of this PhD is to provide a Structural Alphabet (SA) for more accurate characterization of protein three-dimensional (3D) structures, as well as to integrate the increasing protein 3D structure information currently available in the Protein Data Bank (PDB). The SA also takes into consideration the logic behind the sequence of structural fragments by using a hidden Markov Model (HMM). In this PhD, we describe a new structural alphabet, improving the existing HMM-SA27 structural alphabet, called SAFlex (Structural Alphabet Flexibility), in order to take into account the uncertainty of the data (missing data in PDB files) and the redundancy of protein structures. The new SAFlex structural alphabet therefore offers a new, rigorous and robust encoding model. This encoding takes into account the encoding uncertainty by providing three encoding options: the maximum a posteriori (MAP), the marginal posterior distribution (POST), and the effective number of letters at each given position (NEFF).
SAFlex also builds a consensus encoding from different replicates (multiple chains, monomers and several homomers) of a single protein. It thus allows the detection of structural variability between different chains. The methodological advances and the construction of the SAFlex alphabet are the main contributions of this PhD. We also present the new PDB parser (SAFlex-PDB) and we demonstrate that our parser is of interest in both qualitative terms (detection of various errors) and quantitative terms (program optimization and parallelization) by comparing it with two other parsers well known in the area of bioinformatics (Biopython and BioJava). The SAFlex structural alphabet is being made available to the scientific community through a website. The SAFlex web server represents the concrete contribution of this PhD, while the SAFlex-PDB parser represents an important contribution to the proper functioning of the proposed website. Here, we describe the functions and the interfaces of the SAFlex web server. SAFlex can be used in various ways for a protein tertiary structure given in a PDB-format file: it can be used for encoding the 3D structure and for identifying and predicting missing data. Hence, it is, to date, the only alphabet able to encode and predict missing data in a 3D protein structure. Finally, these improvements are promising for exploring the increasing redundancy of protein data and for obtaining useful quantification of protein flexibility.
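The NEFF output mentioned above can be illustrated with a common convention: the effective number of letters at a position is the perplexity exp(H) of the posterior distribution over the 27 structural letters (SAFlex's exact definition may differ in detail). The letter names below are placeholders.

import math

def neff(posterior, eps=1e-12):
    # Effective number of letters at one position, computed as the perplexity
    # exp(H) of the posterior over structural letters (one standard convention).
    h = -sum(p * math.log(p) for p in posterior if p > eps)
    return math.exp(h)

def map_letter(posterior, alphabet):
    # Maximum a posteriori letter for the same position.
    return max(zip(posterior, alphabet))[1]

alphabet = ["L%d" % i for i in range(27)]      # placeholder names for the 27 letters
certain = [1.0] + [0.0] * 26                    # unambiguous position -> NEFF ~ 1
ambiguous = [0.5, 0.3, 0.2] + [0.0] * 24        # ambiguous position  -> NEFF ~ 2.8
print(map_letter(certain, alphabet), round(neff(certain), 2))
print(map_letter(ambiguous, alphabet), round(neff(ambiguous), 2))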
APA, Harvard, Vancouver, ISO, and other styles
28

RANJBAR, Fariba. "Bounds on the maximal number of corrupted nodes via Boolean Network Tomography." Doctoral thesis, 2021. http://hdl.handle.net/11573/1563383.

Full text
Abstract:
In this thesis we concentrate on identifying defective items in large sets, a central problem with many applications in real-life situations, e.g., fault diagnosis, medical screening and DNA screening. We consider the problem of localizing defective nodes in networks through an approach based on Boolean Network Tomography (BNT), which is grounded on inferring information from the Boolean outcomes of end-to-end measurement paths. In particular, we focus on the following three topics:
• studying maximal identifiability, a measure recently introduced in BNT for the maximal number of corrupted nodes that can be uniquely localized by a set of end-to-end measurement paths on a network;
• the central role of vertex-connectivity in maximal identifiability;
• investigating identifiability conditions on the set of paths which guarantee discovering or counting the defective nodes unambiguously, approaching this problem from both a theoretical and an applied perspective.
We prove tight upper and lower bounds on the maximal identifiability for sets of end-to-end paths in network topologies obtained from trees and d-(dimensional) grids over n^d nodes. For trees (both directed and undirected) we show that the maximal identifiability is 1. For undirected d-grids we prove that, using only 2d monitors, the maximal identifiability is at least d − 1 and at most d. In the directed case we prove that the maximal identifiability is d and can be reached at the cost of placing 2d(n − 1) + 2 monitors on the d-grid; this monitor placement is optimal, and adding more monitors will not increase the identifiability. We also study maximal identifiability for directed topologies under embeddings, establishing new relations with embeddability and graph dimension and proving that under the operation of transitive closure maximal identifiability grows linearly. Our results suggest the design of networks over n nodes reaching maximal identifiability Ω(log n) using O(log n) monitors, and a heuristic to boost maximal identifiability by increasing the minimal degree of the network, which we test experimentally. Moreover, we prove tight bounds on the maximal identifiability, first for a particular class of graphs, the Line of Sight networks, and then slightly weaker bounds for arbitrary networks. Furthermore, we initiate the study of maximal identifiability in random networks. We investigate two models: the classical Erdős–Rényi model and that of random regular graphs. The proposed framework allows a probabilistic analysis of identifiability in random networks, giving a tradeoff between the number of monitors to place and the maximal identifiability. Further in this thesis, we work on the precise tradeoff between the number of nodes and the number of paths such that at most k nodes can be identified unambiguously. The answer to this problem was known only for k = 1; we answer it for any k, settling a problem implicitly left open in previous works. We focus on upper and lower bounds on the number of unambiguously identifiable nodes, introducing new identifiability measures (separability and distinguishability) which strictly imply and are strictly implied by the notion of identifiability introduced in [39]. We use these new measures to design algorithmic heuristics to count failure nodes in a fine-grained way, and to prove the first complexity hardness results on the problem of identifying failure nodes in networks via BNT. Last but not least, we introduce a random model in order to obtain lower bounds on the number of unambiguously identifiable defective nodes, and we use this model to approximate that number on real networks via a maximum likelihood estimation approach.
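To make the central notion concrete, here is a small brute-force sketch (illustrative code, not from the thesis; the toy topology and the function names are assumptions). It treats a measurement path as the set of nodes it traverses, records for each hypothetical failure set the Boolean vector of path outcomes, and returns the largest k for which all failure sets of size at most k yield distinct outcome vectors -- the maximal identifiability of that set of paths.

    from itertools import combinations

    def path_syndrome(paths, failed):
        # A path fails (outcome 1) iff it contains at least one failed node.
        return tuple(int(bool(set(p) & failed)) for p in paths)

    def max_identifiability(nodes, paths, k_max):
        # Largest k such that any two distinct failure sets of size <= k
        # produce different Boolean outcomes on the measurement paths.
        for k in range(1, k_max + 1):
            candidates = [frozenset(c) for r in range(k + 1)
                          for c in combinations(nodes, r)]
            seen = {}
            for s in candidates:
                syn = path_syndrome(paths, s)
                if syn in seen and seen[syn] != s:
                    return k - 1          # two size-<=k sets are confounded
                seen[syn] = s
        return k_max

    # Toy example: a 3-node line monitored by two end-to-end paths.
    nodes = [1, 2, 3]
    paths = [[1, 2], [2, 3]]
    print(max_identifiability(nodes, paths, k_max=3))   # prints 1

On this toy instance single failures are always localizable, but the sets {2} and {1, 3} produce the same outcomes, so the maximal identifiability is 1.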
APA, Harvard, Vancouver, ISO, and other styles
29

Bayer, Johann. "Gravitational Lensing and the Maximum Number of Images." Thesis, 2008. http://hdl.handle.net/1807/17298.

Full text
Abstract:
Gravitational lensing, initially a phenomenon used as a solid confirmation of General Relativity, has defined itself in the past decade as a standard astrophysical tool. The ability of a lensing system to produce multiple images of a luminous source is one of the aspects of gravitational lensing that is exploited both theoretically and observationally to improve our understanding of the Universe. In this thesis, within the field of multiple imaging we explore the case of maximal lensing, that is, the configurations and conditions under which a set of deflecting masses can produce the maximum number of images of a distant luminous source, as well as a study of the value for this maximum number itself. We study the case of a symmetric distribution of n-1 point-mass lenses at the vertices of a regular polygon of n-1 sides. By the addition of a perturbation in the form of an n-th mass at the center of the polygon it is proven that, as long as the mass is small enough, the system is a maximal lensing configuration that produces 5(n-1) images. Using the explicit value for the upper bound on the central mass that leads to maximal lensing, we illustrate how this result can be used to find and constrain the mass of planets or brown dwarfs in multiple star systems. For the case of more realistic mass distributions, we prove that when a point-mass is replaced with a distributed lens that does not overlap with existing images or lensing objects, an additional image is formed within the distributed mass while positions and numbers of existing images are left unchanged. This is then used to conclude that the maximum number of images that n isolated distributed lenses can produce is 6(n-1)+1. In order to explore the likelihood of observational verification, we analyze the stability properties of the symmetric maximal lensing configurations. Finally, for the cases of n=4, 5, and 6 point-mass lenses, we study asymmetric maximal lensing configurations and compare their stability properties against the symmetric case.
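For orientation, the point-mass lensing studied here is usually written in complex notation; a standard textbook form of the lens equation for n point masses (background material, not a result of this thesis) is

    w = z - \sum_{i=1}^{n} \frac{m_i}{\overline{z} - \overline{z}_i},

where w is the source position, z a candidate image position, and m_i, z_i the normalized masses and positions of the deflectors. The images of a source at w are the solutions z of this equation, so the maximum-number-of-images question asks how many solutions the equation can admit; the 5(n-1) and 6(n-1)+1 counts in the abstract bound exactly this quantity for the configurations studied.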
APA, Harvard, Vancouver, ISO, and other styles
30

Hsu, Ming-Fong, and 許銘峰. "Distributed Detection Using Censoring Schemes with an Unknown Number of Nodes." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/a48pye.

Full text
Abstract:
碩士<br>國立中山大學<br>通訊工程研究所<br>96<br>The energy efficiency issue, which is subjected to an energy constraint, is important for the applications in wireless sensor network. For the distributed detection problem considered in this thesis, the sensor makes a local decision based on its observation and transmits a one-bit message to the fusion center. We consider the local sensors employing a censoring scheme, where the sensors are silent and transmit nothing to fusion center if their observations are not very informative. The goal of this thesis is to achieve an energy efficiency design when the distributed detection employs the censoring scheme. Simulation results show that we can have the same error probabilities of decision fusion while conserving more energy simultaneously as compared with the detection without using censoring schemes. In this thesis, we also demonstrate that the error probability of decision fusion is a convex function of the censoring probability.
APA, Harvard, Vancouver, ISO, and other styles
31

Yu, Guan-Ru, and 余冠儒. "The Number of 2-Protected Nodes in Tries and PATRICIA Tries." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/78192373595702127944.

Full text
Abstract:
碩士<br>國立交通大學<br>應用數學系所<br>102<br>Digital trees are data structures which are of fundamental importance in Computer Science. Recently, so-called 2-protected nodes have attracted a lot of attention. For instance, J. Gaither, Y. Homma, M. Sellke, and M. D. Ward derived an asymptotic expansion for the mean of the number of 2-protected nodes in random tries. Moreover, J. Gaither and M. D. Ward found an asymptotic expansion of the variance and conjectured a central limit theorem. In this thesis, our main goal is to re-derive (and correct) their results by using a systematic method due to M. Fuchs, H.-K. Hwang, and V. Zacharovas. The resulting expressions we obtain are quite different from the paper from J. Gaither, Y. Homma, M. Sellke, and M. D. Ward, but numerically they of course coincide. Moreover, we prove the conjectured central limit theorem from J. Gaither and M. D. Ward. In fact, we prove even a more general result, namely, a bivariate central limit theorem for the number of internal nodes and the number of 2-protected nodes in random tries. From this, not only the conjecture from J. Gaither and M. D. Ward follows but we also obtain a central limit theorem for PATRICIA tries. Finally, we also derive asymptotic expansions of mean and variance for PATRICIA tries.
APA, Harvard, Vancouver, ISO, and other styles
32

劉逸彰. "The maximum number of edges of uniform C-hypergraphs." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/53822492244639747636.

Full text
Abstract:
碩士<br>國立政治大學<br>應用數學研究所<br>97<br>A mixed hypergraph is a triple H = (X, C,D), where X is the vertex set, and each of C,D is a list of subsets of X. A strict k-coloring is a onto mapping from X to {1,2, . . . , k} such that each C ∈ C contains two vertices have a common value and each D ∈ D has two vertices have distinct values. Each of C,D may be empty. The maximum(minimum) number of colors over all strict k-colorings is called the upper(lower) chromatic number of H and is denoted by χ^¯(H)(χ(H)). If a hypergraph H has no multiple edges and all its edges are of size r, then H is called an r-uniform hypergraph. We want to find the maximum number of edges for r-uniform C-hypergraph of order n with the condition χ^¯(H) ≥ k, where k is fixed. We will solve this problem according to three different cases, r < k, r = k and r > k.
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Li-Wei, and 陳立偉. "On the Maximum Transport Capacity of Gaussian Multiple Access Channels With Mobile Nodes." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/02014702407793880656.

Full text
Abstract:
碩士<br>國立清華大學<br>通訊工程研究所<br>96<br>In this thesis, we consider the transport capacity of a Gaussian multiple access channel (MAC) in a mobile communications scenario in which all the nodes are allowed to be mobile. The transport capacity was first proposed by Gupta and Kumar as a figure of merit about how effectively a wireless network operates, and is given by the summation of the ratereward products (the products of the data rates and the associated rewards for the successful transmission of the data) over all transmitter-receiver pairs. Most existing works on the maximal transport capacity are for a fixed wireless communications network in which all the nodes are located at fixed positions. However, it has been shown by Grossglauser and Tse that if mobility could be exploited to increase the maximal transport capacity. Therefore, in this thesis we investigate the maximal transport capacity of Gaussian MACs in the scenario that all the nodes are allowed to be mobile, and study the optimal positions of the mobile nodes that achieve the largest possible maximum transport capacity.
APA, Harvard, Vancouver, ISO, and other styles
34

Hong, Wei-Ping, and 洪偉評. "Distributed Detection in Wireless Sensor Networks with an Unknown Number of Nodes." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/79792628852021143755.

Full text
Abstract:
碩士<br>國立暨南國際大學<br>通訊工程研究所<br>95<br>This work considers the problem of collaborative detection in sensor networks with an unknown number of operating nodes. In wireless sensor networks, both the energy resource and the bandwidth of communication channel are limited. This work employs the sensor censoring scheme to achieve energy-efficiency and low communication rate on the design of distributed decision fusion when the number of operating sensors is unknown to the fusion center. Very surprisingly, in this work, we showed that the energy conservation does not necessary result in the degradation of fusion performance in both theoretical analysis and numerical simulations. Indeed, in many cases, utilizing more energy or bandwidth actually degrades the fusion performance, and the design of energy-efficient local detection rule should start from a nonzero censoring rate, which gives the optimal fusion performance.
APA, Harvard, Vancouver, ISO, and other styles
35

CHEN, JIN-FA, and 陳金發. "The size and number of maximum minimal cutsets in N-Cube." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/41905410868201575498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Huang, Chien-Ho, and 黃謙和. "QoS Scheduling for Maximum Guaranteed Flow Number in OFDMA-Based System." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/67134255025674431411.

Full text
Abstract:
碩士<br>國立交通大學<br>電信工程研究所<br>101<br>In recently years, broadband wireless access, an attractive technology to support various applications in our daily life, has been developed rapidly. Those applications usually have QoS requirements, such that packet loss ratio and delay bound. Therefore, it is important to design a scheduling scheme that provides QoS and uses spectrum efficiently. In this thesis, we propose a two-stage scheduling scheme in OFDMA-based wireless system. Flows are divided into two sets and served with two resource allocation algorithm in two stages. The simulation results show that our proposed scheme can serve more flows than previous work, under the same QoS requirements.
APA, Harvard, Vancouver, ISO, and other styles
37

Du, Plooy Philippus Theunis. "Number of lymph nodes identified in resected specimens of colorectal cancer from a variety of South African Hospitals: a retrospective study." Thesis, 2011. http://hdl.handle.net/10539/10836.

Full text
Abstract:
Purpose: To examine the number of lymph nodes present in specimens submitted for histological examination from a variety of South African hospitals; to identify factors that influence nodal yield and node positivity; to determine whether oncological clearance, based on the number of nodes examined, is better in high-volume centres than in low-volume centres; and to establish guidelines on where surgery for colorectal cancer should ideally be performed. Patients and methods: Pathology reports of resected specimens of colorectal adenocarcinoma in the database of the National Health Laboratory Service Johannesburg laboratory from 2000 to 2005 were examined for patient demographics, referring hospital, tumour-specific features of T-stage, degree of differentiation, lymphovascular invasion and adenocarcinoma subtype (mucinous versus non-mucinous), number of lymph nodes identified, number of nodes positive, and whether preoperative radiotherapy was administered. Hospitals were grouped into four groups: Charlotte Maxeke Johannesburg Academic Hospital, Helen Joseph Hospital, private hospitals and non-academic public hospitals. Patients were grouped according to the number of lymph nodes retrieved into the following groups: not recorded, no nodes identified, 1-7 nodes, 8-12 nodes, 13-18 nodes, and greater than 18 nodes identified. Additionally, patients were subdivided into those with nodal metastasis and those without, and into colon and rectal cancer respectively. Multivariate analysis was performed via StatSoft, Inc. (2008) STATISTICA (data analysis software system), version 8.0, on the different lymph node groups versus the above-mentioned covariates. Results: Of the 365 patients identified, the mean number of lymph nodes examined per resected specimen was 8.9 (±6.2 SD), with significant differences noted between the different resection subtypes (p < 0.001). No statistically significant difference in the mean number of nodes identified could be seen between the various hospitals. Alarmingly, in the group of patients where no metastatic nodes could be identified, the recommendation of 12 or more nodes examined per specimen was upheld in only 29% of cases. Factors associated with positive lymph nodes in this study include T-stage, degree of differentiation and lymphovascular invasion by the tumour. No significant benefit in terms of finding metastatic nodes could be demonstrated by examining more than 18 nodes. Conclusions and recommendations: This study highlights a substandard nodal assessment in colorectal cancer specimens overall, including in the academic hospitals. More than 70% of node-negative patients in this series may have been understaged. Close liaison between the surgeon and the examining pathologist is recommended. In the presence of the identified high-risk factors for nodal involvement and a substandard nodal assessment, additional measures, i.e. fat clearance and immunohistochemistry, need to be employed. A prospective study assessing the quality of surgery is necessary, as is the creation of a central database to improve the overall quality of cancer care.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Ya-Chi, and 陳雅琪. "Embedding the Maximum Number of Congestion-Free Spanning Trees in Arrangement Graphs." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/20973566596347105767.

Full text
Abstract:
碩士<br>逢甲大學<br>資訊工程學系<br>89<br>It has been shown that the star graph is superior to the widely used hypercube, such as smaller degree, diameter, and average distance. However, a major practical difficulty of the star graph is its restriction on the number of nodes. The (n,k)-arrangement graph is a generalized version of star graphs that overcomes the restriction of the star graph on number nodes, and preserves many attractive properties of star graphs, such as regular structure, vertex symmetry, edge symmetry, hierarchical structure, and maximally fault-tolerance, etc. It is a good interconnection network. The broadcasting is an important issue in interconnection networks. To establish the basic infrastructure of network broadcasting is to find the spanning trees of the network. Once we can discover large number of congestion-free spanning trees in a network, these spanning trees can be utilized to balance network transmission for reducing congestion, to increase parallel processing capability for augmenting broadcasting performance, and to raise fault tolerance capability for maintaining network reliability. Besides, the lower the height of each spanning tree is, the less the delay of the network is. Therefore, obtaining more and congestion-free spanning trees with lower height in a network is crucial to network broadcasting. In this thesis, a methodology to embed all (k(n-k)) congestion-free spanning trees in an (n,k)-arrangement graph is proposed. Then, we show that the height of each spanning tree is optimal , for k>=2/n , and is less and equal to the optimal value plus one, for k<2/n .
APA, Harvard, Vancouver, ISO, and other styles
39

Lahiri, Abhiruk. "Problems on bend-number, circular separation dimension and maximum edge 2-colouring." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5491.

Full text
Abstract:
Representation of graphs as the intersection graphs of geometric objects has a long history. The objective is to find a collection of "simple" sets S such that a given graph G is its intersection graph. We are interested in two types of intersection representations motivated by the VLSI circuit layout problem. In these models, vertices are represented by polygonal paths with alternating horizontal and vertical segments. The complexity of a path is measured by the number of bends it has, and the objective is to minimise the maximum number of bends used by any path in a representation. This minimum number (over all possible representations) is called the bend number of the graph. In the first model, two vertices share an edge if and only if the corresponding paths intersect. A graph that can be represented in such a way is called a VPG graph. We study a subclass of the planar graphs in this model. In the second model, two vertices of the graph share an edge if and only if the corresponding paths overlap on a segment of non-zero length. A graph that can be represented in such a way is called an EPG graph. We study Halin graphs (a subclass of the planar graphs), fully subdivided graphs and minimally 2-connected graphs in this model. Using one of these results, we show that optimization problems such as maximum independent set and minimum dominating set are APX-hard on 1-bend EPG graphs. We devise a polynomial-time algorithm for the colouring and maximum independent set problems on two-sided boundary generated EPG graphs, which form a subclass of 1-bend EPG graphs. We also establish NP-hardness and inapproximability results on three-sided boundary generated EPG graphs and four-sided boundary generated EPG graphs. In the second part, we study the notion of circular separation dimension, which was introduced recently by Douglas West. Formally, a pair of non-adjacent edges is said to be separated in a circular ordering of vertices if the endpoints of the two edges do not alternate in the ordering. The circular separation dimension (CSD) of a graph G is the minimum number of circular orderings of the vertices of G such that every pair of non-adjacent edges is separated in at least one of the circular orderings. We establish a new upper bound for CSD in terms of the chromatic number of the graph. We further study this question for special graph classes such as series-parallel graphs and two-outerplanar graphs. In the final part, we study the maximum edge 2-colouring problem. For a graph G, the maximum edge 2-colouring problem seeks the maximum possible number of colours that can be used to colour the edges of the graph such that the edges incident on a vertex span at most two distinct colours. The problem is well studied in combinatorics, in the context of the anti-Ramsey number. Algorithmically, the problem is known to be NP-hard. It is also known that no polynomial-time algorithm can approximate it to a factor less than 3/2 assuming the unique games conjecture. The obvious, and the only known, algorithm issues different colours to the edges of a maximum matching and different colours to the remaining connected components. We establish an improved approximation bound of 8/5 for this algorithm for triangle-free graphs with a perfect matching.
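The matching-based algorithm referred to in the last paragraph is easy to state in code. The sketch below (mine, written with networkx; it implements the generic heuristic, not the thesis's improved analysis) gives every maximum-matching edge its own colour and then one further colour per connected component of the remaining graph, so each vertex is incident to at most two colours.

    import networkx as nx

    def matching_based_edge_2_colouring(G):
        # Colour each maximum-matching edge with a fresh colour, then give all
        # remaining edges of each connected component of G - M one extra colour.
        M = nx.max_weight_matching(G, maxcardinality=True)
        colour, next_colour = {}, 0
        for u, v in M:
            colour[frozenset((u, v))] = next_colour
            next_colour += 1
        H = G.copy()
        H.remove_edges_from(M)
        for comp in nx.connected_components(H):
            edges = list(H.subgraph(comp).edges())
            if edges:
                for u, v in edges:
                    colour[frozenset((u, v))] = next_colour
                next_colour += 1
        return colour, next_colour      # number of colours used

    G = nx.petersen_graph()
    colouring, k = matching_based_edge_2_colouring(G)
    print(k)

A vertex sees at most its own matching colour plus the single colour of the component of G - M containing its remaining edges, so the edge 2-colouring constraint is respected.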
APA, Harvard, Vancouver, ISO, and other styles
40

Chih-Yung Li and 李志勇. "Maximum Likelihood Estimator for the Number of True Null Hypotheses in Multiple Testing." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/04307563400292941031.

Full text
Abstract:
碩士<br>國立成功大學<br>統計學系<br>102<br>When we conduct multiple testing, the probability of committing type I error tends to become a problem to be solved. In typical multiple comparison procedures, the control of familywise error rate (FWER) method, which given each comparison test the same probability of type I error, is a common solution. Another solution to this problem is FDR-controlled procedure, which is proposed by Benjamini and Hochberg in 1995. Whatever FWER-controlled procedure or FDR-controlled procedure we choose, to estimate the number of true null hypotheses is the first thing needed to do. Maximum likelihood estimation has some good properties such as asymptotically unbiased and asymptotic normality. For the problem of estimating the number of true null hypotheses, approximate upper bound and maximum likelihood estimation (MLE) are presented in this thesis. The former is obtained by the number of test and error rates; MLE is different from the past methods in the literatures, which is estimated by maximizing the likelihood function. In addition, we compare with different methods by root mean square error in statistical simulation. Simulation results show that when the number of test is large, the proposed method has the smallest root mean squared error. That is, MLE estimate the number of true null hypotheses more accurately when the number of test is large.
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Xin Hong, and 林信宏. "Hardware-Efficient Implementation of Maximum-Period Pseudo Random Number Generators Using Programmable Barrel Shifters." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/m3r4k9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

LU, MING-SHIH, and 呂明蒔. "A Simulation Study of the Bound of the Distribution of the Number of Isolated Nodes in Random Networks." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/ee9mmx.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Dörnfelder, Martin [Verfasser]. "Penalty methods in discrete optimization : on the maximum number of threshold parameters / von Martin Dörnfelder." 2009. http://d-nb.info/1001408403/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Alfonse, Lauren Elizabeth. "Effects of template mass, complexity, and analysis method on the ability to correctly determine the number of contributors to DNA mixtures." Thesis, 2015. https://hdl.handle.net/2144/16179.

Full text
Abstract:
In traditional forensic DNA casework, the inclusion or exclusion of individuals who may have contributed to an item of evidence may be dependent upon the assumption on the number of individuals from which the evidence arose. Typically, the determination of the minimum number of contributors (NOC) to a mixture is achieved by counting the number of alleles observed above a given analytical threshold (AT); this technique is known as maximum allele count (MAC). However, advances in polymerase chain reaction (PCR) chemistries and improvements in analytical sensitivities have led to an increase in the detection of complex, low template DNA (LtDNA) mixtures for which MAC is an inadequate means of determining the actual NOC. Despite the addition of highly polymorphic loci to multiplexed PCR kits and the advent of interpretation software which deconvolves DNA mixtures, a gap remains in the DNA analysis pipeline, where an effective method of determining the NOC needs to be established. The emergence of NOCIt, a computational tool which provides the probability distribution on the NOC, may serve as a promising alternative to traditional, threshold-based methods. Utilizing user-provided calibration data consisting of single source samples of known genotype, NOCIt calculates the a posteriori probability (APP) that an evidentiary sample arose from 0 to 5 contributors. The software models baseline noise, reverse and forward stutter proportions, stutter and allele dropout rates, and allele heights. This information is then utilized to determine whether the evidentiary profile originated from one or many contributors. In short, NOCIt provides information not only on the likely NOC, but whether more than one value may be deemed probable. In the latter case, it may be necessary to modify downstream interpretation steps such that multiple values for the NOC are considered or the conclusion that most favors the defense is adopted. Phase I of this study focused on establishing the minimum number of single source samples needed to calibrate NOCIt. Once determined, the performance of NOCIt was evaluated and compared to that of two other methods: the maximum likelihood estimator (MLE), accessed via the forensim R package, and MAC. Fifty (50) single source samples proved to be sufficient to calibrate NOCIt, and results indicate NOCIt was the most accurate method of the three. Phase II of this study explored the effects of template mass and sample complexity on the accuracy of NOCIt. Data showed that the accuracy decreased as the NOC increased: for 1- and 5-contributor samples, the accuracy was 100% and 20%, respectively. The minimum template mass from any one contributor required to consistently estimate the true NOC was 0.07 ng -- the equivalent of approximately 10 cells' worth of DNA. Phase III further explored NOCIt and was designed to assess its robustness. Because the efficacy of determining the NOC may be affected by the PCR kit utilized, the results obtained from NOCIt analysis of 1-, 2-, 3-, 4-, and 5-contributor mixtures amplified with AmpFlSTR® Identifiler® Plus and PowerPlex® 16 HS were compared. A positive correlation was observed for all NOCIt outputs between kits. Additionally, NOCIt was found to result in increased accuracies when analyzed with 1-, 3-, and 4-contributor samples amplified with Identifiler® Plus and with 5-contributor samples amplified with PowerPlex® 16 HS.
The accuracy rates obtained for 2-contributor samples were equivalent between kits; therefore, the effect of amplification kit type on the ability to determine the NOC was not substantive. Cumulatively, the data indicate that NOCIt is an improvement to traditional methods of determining the NOC and results in high accuracy rates with samples containing sufficient quantities of DNA. Further, the results of investigations into the effect of template mass on the ability to determine the NOC may serve as a caution that forensic DNA samples containing low-target quantities may need to be interpreted using multiple or different assumptions on the number of contributors, as the assumption on the number of contributors is known to affect the conclusion in certain casework scenarios. As a significant degree of inaccuracy was observed for all methods of determining the NOC at severe low template amounts, the data presented also challenge the notion that any DNA sample can be utilized for comparison purposes. This suggests that the ability to detect extremely complex, LtDNA mixtures may not be commensurate with the ability to accurately interpret such mixtures, despite critical advances in software-based analysis. In addition to the availability of advanced comparison algorithms, limitations on the interpretability of complex, LtDNA mixtures may also be dependent on the amount of biological material present on an evidentiary substrate.
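For context, the MAC rule mentioned at the start of this abstract takes only a few lines. The sketch below is illustrative only: the locus names, peak heights and the 50 RFU analytical threshold are made-up values. It counts the alleles above the threshold at each locus and reports the ceiling of the largest count divided by two, since one contributor can show at most two alleles per locus.

    def min_contributors_by_mac(profile, analytical_threshold=50):
        # Maximum allele count (MAC): keep alleles whose peak height (RFU)
        # clears the analytical threshold, then take ceil(max count / 2).
        max_alleles = 0
        for locus, peaks in profile.items():
            observed = [a for a, height in peaks.items()
                        if height >= analytical_threshold]
            max_alleles = max(max_alleles, len(observed))
        return -(-max_alleles // 2)     # ceiling division

    # Hypothetical two-locus profile: allele -> peak height (RFU).
    profile = {
        "D8S1179": {"12": 310, "13": 280, "14": 95, "15": 60},
        "TH01":    {"6": 420, "9.3": 390, "7": 40},
    }
    print(min_contributors_by_mac(profile))   # 4 alleles at D8S1179 -> at least 2

The limitations discussed above arise precisely because allele sharing and dropout make this count a weak lower bound for low template mixtures.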
APA, Harvard, Vancouver, ISO, and other styles
45

Hashim, Talha. "Improved approximation bounds on maximum edge q coloring of dense graphs." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6089.

Full text
Abstract:
The anti-Ramsey number ar(G,H), with input graph G and pattern graph H, is the maximum positive integer k such that there exists an edge coloring of G using k colors in which there is no rainbow subgraph isomorphic to H in G. (H is rainbow if all its edges get distinct colors.) The concept of the anti-Ramsey number was introduced by Erdős, Simonovits, and Sós in 1973. Thereafter several researchers investigated this concept in the combinatorial setting. The cases where the pattern graph H is a complete graph K_r, a path P_r or a star K_{1,r}, for a fixed positive integer r, are well studied. Recently, Feng et al. revisited the anti-Ramsey problem for the pattern graph K_{1,t} (for t ≥ 3) purely from an algorithmic point of view, due to its applications in interference modeling of wireless networks. They posed it as an optimization problem, the maximum edge q-coloring problem. For a graph G and an integer q ≥ 2, an edge q-coloring of G is an assignment of colors to the edges of G such that the edges incident on a vertex span at most q distinct colors. The maximum edge q-coloring problem seeks to maximize the number of colors in an edge q-coloring of the graph G. Note that the optimum value of the edge q-coloring problem of G equals ar(G,K_{1,q+1}). We study ar(G,K_{1,t}), the anti-Ramsey number of stars, for each fixed integer t ≥ 3, from both a combinatorial and an algorithmic point of view. The first of our main results presents an upper bound for ar(G,K_{1,q+1}) in terms of the number of vertices and the minimum degree of G. The second one improves this result for the case of triangle-free input graphs. For a positive integer t, let H_t denote a subgraph of G with the maximum possible number of edges and maximum degree t. From an observation of Erdős, Simonovits, and Sós, we get: |E(H_{q-1})| + 1 ≤ ar(G,K_{1,q+1}) ≤ |E(H_{q})|. For instance, when q = 2, the subgraph H_{q-1} is a maximum matching. It appears that |E(H_{q-1})| is the most natural parameter associated with the anti-Ramsey number ar(G,K_{1,q+1}), and approximation algorithms for the maximum edge coloring problem usually proceed by first computing H_{q-1}, then coloring all its edges with different colors, and giving one (sometimes more than one) extra color to the remaining edges. The approximation guarantees of these algorithms usually depend on upper bounds for ar(G,K_{1,q+1}) in terms of |E(H_{q-1})|. Our third main result presents an upper bound for ar(G,K_{1,q+1}) in terms of |E(H_{q-1})|. All our results have algorithmic consequences. For some large special classes of graphs, such as d-regular graphs with d ≥ 4, our results can be used to prove a better approximation guarantee for the sub-factor based algorithm. We also show that all our bounds are almost tight. Results for the case q = 2 were obtained earlier by Chandran et al.; in this thesis, we extend them further for each fixed integer q greater than 2.
APA, Harvard, Vancouver, ISO, and other styles
46

Pellerin, Brian. "Modelling Biennial Bearing in Apple Trees." 2011. http://hdl.handle.net/10222/14275.

Full text
Abstract:
Many commercially grown apple cultivars have a biennial cropping habit, producing many small fruit in one year and few or none in the following year. The production of fruits is known to inhibit flower initiation for the following year. This undesirable trait is frequently managed by removing (thinning) some flowers or young fruit in years of heavy flowering which improves the size of remaining fruits, but does not reliably improve flowering in the following year. The effect of thinning on flower initiation is not well understood. Two mathematical models are developed describing the relationship between flowering in one year and the next. The first models the effects of thinning on return bloom and attempts to define maximum repeatable flower number. The second models how proximity of growing points may impact biennial bearing and maximum annual flower number. This second model may be useful to advance research into biennial bearing in apple.
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Z., Yakun Guo, J. Zeng, J. Zheng, and X. Wu. "Numerical simulation of vertical buoyant wall jet discharged into a linearly stratified environment." 2018. http://hdl.handle.net/10454/15580.

Full text
Abstract:
Yes<br>Results are presented from a numerical simulation to investigate the vertical buoyant wall jet discharged into a linearly stratified environment. A tracer transport model considering density variation is implemented. The standard k-ε model with the buoyancy effect is used to simulate the evolution of the buoyant jet in a stratified environment. Results show that the maximum jet velocity trend along vertical direction has two regions: acceleration region and deceleration region. In the deceleration region, jet velocity is reduced by the mixing taking place between jet fluid and ambient lighter fluid. Jet velocity is further decelerated by the upwards buoyant force when ambient fluid density is larger than jet fluid density. The normalized peak value of the cross sectional maximum jet velocity decreases with λ (the ratio between the characteristic momentum length and the buoyancy length). When λ<1, the dimensionless maximum penetration distance (normalized by the characteristic buoyancy length) does not vary much and has a value between 4.0 and 5.0, while it increases with increasing λ for λ≥1. General good agreements between the simulations and measurements are obtained, indicating that the model can be successfully applied to investigate the mixing of buoyant jet with ambient linearly stratified fluid.<br>Engineering and Physical Sciences Research Council (EPSRC: EP/G066264/1), National Natural Science Foundation of China (51609214,41376099,51609213), National Natural Science Foundation for Distinguished Young Scholars of China (Grant No.51425901),Public Project of Zhejiang Province (2016C33095)
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Yan-Wen, and 王彥雯. "Maximum Number of Live Births per Donor in Artificial Insemination Based on Incidence Rate and Coefficient of Inbreeding." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/26902211522716246708.

Full text
Abstract:
碩士<br>國立臺灣大學<br>流行病學研究所<br>94<br>Using anonymous donors’ gametes, a treatment in assisted reproductive technology (ART), may result in inadvertent consanguineous mating, especially when a single donor’s gametes are multiply used. Unwitting consanguinities between offspring of one single donor and his or her unknown relatives may lead to higher risks of transmission of hereditary diseases. However, when supply and demand of artificial insemination by donor (AID) are unbalanced, multiple use of one single donor’s gametes becomes a solution. In that case, its influence on medical and genetic aspects should be carefully evaluated, as well as its social, cultural and legal implications. The choice of maximum number of offspring per donor varies greatly in different countries. For example, the limit in France is 5, 10 in UK, 6 in Spain, 1 in Taiwan, and 25 in The Netherlands. Most of these limits, however, do not result from specific scientific studies. In this study, two approaches are discussed for setting the maximum number of live births per donor. First, I incorporate the risk of certain hereditary disease in the computation of number of consanguinities to evaluate the possible elevation of incidence and prevalence. For any given disease of interest, I construct its incidence due to donor insemination (DI) from probabilistic perspective. Based on information from disease characteristics, population data, donor statistics, and tolerable increased incidence or medical costs, one can decide the figure for the maximal limit. The results show that there will be more new cases due to AID when the prevalence is high. In addition, when using the ratio of incidence to prevalence as the criterion, the risk owing to DI of autosomal recessive inheritance disease is higher than that of disease of other mode of inheritance. The second approach, following the same idea from de Boer et al. (1995) and Curie-Cohen (1980), adopts the population perspective to derive coefficient of inbreeding including DI. The modification includes adding the current constant coefficient of inbreeding in a given society with no AID children, and an extra coefficient of inbreeding due to AID with respect to the number of live births per donor. Then, maximum number of live births per donor will be decided by setting a threshold for the tolerable coefficient of inbreeding. The results indicate that the larger coefficient of inbreeding in a population without AID is, the smaller number of live births per donor is and the more significantly assortive mating for phenotype is, the smaller number of live births per donor is.
APA, Harvard, Vancouver, ISO, and other styles
49

Ogunronbi, Oluseun Ifeanyi. "Maximum heat transfer rate density from a rotating multiscale array of cylinders." Diss., 2011. http://hdl.handle.net/2263/26208.

Full text
Abstract:
This work investigated numerically the search for the maximum heat transfer rate density (the overall heat transfer dissipated per unit volume) from a two-dimensional laminar multiscale array of cylinders in cross-flow under an applied fixed pressure drop and subject to the constraint of fixed volume. It was furthermore assumed that the flow field was steady and incompressible. The configuration had two degrees of freedom in the stationary state, namely the spacing between the cylinders and the diameter of the smaller cylinders. The angular velocity of the cylinders was in the range 0 ≤ ϖ ≤ 0.1. Two cylinders of different diameters were used: in the first case, the cylinders were aligned along a plane through their centrelines; in the second case, the cylinders' leading edges were aligned along the plane that received the incoming fluid at the same time. The diameter of the smaller cylinder was fixed at the optimal diameter obtained when the cylinders were stationary. Tests were conducted for co-rotating and counter-rotating cylinders. The results were also compared with results obtained in the open literature, and the trend was found to be the same. The results showed that the heat transfer from a rotating array of cylinders was enhanced in certain cases, and this was observed for both directions of rotation for an array aligned on the centreline. For rotating cylinders with the same leading edge, there is heat transfer suppression, and hence the effect of rotation on the maximum heat transfer rate density is insignificant. This research is important for the further understanding of heat transfer from rotating cylinders, with applications ranging from contact cylinder dryers in the chemical process industry and rotating cylinder electrodes to roller hearth furnaces. Dissertation (MEng), University of Pretoria, 2011. Mechanical and Aeronautical Engineering.
APA, Harvard, Vancouver, ISO, and other styles
50

Lê, Ngoc C. "Algorithms for the Maximum Independent Set Problem." Doctoral thesis, 2014. https://tubaf.qucosa.de/id/qucosa%3A22990.

Full text
Abstract:
This thesis focuses mainly on the Maximum Independent Set (MIS) problem. Some related graph-theoretical combinatorial problems are also considered. As these problems are in general NP-hard, we study their complexity in hereditary graph classes, i.e. graph classes defined by a set F of forbidden induced subgraphs. We review the literature on the issue, for example complexity results, applications, and techniques for tackling the problem. By considering some general approaches, we exhibit several cases where the problem admits a polynomial-time solution. More specifically, we present polynomial-time algorithms for the MIS problem in:
+ some subclasses of $S_{2,j,k}$-free graphs (thus generalizing the classical result for $S_{1,2,k}$-free graphs);
+ some subclasses of $tree_{k}$-free graphs (thus generalizing the classical results for subclasses of $P_5$-free graphs);
+ some subclasses of $P_{7}$-free graphs and $S_{2,2,2}$-free graphs; and
+ various subclasses of graphs of bounded maximum degree, for example subcubic graphs.
Our algorithms are based on various approaches. In particular, we characterize augmenting graphs in a subclass of $S_{2,k,k}$-free graphs and a subclass of $S_{2,2,5}$-free graphs. These characterizations are partly based on extensions of the concept of a redundant set [125]. We also propose methods for finding augmenting chains, an extension of the method in [99], and for finding augmenting trees, an extension of the methods in [125]. We apply the augmenting vertex technique, originally used for $P_{5}$-free graphs or banner-free graphs, to some more general graph classes. We consider a general graph-theoretical combinatorial problem, the so-called Maximum Π-Set problem. Two special cases of this problem, the so-called Maximum F-(Strongly) Independent Subgraph and Maximum F-Induced Subgraph problems, where F is a connected graph set, are considered. The complexity of the Maximum F-(Strongly) Independent Subgraph problem is reviewed and the NP-hardness of the Maximum F-Induced Subgraph problem is proved. We also extend the augmenting approach to apply it to the general Maximum Π-Set problem. We review classical graph transformations and give two unified views based on pseudo-boolean functions and α-redundant vertices. We also make extensive use of α-redundant vertices, originally used mainly for $P_{5}$-free graphs, to give polynomial solutions for some subclasses of $S_{2,2,2}$-free graphs and $tree_{k}$-free graphs. We consider some classical sequential greedy heuristic methods. We also combine classical algorithms with α-redundant vertices to obtain new strategies for choosing the next vertex in greedy methods. Some aspects of the algorithms, for example forbidden induced subgraph sets and worst-case results, are also considered. Finally, we restrict our attention to graphs of bounded maximum degree and subcubic graphs. Then, by using some techniques, for example α-redundant vertices, clique separators, and arguments based on distance, we generalize these results to some subclasses of $S_{i,j,k}$-free subcubic graphs.
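The augmenting-set idea underlying several of the algorithms mentioned above can be illustrated with a small brute-force search (a sketch of the general principle only, not of the thesis's polynomial-time characterizations; the function names are mine): an independent set S grows whenever there is an independent set A outside S with more vertices than it has neighbours inside S.

    import itertools
    import networkx as nx

    def augment_once(G, S, max_size=3):
        # Look for an augmenting set: an independent set A outside S with
        # |A| > |N(A) ∩ S|; swapping N(A) ∩ S for A then enlarges S.
        # Restricting |A| keeps the search polynomial; the thesis studies
        # graph classes where such restricted searches are guaranteed to work.
        S = set(S)
        outside = [v for v in G if v not in S]
        for r in range(1, max_size + 1):
            for A in itertools.combinations(outside, r):
                if any(G.has_edge(u, v) for u, v in itertools.combinations(A, 2)):
                    continue                      # A must be independent
                blocked = {s for a in A for s in G[a] if s in S}
                if len(A) > len(blocked):
                    return (S - blocked) | set(A)
        return None

    G = nx.cycle_graph(7)
    S = {0, 3}                                    # a non-maximum independent set
    while True:
        bigger = augment_once(G, S)
        if bigger is None:
            break
        S = bigger
    print(S)                                      # an independent set of size 3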
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography