
Dissertations / Theses on the topic 'Greedy Algorithms'


Consult the top 50 dissertations / theses for your research on the topic 'Greedy Algorithms.'


1

Sundman, Dennis. "Greedy Algorithms for Distributed Compressed Sensing." Doctoral thesis, KTH, Kommunikationsteori, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-144907.

Abstract:
Compressed sensing (CS) is a recently invented sub-sampling technique that utilizes sparsity in full signals. Most natural signals possess this sparsity property. From a sub-sampled vector, a CS reconstruction algorithm is used to recover the full signal. One class of reconstruction algorithms is formed by the greedy pursuit, or simply greedy, algorithms, which are popular due to their low complexity and good performance. Meanwhile, in sensor networks, sensor nodes monitor natural data for estimation or detection. One application of sensor networking is in cognitive radio networks, where sensor nodes want to estimate a power spectral density. The data measured by different sensors in such networks are typically correlated. Another type is the multiple-processor network of computational nodes that cooperate to solve problems too difficult for the nodes to solve individually. In this thesis, we mainly consider greedy algorithms for distributed CS. To this end, we begin with a review of current knowledge in the field. Here, we also introduce signal models to capture correlation and network models for simulating networks. We proceed by considering two applications: power spectral density estimation and distributed reconstruction algorithms for multiple-processor networks. Then, we delve deeper into the greedy algorithms with the objective of improving reconstruction performance; this naturally comes at the expense of increased computational complexity. The main objective of the thesis is to design greedy algorithms for distributed CS that exploit data correlation in sensor networks to improve performance. We develop several such algorithms, where a key element is the use of intuitive democratic voting principles. Finally, we show the merit of such voting principles by probabilistic analysis based on a new input/output system model of greedy algorithms in CS. By comparing the new single-sensor algorithms to well-known greedy pursuit algorithms already present in the literature, we see that the goal of improved performance is achieved. We compare complexity using big-O analysis, where the increased complexity is characterized. Using simulations we verify the performance and confirm the complexity claims. The complexity of distributed algorithms is typically harder to analyze since it depends on the specific problem and network topology. However, when analysis is not possible, we provide extensive simulation results. No distributed algorithms based on the signal models used in this thesis were previously available in the literature. Therefore, we compare our algorithms to standard single-sensor algorithms, and our results can then easily be used as benchmarks for future research. Compared to the stand-alone case, the new distributed algorithms provide significant performance gains. Throughout the thesis, we strive to present the work in a smooth flow of algorithm design, simulation results and analysis.
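For illustration, here is a minimal greedy-pursuit baseline, orthogonal matching pursuit (OMP), sketched in Python/NumPy. It shows the generic greedy loop that this family of algorithms shares; it is a textbook single-sensor baseline, not the thesis's distributed voting algorithms, which coordinate this kind of selection across sensors.

    import numpy as np

    def omp(A, y, k):
        """Greedy orthogonal matching pursuit: recover a k-sparse x from y = A x."""
        m, n = A.shape
        support = []
        residual = y.copy()
        for _ in range(k):
            # Greedy step: pick the column most correlated with the residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # Re-estimate the signal on the current support by least squares.
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(n)
        x[support] = x_s
        return x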
2

Beis, Michail. "Greedy algorithms for random regular graphs." Thesis, University of Liverpool, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427021.

3

Determe, Jean-François. "Greedy algorithms for multi-channel sparse recovery." Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/265808.

Abstract:
During the last decade, research has shown compressive sensing (CS) to be a promising theoretical framework for reconstructing high-dimensional sparse signals. Leveraging a sparsity hypothesis, algorithms based on CS reconstruct signals on the basis of a limited set of (often random) measurements. Such algorithms require fewer measurements than conventional techniques to fully reconstruct a sparse signal, thereby saving time and hardware resources. This thesis addresses several challenges. The first is to theoretically understand how some parameters—such as noise variance—affect the performance of simultaneous orthogonal matching pursuit (SOMP), a greedy support recovery algorithm tailored to multiple measurement vector signal models. Chapters 4 and 5 detail novel improvements in understanding the performance of SOMP. Chapter 4 presents analyses of SOMP for noiseless measurements; using those analyses, Chapter 5 extensively studies the performance of SOMP in the noisy case. A second challenge consists in optimally weighting the impact of each measurement vector on the decisions of SOMP. If measurement vectors feature unequal signal-to-noise ratios, properly weighting their impact improves the performance of SOMP. Chapter 6 introduces a novel weighting strategy from which SOMP benefits. The chapter describes the novel weighting strategy, derives theoretically optimal weights for it, and presents both theoretical and numerical evidence that the strategy improves the performance of SOMP. Finally, Chapter 7 deals with the tendency for support recovery algorithms to pick support indices solely for mapping a particular noise realization. To ensure that such algorithms pick all the correct support indices, researchers often make the algorithms pick more support indices than the number strictly required. Chapter 7 presents a support reduction technique, that is, a technique removing from a support the supernumerary indices solely mapping noise. The advantage of the technique, which relies on cross-validation, is that it is universal, in that it makes no assumption regarding the support recovery algorithm generating the support. Theoretical results demonstrate that the technique is reliable. Furthermore, numerical evidence proves that the proposed technique performs similarly to orthogonal matching pursuit with cross-validation (OMP-CV), a state-of-the-art algorithm for support reduction.
Doctorate in Engineering Sciences and Technology
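As a companion to the abstract above, here is a minimal simultaneous OMP (SOMP) sketch in Python/NumPy, assuming one measurement matrix A shared by all measurement vectors (the columns of Y); the noise-aware weighting and stopping refinements studied in the thesis are omitted.

    import numpy as np

    def somp(A, Y, k):
        """Simultaneous OMP: one shared support for all columns (channels) of Y."""
        m, n = A.shape
        support = []
        R = Y.copy()                      # residual, one column per channel
        for _ in range(k):
            # Score each atom by its total correlation across all channels.
            scores = np.sum(np.abs(A.T @ R), axis=1)
            scores[support] = -np.inf     # never pick the same index twice
            support.append(int(np.argmax(scores)))
            X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
            R = Y - A[:, support] @ X_s
        return sorted(support)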
4

Sun, Qing. "Greedy Inference Algorithms for Structured and Neural Models." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/81860.

Abstract:
A number of problems in Computer Vision, Natural Language Processing, and Machine Learning produce structured outputs in high-dimensional space, which makes searching for the globally optimal solution extremely expensive. Thus, greedy algorithms, making trade-offs between precision and efficiency, are widely used. In this thesis, we prove that greedy algorithms are effective and efficient for searching for multiple top-scoring hypotheses from structured (neural) models: 1) Entropy estimation. We aim to find deterministic samples that are representative of a Gibbs distribution via a greedy strategy. 2) Searching for a set of diverse and high-quality bounding boxes. We formulate this problem as the constrained maximization of a monotonic submodular function such that there exists a greedy algorithm with a near-optimality guarantee. 3) Fill-in-the-blank. The goal is to generate missing words conditioned on context given an image. We extend beam search, a greedy algorithm applicable to unidirectional expansion, to bidirectional neural models where both past and future information have to be considered. We test our proposed approaches on a series of Computer Vision and Natural Language Processing benchmarks and show that they are effective and efficient.
Ph. D.
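To illustrate the greedy decoding referred to above, a generic beam search sketch follows; `expand` and `score` are placeholder callables standing in for a task-specific (e.g., neural) model.

    import heapq

    def beam_search(start, expand, score, beam_width, max_steps):
        """Generic beam search: keep only the best `beam_width` partial hypotheses.

        `expand(h)` yields successor hypotheses of h; `score(h)` returns a
        number to maximize; both stand in for a task-specific model.
        """
        beam = [start]
        for _ in range(max_steps):
            candidates = [succ for h in beam for succ in expand(h)]
            if not candidates:
                break
            beam = heapq.nlargest(beam_width, candidates, key=score)
        return max(beam, key=score)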
5

Puricella, Antonio. "The complexity of greedy algorithms on ordered graphs." Thesis, University of Leicester, 2002. http://hdl.handle.net/2381/30518.

Abstract:
Let p be any fixed polynomial-time testable, non-trivial, hereditary property of graphs. Suppose that the vertices of a graph G are not necessarily linearly ordered but partially ordered, where we think of this partial order as a collection of (possibly exponentially many) linear orders in the natural way. In the first part of this thesis, we prove that the problem of deciding whether a lexicographically first maximal (with respect to one of these linear orders) subgraph of G satisfying p contains a specified vertex is NP-complete. For some of these properties p we then show that under certain restrictions the problem still remains NP-complete, and we show how the problem can be solved in deterministic polynomial time if the restrictions imposed become more severe. Let H be a fixed undirected graph. An H-colouring of an undirected graph G is a homomorphism from G to H. In the second part of the thesis, we show that, if the vertices of G are partially ordered, then the problem of deciding whether a given vertex of G is in a lexicographically first maximal H-colourable subgraph of G is NP-complete, if H is bipartite, and Σ^p_2-complete, if H is non-bipartite. We then show that if the vertices of G are linearly, as opposed to partially, ordered then the problem of deciding whether a given vertex of G is in the lexicographically first maximal H-colourable subgraph of G is P-complete, if H is bipartite, and Δ^p_2-complete, if H is non-bipartite. In the final part of the thesis we show that the results obtained can be paralleled in the setting of graphs where orders are given by the degrees of the vertices.
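As a concrete instance of the greedy construction studied here, the following sketch computes the lexicographically first maximal subgraph for one hereditary, polynomial-time-testable property, namely independence; the adjacency encoding and the example instance are illustrative.

    def lex_first_maximal_independent_set(adj, order):
        """Greedily build the lexicographically first maximal independent set.

        `adj` maps each vertex to its set of neighbours; `order` is the
        linear order of the vertices. Independence is one example of a
        hereditary, polynomial-time-testable property p.
        """
        chosen = set()
        for v in order:
            if not (adj[v] & chosen):   # the property still holds if we add v
                chosen.add(v)
        return chosen

    # Example: a path 0-1-2 with the natural order picks {0, 2}.
    print(lex_first_maximal_independent_set({0: {1}, 1: {0, 2}, 2: {1}}, [0, 1, 2]))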
6

Dutta, Himanshu Shekhar. "Survey of Approximation Algorithms for Set Cover Problem." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12118/.

Abstract:
In this thesis, I survey 11 approximation algorithms for the unweighted set cover problem. I have also implemented three of the algorithms and created a software library that stores the code I have written. The algorithms I survey are: 1. Johnson's standard greedy; 2. f-frequency greedy; 3. Goldsmidt, Hochbaum and Yu's modified greedy; 4. Halldorsson's local optimization; 5. Dur and Furer's semi-local optimization; 6. Asaf Levin's improvement to Dur and Furer; 7. Simple rounding; 8. Randomized rounding; 9. LP duality; 10. Primal-dual schema; and 11. Network flow technique. Most of the algorithms surveyed are refinements of the standard greedy algorithm.
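The first algorithm in the list, Johnson's standard greedy, is simple enough to state in a few lines; a sketch (with a made-up example instance) follows.

    def greedy_set_cover(universe, subsets):
        """Johnson's standard greedy: repeatedly take the set covering the
        most still-uncovered elements; ln(n)-approximate for unweighted set
        cover."""
        uncovered = set(universe)
        cover = []
        while uncovered:
            best = max(subsets, key=lambda s: len(s & uncovered))
            if not (best & uncovered):
                raise ValueError("universe cannot be covered")
            cover.append(best)
            uncovered -= best
        return cover

    # Example: covers {1..5} with two sets.
    print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {4, 5}, {3}]))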
7

Oglic, Dino. "Constructive Approximation and Learning by Greedy Algorithms." Bonn: Universitäts- und Landesbibliothek Bonn, 2018. http://d-nb.info/1170777910/34.

8

Yuen, Chi-kan. "A double-track greedy algorithm for VLSI channel routing /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19656373.

9

Razanajatovo, Misanantenaina Valisoa. "Properties of greedy trees." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95909.

Abstract:
Thesis (MSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: A greedy tree is constructed from a given degree sequence using a simple greedy algorithm that assigns the highest degree to the root and the second-, third-, ... -highest degrees to the root's neighbours, and so on. This particular tree is the solution to numerous extremal problems among all trees with a given degree sequence. In this thesis, we collect results for some distance-based graph invariants, the number of subtrees and the spectral radius, in which greedy trees play a major role. We show that greedy trees are extremal for the aforementioned graph invariants by means of two different approaches, one using level greedy trees and majorization, while the other one is somewhat more direct. Finally, we prove some new results on greedy trees for additive parameters with specific toll functions.
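A minimal sketch of the greedy construction described in the abstract, assuming the input degree sequence is realizable as a tree (sum of degrees equal to 2(n-1)):

    from collections import deque

    def greedy_tree(degree_sequence):
        """Build the greedy tree of a degree sequence: the root gets the
        highest degree, its children the next-highest degrees, and so on
        level by level. Returns the tree as a parent list."""
        degs = sorted(degree_sequence, reverse=True)
        parent = [None] * len(degs)
        queue = deque([0])          # vertex 0 takes the highest degree
        nxt = 1
        while queue:
            v = queue.popleft()
            # the root attaches degs[v] children; every other vertex degs[v]-1
            slots = degs[v] if v == 0 else degs[v] - 1
            for _ in range(slots):
                if nxt >= len(degs):
                    return parent
                parent[nxt] = v
                queue.append(nxt)
                nxt += 1
        return parent

    # Degree sequence (3, 3, 2, 2, 1, 1, 1, 1): root 0 gets three children.
    print(greedy_tree([3, 3, 2, 2, 1, 1, 1, 1]))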
10

Ortiz, John E. "Absolute position measurement for automated guided vehicles using the Greedy DeBruijn Sequence." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FOrtiz.pdf.

Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Harold M. Fredricksen, Jon T. Butler. Includes bibliographical references (p. 149). Also available in print.
11

Almulla, Mohammed Ali. "A class of greedy algorithms for solving the travelling salesman problem /." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59557.

Abstract:
The travelling salesman problem is one of the NP-complete problems. It has been under consideration in computer science for at least forty years. Solving this hard problem using search methods can be accomplished by choosing: a starting point, a solution generation scheme and a termination rule. When the termination rule is such that search stops if and only if the tour is optimal, we call the method "exact". When the termination rule is such that the search stops but not necessarily with an optimal tour, we call the method "approximate".
This thesis looks closely at one of the approximate methods, namely sub-optimal tour building. In particular, it focuses on the nearest neighbour algorithm (a greedy algorithm). By being greedy at every step of the procedure, this algorithm returns an approximate solution that is near optimal in terms of solution cost. Next, this greedy algorithm is used in implementing a new algorithm called the "Multi-Degree Greedy Algorithm". By being greedy at half of the procedure steps, this algorithm returns optimal solutions to travelling salesman problems 99% of the time. Thus, this algorithm is an approximate algorithm, designed to run on small-scale travelling salesman problems (n < 20).
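For reference, the nearest neighbour heuristic that the thesis builds on can be sketched as follows (Euclidean instance; the Multi-Degree variant itself is not reproduced here).

    import math

    def nearest_neighbour_tour(points, start=0):
        """Greedy TSP heuristic: from the current city, always visit the
        nearest unvisited city next; returns an approximate (not necessarily
        optimal) tour."""
        unvisited = set(range(len(points))) - {start}
        tour = [start]
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    print(nearest_neighbour_tour([(0, 0), (2, 0), (1, 1), (0, 2)]))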
12

袁志勤 and Chi-kan Yuen. "A double-track greedy algorithm for VLSI channel routing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31220241.

13

Harper, Gavin. "The selection of compounds for screening in pharmaceutical research." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326003.

14

Ghebreamlak, Kidane Asrat. "Analysis of Algorithms for Combinatorial Auctions and Related Problems." Doctoral thesis, Uppsala : Department of Mathematics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6246.

15

Li, Shimin. "Geometric Algorithms for Intervals and Related Problems." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7035.

Abstract:
In this dissertation, we study several problems related to intervals and develop efficient algorithms for them. Interval problems have many applications in practice because many objects, values, and ranges are intervals in nature, such as time intervals, distances, line segments, probabilities, etc. Problems on intervals are gaining attention also because intervals are among the most basic geometric objects, and for the same reason, computational geometry techniques prove useful for attacking these problems. Specifically, the problems we study in this dissertation include the following: balanced splitting on weighted intervals, minimizing the movements of spreading points, dispersing points on intervals, multiple barrier coverage, and separating overlapped intervals on a line. We develop efficient algorithms for these problems, and our results are either the first known solutions or improve on previous work. In the problem of balanced splitting on weighted intervals, we are given a set of n intervals with non-negative weights on a line and an integer k ≥ 1. The goal is to find k points to partition the line into k + 1 segments, such that the maximum sum of the interval weights in these segments is minimized. We give an algorithm that solves the problem in O(n log n) time. Our second problem is on minimizing the movements of spreading points. In this problem, we are given a set of points on a line and we want to spread the points on the line so that the minimum pairwise distance of all points is no smaller than a given value δ. The objective is to minimize the maximum moving distance of all points. We solve the problem in O(n) time. We also solve the cycle version of the problem in linear time. For the third problem, we are given a set of n non-overlapping intervals on a line and we want to place a point on each interval so that the minimum pairwise distance of all points is maximized. We present an O(n) time algorithm for the problem. We also solve its cycle version in O(n) time. The fourth problem is on multiple barrier coverage, where we are given n sensors in the plane and m barriers (represented by intervals) on a line. The goal is to move the sensors onto the line to cover all the barriers such that the maximum moving distance of all sensors is minimized. Our algorithm for the problem runs in O(n² log n log log n + nm log m) time. In a special case where the sensors are all initially on the line, our algorithm runs in O((n + m) log(n + m)) time. Finally, for the problem of separating overlapped intervals, we have a set of n intervals (possibly overlapped) on a line and we want to move them along the line so that no two intervals properly intersect. The objective is to minimize the maximum moving distance of all intervals. We propose an O(n log n) time algorithm for the problem. The algorithms and techniques developed in this dissertation are quite basic and fundamental, so they might be useful for solving other related problems on intervals as well.
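As an illustration of the greedy reasoning behind such interval problems, the sketch below gives the standard feasibility test for the dispersal problem (one point per interval, consecutive points at least d apart); combined with a search over d it yields a simple, if slower, alternative to the dissertation's O(n) algorithm.

    def can_disperse(intervals, d):
        """Greedy feasibility test: given sorted, non-overlapping intervals
        [(l, r), ...], can we place one point per interval with consecutive
        points at least d apart? Place each point as far left as allowed."""
        last = None
        for l, r in intervals:
            p = l if last is None else max(l, last + d)
            if p > r:
                return False
            last = p
        return True

    # The largest feasible d can then be found by, e.g., binary search on d.
    print(can_disperse([(0, 1), (2, 3), (5, 6)], 2.5))  # True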
16

TAKADA, Hiroaki, Hiroyuki TOMIYAMA, Gang ZENG, and Tetsuo YOKOYAMA. "Static Task Scheduling Algorithms Based on Greedy Heuristics for Battery-Powered DVS Systems." Institute of Electronics, Information and Communication Engineers, 2010. http://hdl.handle.net/2237/15037.

17

Jones, Jeffrey S. "Analysis of Algorithms for Star Bicoloring and Related Problems." Ohio University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1426770501.

18

Sundman, Dennis. "Compressed Sensing : Algorithms and Applications." Licentiate thesis, KTH, Kommunikationsteori, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-90074.

Abstract:
The theoretical problem of finding the solution to an underdetermined set of linear equations has for several years attracted considerable attention in the literature. This problem has many practical applications. One example of such an application is compressed sensing (CS), which has the potential to revolutionize how we acquire and process signals. In a general CS setup, few measurement coefficients are available and the task is to reconstruct a larger, sparse signal. In this thesis we focus on algorithm design and selected applications for CS. The contributions of the thesis appear in the following order: (1) We study an application where CS can be used to relax the necessity of fast sampling for power spectral density estimation problems. In this application we show by experimental evaluation that we can gain an order of magnitude in reduced sampling frequency. (2) In order to improve CS recovery performance, we extend simple well-known recovery algorithms by introducing a look-ahead concept. From simulations it is observed that the additional complexity results in significant improvements in recovery performance. (3) For sensor networks, we extend the current framework of CS by introducing a new general network model which is suitable for modeling several CS sensor nodes with correlated measurements. Using this signal model we then develop several centralized and distributed CS recovery algorithms. We find that both the centralized and distributed algorithms achieve a significant gain in recovery performance compared to the standard, disconnected, algorithms. For the distributed case, we also see that as the network connectivity increases, the performance rapidly converges to the performance of the centralized solution.

19

Mor, Stefano Drimon Kurz. "Analysis of synchronizations in greedy-scheduled executions and applications to efficient generation of pseudorandom numbers in parallel." Universidade Federal do Rio Grande do Sul, 2015. http://hdl.handle.net/10183/130529.

Abstract:
We present two contributions to the field of parallel programming. The first contribution is theoretical: we introduce SIPS analysis, a novel approach to estimate the number of synchronizations performed during the execution of a parallel algorithm. Based on the concept of logical clocks, it allows us, on one hand, to deliver new bounds for the number of synchronizations, in expectation, and on the other hand, to design more efficient parallel programs by dynamic adaptation of the granularity. The second contribution is pragmatic: we present an efficient parallelization strategy for pseudorandom number generation, independent of the number of concurrent processes participating in a computation. As an alternative to the use of one sequential generator per process, we introduce a generic API called Par-R, which is designed and analyzed using SIPS. Its main characteristic is the use of a sequential generator that can perform a "jump-ahead" directly from one number to another at an arbitrary distance within the pseudorandom sequence. Thanks to SIPS, we show that, in expectation, within an execution scheduled by work stealing of a "very parallel" program (whose depth or critical path is small compared to the work or number of operations), these jump-ahead operations are rare. Par-R is compared with the parallel pseudorandom number generator DotMix, written for the Cilk Plus dynamic multithreading platform. The theoretical overhead of Par-R compares favorably to DotMix's overhead, which is confirmed experimentally, while not requiring a fixed underlying generator.
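The "jump-ahead" capability that Par-R assumes of its underlying generator can be illustrated with a toy linear congruential generator, where k steps can be taken in O(log k) time; this is only a didactic stand-in, not Par-R's or DotMix's actual API (the constants are Knuth's MMIX LCG).

    class JumpableLCG:
        """Toy sequential generator with O(log k) jump-ahead."""

        def __init__(self, seed, a=6364136223846793005,
                     c=1442695040888963407, m=2**64):
            self.a, self.c, self.m = a, c, m
            self.state = seed % m

        def next(self):
            self.state = (self.a * self.state + self.c) % self.m
            return self.state

        def _pow_geo(self, k):
            """Return (a^k mod m, 1 + a + ... + a^(k-1) mod m) by halving."""
            if k == 0:
                return 1, 0
            p, s = self._pow_geo(k // 2)
            p2, s2 = (p * p) % self.m, (s * (1 + p)) % self.m
            if k % 2:
                return (p2 * self.a) % self.m, (s2 + p2) % self.m
            return p2, s2

        def jump(self, k):
            """Advance k steps at once: x_{n+k} = a^k x_n + c (1 + a + ... + a^(k-1))."""
            ak, geo = self._pow_geo(k)
            self.state = (ak * self.state + self.c * geo) % self.m

    # Jumping matches stepping:
    g1, g2 = JumpableLCG(42), JumpableLCG(42)
    for _ in range(1000):
        g1.next()
    g2.jump(1000)
    assert g1.state == g2.state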
20

Farooq, Farhan. "Optimal Path Searching through Specified Routes using different Algorithms." Thesis, Högskolan Dalarna, Datateknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:du-4530.

Abstract:
Connecting different electrical, network, and data devices with minimum cost along the shortest path is a complex job. In large buildings, where the devices are placed at different locations on different floors and only some specific routes are available for cables and buses, the shortest-path search becomes more complex. The aim of this thesis project is to develop an application which identifies the best path to connect all objects or devices by following the specified routes. To address the above issue we adopted three algorithms (a greedy algorithm, simulated annealing and exhaustive search) and analyzed their results. The given problem is similar to the travelling salesman problem. Exhaustive search is the best algorithm for solution quality, as it checks each and every possibility and gives the exact result, but it is an impractical solution because of its huge time consumption: if the number of objects grows beyond 12, it takes hours to find the shortest path. Simulated annealing emerged with promising results at lower time cost. Because of its probabilistic nature, simulated annealing may return a non-optimal solution, but it gives a near-optimal solution in a reasonable duration. The greedy algorithm is not a good choice for this problem. Simulated annealing therefore proved to be the best algorithm for this problem. The project has been implemented in the C language, taking its input from and storing its output in an Excel workbook.
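As an illustration of the winning method, a compact simulated-annealing sketch for Euclidean TSP-like instances follows; the 2-opt reversal move and the parameter defaults are common textbook choices, not necessarily those used in the project.

    import math, random

    def tour_length(points, tour):
        return sum(math.dist(points[tour[i - 1]], points[tour[i]])
                   for i in range(len(tour)))

    def anneal_tsp(points, steps=20000, temp=10.0, cooling=0.9995, seed=0):
        """Simulated annealing with 2-opt reversals: occasionally accepts a
        worse tour, which lets it escape the local optima that trap a pure
        greedy method."""
        rng = random.Random(seed)
        tour = list(range(len(points)))
        cur = tour_length(points, tour)
        best, best_len = tour[:], cur
        for _ in range(steps):
            i, j = sorted(rng.sample(range(len(tour)), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            cand_len = tour_length(points, cand)
            # Metropolis rule: always accept improvements, sometimes accept worse.
            if cand_len < cur or rng.random() < math.exp((cur - cand_len) / temp):
                tour, cur = cand, cand_len
                if cur < best_len:
                    best, best_len = tour[:], cur
            temp *= cooling
        return best, best_len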
21

Venmani, Daniel Philip. "Multi-operator greedy routing based on open routers." Phd thesis, Institut National des Télécommunications, 2014. http://tel.archives-ouvertes.fr/tel-00997721.

Abstract:
Revolutionary mobile technologies, such as high-speed packet access 3G (HSPA+) and LTE, have significantly increased the mobile data rate over the radio link. While most of the world looks at this revolution as a blessing to their day-to-day life, a little-known fact is that these improvements over the radio access link result in demands for tremendous improvements in bandwidth on the backhaul network. Today's Internet Service Providers (ISPs) and Mobile Network Operators (MNOs) are thus severely impacted as a result of this excessive smartphone usage. The operational costs (OPEX) associated with traditional backhaul methods are rising faster than the revenue generated by the new data services. Building a mobile backhaul network is very different from building a commercial data network. A mobile backhaul network requires (i) QoS-based traffic with strict requirements on delay and jitter and (ii) high availability/reliability. While most ISPs and MNOs have promised the advantages of redundancy and resilience to guarantee high availability, there is still the specter of failure in today's networks. The problems of network failures in today's networks can be quickly but clearly ascertained. The underlying observation is that ISPs and MNOs are still exposed to rapid fluctuations and/or unpredicted breakdowns in traffic; it goes without saying that even the largest operators can be affected. But what if these operators could now put in place designs and mechanisms to improve network survivability and avoid such occurrences? What if mobile network operators could come up with low-cost backhaul solutions while ensuring the required availability and reliability in the networks? With this problem statement in hand, the overarching theme of this dissertation falls within the following scopes: (i) to provide low-cost backhaul solutions; the motivation here is being able to build networks without over-provisioning and then to bring in new resources (link capacity/bandwidth) on occasions of unexpected traffic surges as well as under network failure conditions, particularly to ensure premium services; (ii) to provide uninterrupted communications even at times of network failure, but without redundancy. Here a slightly greater emphasis is laid on tackling 'last-mile' link failures. The scope of this dissertation is therefore to propose, design and model novel network architectures for improving effective network survivability and network capacity, while eliminating network-wide redundancy, within the context of mobile backhaul networks. Motivated by this, we study the problem of how to share the available resources of a backhaul network among its competitors, with whom a Service Level Agreement (SLA) has been concluded. Thus, we present a systematic study of our proposed solutions focusing on a variety of empirical resource sharing heuristics and optimization frameworks. With this background, our work extends towards a novel fault restoration framework which can cost-effectively provide protection and restoration for the operators, enabling them with a parameterized objective function to choose desired paths based on the traffic patterns of their end-customers.
We then illustrate the survivability of backhaul networks with a reduced amount of physical redundancy, achieved by effectively managing geographically distributed backhaul network equipment belonging to different MNOs using 'logically-centralized', physically-distributed controllers, while meeting strict constraints on network availability and reliability.
22

Li, Hui. "Algorithms for the selection of optimal spaced seed sets for transposable element identification." Miami University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=miami1283178156.

23

Kontak, Max. "Novel algorithms of greedy-type for probability density estimation as well as linear and nonlinear inverse problems." Siegen: Universitätsbibliothek der Universität Siegen, 2018. http://d-nb.info/1157094554/34.

24

Mathirajan, M. "Heuristic Scheduling Algorithms For Parallel Heterogeneous Batch Processors." Thesis, Indian Institute of Science, 2000. http://hdl.handle.net/2005/196.

Abstract:
In the last decade, market pressures for a greater variety of products forced a gradual shift from continuous manufacturing to batch manufacturing in various industries. Consequently batch scheduling problems have attracted the attention of researchers in production and operations management. This thesis addresses the scheduling of parallel non-identical batch processors in the presence of dynamic job arrivals, incompatible job-families and non-identical job sizes. This problem abstracts the scheduling of heat-treatment furnace operations of castings in a steel foundry. The problem is of considerable interest in this sector as a large proportion of the total production time is spent in heat treatment processing. This problem is also encountered in other industrial settings such as the burn-in operation in the final testing stage of semiconductor manufacturing, and the manufacturing of steel, ceramics, aircraft parts, footwear, etc. A detailed literature review and personal communications with experts revealed that this class of batch scheduling problems has not been addressed hitherto. A major concern in the management of foundries is to maximize throughput and reduce flow time and work-in-process inventories. Therefore we have chosen the primary scheduling objective to be the utilization of batch processors, and as secondary objectives the minimization of overall flow time and weighted average waiting time per job. This formulation can be considered as an extension of problems studied by Dobson and Nambinadom (1992), Uzsoy (1995), Zee et al. (1997) and Mehta and Uzsoy (1998). Our effort to carefully catalogue the large number of variants of deterministic batch scheduling problems led us to the development of a taxonomy and notation. Not surprisingly, we are able to show that our problem is NP-hard and is therefore in the company of many scheduling problems that are difficult to solve. Initially two heuristic algorithms, one a mathematical programming based heuristic algorithm (MPHA) and the other a greedy heuristic algorithm, were developed. Due to the computational overheads in the implementation of MPHA when compared with the greedy heuristic, we chose to focus on the latter as the primary scheduling methodology. Preliminary experimentation led us to the observation that the performance of greedy heuristics depends critically on the selection of job-families. So eight variants of the greedy heuristic that differ mainly in the decision on "job-family selection" were proposed. These eight heuristics form two sets, {A1, A2, A3, A4} and the modified {MA1, MA2, MA3, MA4}, which differ only in how the "job-family" index, weighted shortest processing time, is computed. For evaluating the performance of the eight heuristics, computational experiments were carried out. The analysis of the experimental data is presented from two perspectives. The goal of the first perspective was to evaluate the absolute quality of the solutions obtained by the proposed heuristic algorithms when compared with estimated optimal solutions. The second perspective was to compare the relative performance of the proposed heuristics. The test problems generated were designed to reflect real-world scheduling problems that we have observed in the steel-casting industry. Three important problem parameters for the test set generation are the number of jobs [n], job-priority [P], and job-family [F]. We considered 5 different levels for n, 2 different levels for P and 2 different levels for F.
The test set reflects that (i) the sizes of the jobs vary uniformly, (ii) there are two batch processors, and (iii) there are five incompatible job-families with different processing times. 15 problem instances were generated for each level of (n, P, F). Out of the many procedures available in the literature for estimating the optimal value of combinatorial optimization problems, we used the procedure based on the Weibull distribution as discussed in Rardin and Uzsoy (2001). For each of the 300 randomly generated problem instances, 15 feasible solutions (i.e., values of the average utilization of batch processors (AUBP)) were obtained using a random decision rule for the first two stages and a best-fit heuristic for the last stage of the scheduling problem. These 15 feasible solutions were used to estimate the optimal value, and are expected to provide the estimated optimal value of the problem instance with a very high probability. Both the average performance and the worst-case performance of the heuristics indicated that the heuristic algorithms A3 and A4 on average yielded better utilization than the estimated optimal value. This indicates that the Weibull-based technique may have yielded conservative estimates of the optimal value. Further, the other heuristic algorithms found inferior solutions when compared with the estimated optimal value, but the deviations were very small. From this, we may infer that all the proposed heuristic algorithms are acceptable. The relative evaluation of the heuristics was in terms of both computational effort and solution quality. For the heuristics, it was clear that the computational burden is low enough on average to run all the proposed heuristics on each problem instance and select the best solution. Also, it is observed that any algorithm from the first set {A1, A2, A3, A4} takes more computational time than any one from the second set {MA1, MA2, MA3, MA4}. Regarding solution quality, the following inferences were made:
- In general the heuristic algorithms are sensitive to the choice of problem factors with respect to all the scheduling objectives.
- The three algorithms A3, MA4 and MA1 are observed to be superior with respect to the scheduling objectives of maximizing average utilization of batch processors (AUBP), minimizing overall flow time (OFT) and minimizing weighted average waiting time (WAWT), respectively. Further, the heuristic algorithm MA1 turns out to be the best choice if we trade off all three objectives AUBP, OFT and WAWT.
Finally, we carried out simple sensitivity analysis experiments in order to understand the influence of some scheduling parameters on the performance of the heuristic algorithms. These were related to one-at-a-time changes in (1) the job-size distribution, (2) the capacities of the batch processors and (3) the processing times of the job-families. From the analyses, it appears that changes in these input parameters do influence performance. The results of the sensitivity analyses can be used to guide the selection of a heuristic for a particular combination of input parameters. For example, if we have to pick a single heuristic algorithm, then MA1 is the best choice when considering the performance and the robustness indicated by the sensitivity analysis. In summary, this thesis examined a problem arising in the scheduling of heat-treatment operations in the steel-casting industry.
We analyzed the computational complexity of this problem and showed that it is NP-hard and therefore unlikely to admit a scalable exact method. Eight variants of a fast greedy heuristic were designed to solve the scheduling problem of interest. Extensive computational experiments were carried out to compare the performance of the heuristics with estimated optimal values (using the Weibull technique) and also for relative effectiveness, and these showed that the heuristics are capable of consistently obtaining near-optimal solutions (relative to the estimated optima) with very low computational burden for the solution of large-scale problems. Finally, a comprehensive sensitivity analysis was carried out to study the influence of a few parameters, changed one at a time, on the performance of the heuristic algorithms. This type of analysis gives users some confidence in the robustness of the proposed heuristics.
25

Altman, James Ross. "A Practical Comprehensive Approach to PMU Placement for Full Observability." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/31200.

Abstract:
In recent years, the placement of phasor measurement units (PMUs) in electric transmission systems has gained much attention. Engineers and mathematicians have developed a variety of algorithms to determine the best locations for PMU installation. But often these placement algorithms are not practical for real systems and do not cover the whole process. This thesis presents a strategy that is practical and addresses three important topics: system preparation, placement algorithm, and installation scheduling. To be practical, a PMU strategy should strive for full observability, work well within the heterogeneous nature of power system topology, and enable system planners to adapt the strategy to meet their unique needs and system configuration. Practical considerations for the three placement topics are discussed, and a specific strategy based on these considerations is developed and demonstrated on real transmission system models.
Master of Science
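The abstract does not spell out the placement algorithm itself, but a common greedy baseline in full-observability studies treats placement as coverage: a PMU at a bus observes that bus and its neighbours, and PMUs are repeatedly placed where they newly observe the most buses. The sketch below shows only that baseline (real formulations add rules such as zero-injection buses), not necessarily the strategy developed in the thesis.

    def greedy_pmu_placement(adjacency):
        """Greedy coverage baseline for PMU placement: repeatedly place a PMU
        at the bus that newly observes the most buses, until every bus is
        observed."""
        unobserved = set(adjacency)
        placement = []
        while unobserved:
            bus = max(adjacency,
                      key=lambda b: len(({b} | adjacency[b]) & unobserved))
            placement.append(bus)
            unobserved -= {bus} | adjacency[bus]
        return placement

    # 5-bus path example: two PMUs suffice.
    net = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(greedy_pmu_placement(net))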
26

Mahadevan, Muralidharan Ananth. "Analysis of Garbage Collector Algorithms in Non-Volatile Memory Devices." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365811711.

27

He, Jeannie. "Automatic Diagnosis of Parkinson’s Disease Using Machine Learning : A Comparative Study of Different Feature Selection Algorithms, Classifiers and Sampling Methods." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301616.

Abstract:
Over the past few years, several studies have been published proposing algorithms for the automated diagnosis of Parkinson's disease using simple exams such as drawing and voice exams. However, while every classifier appears to have been outperformed by some other classifier in at least one study, there appears to be no study of how well different classifiers work with a given feature selection algorithm and sampling method. More importantly, there appears to be no study that compares the proposed feature selection algorithm and/or sampling method against a baseline that uses no feature selection or oversampling. This leaves open the question of which combination of feature selection algorithm, sampling method and classifier is best, as well as what impact feature selection and oversampling have on performance. Given the importance of providing a quick and accurate diagnosis of Parkinson's disease, a comparison is made between different systems of classifier, feature selection and sampling method, with a focus on predictive performance. A system was chosen as the best system for the diagnosis of Parkinson's disease based on its comparative predictive performance on two sets of data: one from drawing exams and one from voice exams.
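A harness for this kind of comparison can be assembled from scikit-learn; the sketch below uses synthetic data as a stand-in for the drawing/voice exam features and compares feature selection × classifier combinations (an oversampling axis, e.g. SMOTE from imbalanced-learn, would be added the same way).

    from itertools import product
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    # Placeholder, mildly imbalanced data standing in for exam features.
    X, y = make_classification(n_samples=200, n_features=30, weights=[0.7],
                               random_state=0)

    selectors = {"none": "passthrough", "kbest": SelectKBest(f_classif, k=10)}
    classifiers = {"svm": SVC(), "forest": RandomForestClassifier(random_state=0)}

    for (s_name, sel), (c_name, clf) in product(selectors.items(),
                                                classifiers.items()):
        pipe = Pipeline([("select", sel), ("clf", clf)])
        score = cross_val_score(pipe, X, y, cv=5, scoring="f1").mean()
        print(f"{s_name:5s} + {c_name:6s}: F1 = {score:.3f}")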
28

Berger, Karl-Eduard. "Placement de graphes de tâches de grande taille sur architectures massivement multicoeurs." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLV026/document.

Abstract:
This Ph.D. thesis is devoted to the study of the mapping problem related to massively parallel embedded architectures. This problem arises from industrial needs such as energy savings and performance demands for synchronous dataflow applications. The problem has to be solved considering three criteria: heuristics should be able to deal with applications of various sizes, they must meet the constraints on processor capacities, and they have to take into account the target architecture topologies. In this thesis, tasks are organized in communication networks, modeled as graphs. In order to determine a way of evaluating the efficiency of the developed heuristics, the mappings obtained by the heuristics are compared to a random mapping. This comparison is used as an evaluation metric throughout this thesis, motivated by the fact that no comparable heuristics could be found in the literature at the time of writing. In order to address this problem, two heuristics are proposed. They are able to solve a dataflow process network mapping problem, where a network of communicating tasks is placed onto a set of processors with limited resource capacities, while minimizing the overall communication bandwidth between processors. They are applied to task graphs where task and edge weights are set to one. The first heuristic, denoted Task-wise Placement, places tasks one after another using a notion of task affinities. The second algorithm, named Subgraph-wise Placement, gathers tasks into small groups, then places the groups on processors using a notion of affinity between groups and processors. These algorithms are tested on task graphs with grid or logic-gate-network topologies. The obtained results are then compared to an algorithm from the literature that maps task graphs of moderate size onto massively parallel architectures. In addition, the random-mapping metric is used to evaluate the results of both heuristics. Then, to address problems found in industrial cases, the application cases are widened to task graphs whose task and edge weights take values similar to those found in industry. A progressive construction heuristic named Regret Based Approach, based on game theory, is proposed. This heuristic maps tasks one after another. The cost of mapping each task, given the already mapped tasks, is computed. The task-selection process is based on a notion of regret from game theory: the task with the highest regret for not being placed is identified and placed first. In order to check the strength of the algorithm, many types of task graphs (grids, logic gate networks, series-parallel, random, sparse matrices) of various sizes are generated. Task and edge weights are randomly chosen using a bimodal law parameterized to yield values similar to those of industrial applications. The obtained results are compared to Task-wise Placement, specially adapted for non-unitary values. Moreover, results are evaluated using the metric defined above.
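A simplified reading of the regret rule can be sketched as follows; the cost model (cut edge weight to already placed neighbours) and the assumption that a capacity-feasible processor exists for every task are illustrative stand-ins for the richer model used in the thesis, and all names are hypothetical.

    def regret_placement(tasks, edges, capacities):
        """Simplified regret-based greedy mapping: each round, compute for
        every unplaced task the cost gap between its best and second-best
        processor, then place the task with the largest gap (the one we
        would most regret deferring)."""
        placement, load = {}, {p: 0 for p in capacities}

        def cost(task, proc):
            # Total weight of edges cut between `task` on `proc` and its
            # already placed neighbours (unplaced neighbours cost nothing yet).
            return sum(w for (u, v), w in edges.items()
                       if task in (u, v)
                       and placement.get(v if u == task else u, proc) != proc)

        unplaced = dict(tasks)   # task -> size
        while unplaced:
            best_choice = None
            for t, w in unplaced.items():
                opts = sorted((cost(t, p), p) for p in capacities
                              if load[p] + w <= capacities[p])
                regret = (opts[1][0] - opts[0][0]) if len(opts) > 1 else float("inf")
                if best_choice is None or regret > best_choice[0]:
                    best_choice = (regret, t, opts[0][1])
            _, t, p = best_choice
            placement[t] = p
            load[p] += unplaced.pop(t)
        return placement

    tasks = {"a": 1, "b": 1, "c": 1}
    edges = {("a", "b"): 5, ("b", "c"): 1}
    print(regret_placement(tasks, edges, {"p0": 2, "p1": 2}))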
29

Nguyen, Minh-Lien Jeanne. "Estimation non paramétrique de densités conditionnelles : grande dimension, parcimonie et algorithmes gloutons." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS185/document.

Abstract:
We consider the problem of conditional density estimation in moderately large dimen- sions. Much more informative than regression functions, conditional densities are of main interest in recent methods, particularly in the Bayesian framework (studying the posterior distribution, find- ing its modes...). After recalling the estimation issues in high dimension in the introduction, the two following chapters develop on two methods which address the issues of the curse of dimensionality: being computationally efficient by a greedy iterative procedure, detecting under some suitably defined sparsity conditions the relevant variables, while converging at a quasi-optimal minimax rate. More precisely, the two methods consider kernel estimators well-adapted for conditional density estimation and select a pointwise multivariate bandwidth by revisiting the greedy algorithm RODEO (Regular- isation Of Derivative Expectation Operator). The first method having some initialization problems and extra logarithmic factors in its convergence rate, the second method solves these problems, while adding adaptation to the smoothness. In the penultimate chapter, we discuss the calibration and nu- merical performance of these two procedures, before giving some comments and perspectives in the last chapter
APA, Harvard, Vancouver, ISO, and other styles
30

Turlapaty, Sandhya. "Implementation and Performance Comparison of Some Heuristic Algorithms for Block Sorting." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/816.

Full text
Abstract:
An implementation framework has been developed in this thesis for a well-known APX-hard combinatorial optimization problem known as Block Sorting. The motivation for the study of this problem comes from applications such as computational biology and optical character recognition. While existing Block Sorting research has focused on the theoretical development and analysis of several approximation algorithms, little or no work has been carried out thus far on the implementation of the proposed approximation algorithms. Conceptualizing an implementation framework and illustrating its use by experimenting with the existing approximation algorithms provides a means for discovering newer approaches to handling this important problem. As the main contribution, the research in this thesis provides a new greedy algorithm for Block Sorting in which each block move either reduces the number of blocks by two or three, or reduces by one the number of reversals or inversions in the original permutation. Experimental results for all algorithms are also provided, along with a comparison of their performance using the number of block moves and approximation ratios as performance metrics when sorting permutations of a given order, and as the order of the permutations is varied. Preliminary results from the experimentation were shared at the 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE) [1]. To the best of our knowledge, this is the first work focused on the implementation and experimental performance analysis of any algorithm for Block Sorting. We believe the results presented in this thesis will be useful for researchers and practitioners working in this area.
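
For intuition, a block is a maximal run of consecutive integers occupying consecutive positions in the permutation, and the permutation is sorted when a single block remains. A small sketch of this bookkeeping follows (illustrative only, not code from the thesis):

def blocks(perm):
    # Split a permutation of 1..n into blocks: maximal runs of consecutive
    # integers in consecutive positions, e.g. [3,4,5,1,2] -> [[3,4,5],[1,2]].
    out = [[perm[0]]]
    for x in perm[1:]:
        if x == out[-1][-1] + 1:
            out[-1].append(x)
        else:
            out.append([x])
    return out

# A block move removes one block and reinserts it elsewhere; the greedy rule
# described above prefers moves that merge blocks, removing two or three per move.
print(blocks([3, 4, 5, 1, 2]))  # [[3, 4, 5], [1, 2]]: one merging move sorts it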
APA, Harvard, Vancouver, ISO, and other styles
31

Renaud-Goud, Paul. "Energy-aware scheduling : complexity and algorithms." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00744247.

Full text
Abstract:
In this thesis we have tackled a few scheduling problems under energy constraints, since the energy issue is becoming crucial for both economic and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria: period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network. We try to minimize the energy consumption in a dynamic framework. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we come back to streaming applications, but in the form of series-parallel graphs, and try to map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant allows us to derive heuristics for the most general problem, whose NP-completeness has been proven. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared to the classical XY routing, and develop new routing heuristics. In the last chapter, we compare the performance of different algorithms from the literature that tackle the problem of mapping DAG applications to minimize the energy consumption.
APA, Harvard, Vancouver, ISO, and other styles
32

Muñoz, Jugo Cynthia Mariela. "Algoritmos Greedy." Universidad Peruana de Ciencias Aplicadas - UPC, 2007. http://hdl.handle.net/10757/272784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Nguyen, Thi Thanh. "Algorithmes gloutons orthogonaux sous contrainte de positivité." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0133/document.

Full text
Abstract:
Non-negative sparse approximation arises in many application fields such as biomedical engineering, fluid mechanics, astrophysics, and remote sensing. Some classical sparse algorithms can be straightforwardly adapted to deal with non-negativity constraints. On the contrary, the non-negative extension of orthogonal greedy algorithms such as OMP and OLS is a challenging issue, since the unconstrained least squares subproblems are replaced by non-negative least squares subproblems which do not have closed-form solutions. In the literature, non-negative orthogonal greedy (NNOG) algorithms are often considered to be slow, and some recent works exploit approximate schemes to derive efficient recursive implementations. In this thesis, NNOG algorithms are introduced as heuristic solvers dedicated to L0 minimization under non-negativity constraints. It is first shown that this L0 minimization problem is NP-hard. The second contribution is a unified framework for NNOG algorithms together with an exact and fast implementation, where the non-negative least squares subproblems are solved using the active-set algorithm with warm start initialisation. The proposed implementation significantly reduces the cost of NNOG algorithms and appears to be more advantageous than existing approximate schemes. The third contribution consists of a unified K-step exact support recovery analysis of NNOG algorithms when the mutual coherence of the dictionary is lower than 1/(2K-1). This is the first analysis of this kind.
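
To make the structure of such algorithms concrete, here is a minimal sketch of a non-negative analogue of OMP. It is a naive variant for illustration: it calls a full NNLS solver at every iteration, rather than the warm-started active-set scheme proposed in the thesis.

import numpy as np
from scipy.optimize import nnls

def nn_omp(A, y, k):
    # Greedily grow a support of size at most k; at each step, solve the
    # non-negative least squares subproblem restricted to the support.
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        corr = A.T @ residual
        corr[support] = -np.inf           # do not reselect chosen atoms
        j = int(np.argmax(corr))
        if corr[j] <= 0:                  # negative correlations cannot help
            break                         # under the non-negativity constraint
        support.append(j)
        coef, _ = nnls(A[:, support], y)  # no closed-form solution here
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x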
APA, Harvard, Vancouver, ISO, and other styles
34

Kadri, Ahmed Abdelmoumene. "Simulation and optimization models for scheduling and balancing the public bicycle-sharing systems." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0268.

Full text
Abstract:
Les enjeux du développement durable, le réchauffement climatique, la pollution dans les grandes villes, la congestion et les nuisances sonores, l'augmentation des prix de carburants, sont parmi des nombreux facteurs qui incitent les pays développés à l'innovation dans les transports publics. Dans ce contexte, l'introduction des systèmes de vélos en libre-service, au cours de ces dernières années, est une des solutions adoptées par de nombreuses grandes villes. Malgré leur succès fulgurant dans le monde entier, il existe peu d'études fondamentales sur ce type transport urbain. Pourtant, leur exploitation et leur management par des opérateurs soulèvent de nombreuses questions notamment d'ordre opérationnel. Dans ce contexte, cette thèse s'adresse aux problèmes d'ordonnancement et de rééquilibrage des stations de vélos en libre-service. Ce sont des problèmes cruciaux pour la qualité de service et la viabilité économique de tels systèmes. Le rééquilibrage consiste à redistribuer le nombre de vélos entre les différentes stations afin de satisfaire au mieux les demandes des usagers. Cette régulation se fait souvent par le biais de véhicules spécifiques qui font des tournées autour des différentes stations. Ainsi, deux problèmes d'optimisation difficiles se posent : la recherche de la meilleure tournée du véhicule de régulation (ordonnancement de la tournée) et la détermination des nombres de véhicules à utiliser (rééquilibrage des stations). Dans cette optique, les travaux de cette thèse constituent une contribution à la modélisation et à l'optimisation de performances des systèmes de vélos en libre-service en vue de leur rééquilibrage et leur ordonnancement. Plusieurs méthodes d'optimisation et ont été développées et testées. De telles méthodes incorporent différentes approches de simulation ou d'optimisation comme les réseaux de Petri, les algorithmes génétiques, les algorithmes gloutons, les algorithmes de recherche par voisinage, la méthode arborescente de branch-and-bound, l'élaboration des bornes supérieures et inférieures, etc. Différentes facettes du problème ont été étudiées : le cas statique, le cas dynamique, l'ordonnancement et le rééquilibrage avec un seul (ou multiple) véhicule(s). Afin de montrer la pertinence de nos approches, la thèse comporte également plusieurs applications réelles et expérimentations
In our days, developed countries have to face many public transport problems, including traffic congestion, air pollution, global oil prices and global warming. In this context, Public Bike sharing systems are one of the solutions that have been recently implemented in many big cities around the world. Despite their apparent success, the exploitation and management of such transportation systems imply crucial operational challenges that confronting the operators while few scientific works are available to support such complex dynamical systems. In this context, this thesis addresses the scheduling and balancing in public bicycle-sharing systems. These problems are the most crucial questions for their operational efficiency and economic viability. Bike sharing systems are balanced by distributing bicycles from one station to another. This procedure is generally ensured by using specific redistribution vehicles. Therefore, two hard optimization problems can be considered: finding a best tour for the redistribution vehicles (scheduling) and the determination of the numbers of bicycles to be assigned and of the vehicles to be used (balancing of the stations). In this context, this thesis constitutes a contribution to modelling and optimizing the bicycle sharing systems' performances in order to ensure a coherent scheduling and balancing strategies. Several optimization methods have been proposed and tested. Such methods incorporate different approaches of simulation or optimization like the Petri nets, the genetic algorithms, the greedy search algorithms, the local search algorithms, the arborescent branch-and-bound algorithms, the elaboration of upper and lower bounds, ... Different variants of the problem have been studied: the static mode, the dynamic mode, the scheduling and the balancing by using a single or multiple vehicle(s). In order to demonstrate the coherence and the suitability of our approaches, the thesis contains several real applications and experimentations
APA, Harvard, Vancouver, ISO, and other styles
35

Balakrishnan, Ramasamy. "A greedy algorithm based router." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq37475.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Dore, Lucia. "Matroidi e Algoritmo Greedy." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20798/.

Full text
Abstract:
The first chapter of this work presents nine different axiomatic definitions of a matroid, which are shown to be equivalent through the concept of cryptomorphism. In the second chapter we study graphic matroids as an example of matroids. By 'graphic matroid' we mean the structure taken on by a graph, which can be verified to satisfy the various axiom systems introduced in the previous chapter. Particular attention is given to the Tutte polynomial, originally defined for graphs and later extended to matroids. The last chapter studies greedy algorithms for matroids, with examples such as Kruskal's algorithm and Prim's algorithm for finding minimum-weight spanning trees of a given graph. The concept of a greedoid is also introduced, with the aim of proving that a greedy algorithm solves an optimization problem if and only if the underlying structure is precisely that of a greedoid.
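
As a concrete instance of the greedy principle on the graphic matroid, here is a compact sketch of Kruskal's algorithm (textbook material, included for illustration):

def kruskal(n, edges):
    # Kruskal's greedy algorithm: scan edges by increasing weight and keep an
    # edge whenever it joins two components, i.e. whenever it keeps the selected
    # set independent in the graphic matroid. edges is a list of (weight, u, v)
    # with vertices in 0..n-1.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                       # independent: no cycle created
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))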
APA, Harvard, Vancouver, ISO, and other styles
37

Jurčík, Lukáš. "Evoluční algoritmy při řešení problému obchodního cestujícího." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2014. http://www.nusl.cz/ntk/nusl-224447.

Full text
Abstract:
This diploma thesis deals with evolutionary algorithms applied to the travelling salesman problem (TSP). The first section lays the theoretical foundations of graph theory and computational complexity theory. The next section contains a description of the chosen optimization algorithms. The aim of the diploma thesis is to implement an application that solves the TSP using evolutionary algorithms.
APA, Harvard, Vancouver, ISO, and other styles
38

Zainiev, Timur. "Quantum mechanics and the greedy algorithm." Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kopřiva, Jan. "Srovnání algoritmů při řešení problému obchodního cestujícího." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-222126.

Full text
Abstract:
This master's thesis deals with the innovation of the logistics module of an ERP information system. The innovation is based on the implementation of heuristic algorithms that solve the Travelling Salesman Problem (TSP). The software MATLAB is used for the analysis and testing of these algorithms. The goal of the thesis is to compare the selected algorithms by criteria relevant to business use: solution accuracy, computation speed and memory demands.
APA, Harvard, Vancouver, ISO, and other styles
40

Naeimi, Helia. "A greedy algorithm for tolerating defective crosspoints in nanoPLA design." Diss., Pasadena, Calif.: California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-05052005-164226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Zakaria, Rabih. "Optimization of the car relocation operations in one-way carsharing systems." Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0281/document.

Full text
Abstract:
Carsharing is a mobility service that offers the same advantages as private cars but without the burden of ownership. Users can have access to vehicles on the go, with or without reservation. Each station has a maximum number of parking places. In a one-way carsharing system, users can pick up a car at one station and drop it at any other station. The number of available cars in each station varies with the departures and arrivals of cars at each time of the day. The demand for taking or returning cars at each station is often asymmetric and fluctuates during the day. Therefore, some stations accumulate cars and reach their maximum capacity, preventing newly arriving cars from finding a parking place, while other stations become empty, which leads to the rejection of new user demands to take a car. Users expect cars to be available in stations when they need them, and they expect to find a free parking place at the destination station when they want to return the rented car. However, maintaining this level of service is not an easy task. For this sake, carsharing operators recruit employees to relocate cars between the stations in order to satisfy the users' demands. Our work concerns the optimization of the car relocation operations in order to efficiently redistribute the cars over the stations with regard to user demands, which are time and space dependent. In one-way carsharing systems, the relocation problem is technically more difficult than the relocation problem in bikesharing systems: in the latter, trucks can move several bikes at the same time, which is not possible in a carsharing system because of the size of cars and the difficulty of loading and unloading them. These operations increase the cost of operating the carsharing system. As a result, optimizing these operations is crucial in order to reduce the operator's cost. In this thesis, we model this problem as an integer linear program. We then present three different car relocation policies that we implement in a greedy search algorithm. The comparison between the three policies shows that car relocation operations that do not consider future demands are not effective in reducing the number of rejected demands. Results prove that the solutions provided by our greedy algorithm, when using a good policy, are competitive with CPLEX solutions. Furthermore, adding stochastic modifications to the input data shows that the robustness of the two presented approaches is highly dependent on the input demand, even after adding threshold-value constraints. After that, analysis of variance (ANOVA) and multi-linear regression methods were applied to the dataset in order to build a global model estimating the number of rejected demands. Finally, we developed and compared two multi-objective evolutionary algorithms, NSGA-II and a memetic algorithm, to deal with the decisional aspect of the car relocation problem.
APA, Harvard, Vancouver, ISO, and other styles
42

Burrowbridge, Sarah Elizabeth. "Optimal Allocation of Satellite Network Resources." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/36385.

Full text
Abstract:
This work examines a straightforward solution to a problem of satellite network resource allocation by exploring the application of an optimal algorithm to a subset of the general scheduling problem. Making some basic simplifications, such as limiting the mission scenarios to Low Earth Orbiting satellites, allows the effective application of the rigorous Greedy Activity-Selector algorithm. Tools such as Analytical Graphics' Satellite Tool Kit and MATLAB 5.0 are integrated to assist in the implementation of the algorithm. Several examples are presented in order to illustrate the effectiveness of the process.
Master of Science
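
The Greedy Activity-Selector is the textbook interval-scheduling greedy. A minimal sketch follows; representing contact windows as (start, finish) pairs is an illustrative assumption, not the thesis's STK/MATLAB implementation.

def greedy_activity_selector(activities):
    # Classic greedy activity selection: sort by finish time and repeatedly
    # take the first activity compatible with the last one chosen. For LEO
    # contact scheduling, an 'activity' would be a visibility window.
    selected, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

# Example: overlapping contact windows (in minutes from epoch)
print(greedy_activity_selector([(0, 30), (10, 20), (25, 40), (35, 50)]))
# -> [(10, 20), (25, 40)]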
APA, Harvard, Vancouver, ISO, and other styles
43

Moussa, Ibrahim. "Modèles de résolution approchée et efficace pour les problèmes des réseaux de transport et de télécommunication." Thesis, Amiens, 2015. http://www.theses.fr/2015AMIE0009/document.

Full text
Abstract:
This thesis focuses on solving NP-hard combinatorial optimization problems using approximate methods. Two practical application areas are targeted: on the one hand, the general vehicle routing problem, with a variant concerning tour planning for a team of vehicles; on the other hand, the multicast session management problem in a telecommunication network, addressed here from the broader perspective of clustering in a bipartite graph. Both applications are of obvious interest, from the fundamental point of view of resolution methods, which must keep progressing in the face of new challenges, as well as from the point of view of potential industrial benefits. Solving such problems usually has two phases: first, one or more mathematical models are defined and possibly compared, in order to choose the most effective one given the available solution tools; second, one can either use a generic resolution paradigm, such as a linear programming solver, or specialize an algorithm by including specific heuristics and knowledge in order to optimize its performance. This thesis follows the second approach, which is often necessary when the problems addressed become complex and/or large and more efficient algorithms are needed.
APA, Harvard, Vancouver, ISO, and other styles
44

Althoby, Haeder Younis Ghawi. "Theoritical and numerical studies on the graph partitioning problem." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC233/document.

Full text
Abstract:
Given a connected undirected graph G=(V,E) and a positive integer β(n), where n is the number of vertices of G, the vertex separator problem (VSP) is to find a partition of V into three classes A, B and C such that there is no edge between A and B, max{|A|,|B|} is less than or equal to β(n), and |C| is minimum. In this thesis, we consider an integer programming formulation of this problem, describe some valid inequalities, and use these results to develop algorithms based on a neighborhood scheme. We also study the st-connected vertex separator problem. Let s and t be two non-adjacent vertices of V. An st-connected separator in the graph G is a subset S of V\{s,t} that induces a connected subgraph and whose removal disconnects s from t. The st-connected vertex separator problem consists in finding such a subset of minimum cardinality. We propose three formulations for this problem, give some valid inequalities for the associated polyhedron, and present an efficient heuristic to solve it.
APA, Harvard, Vancouver, ISO, and other styles
45

Neas, Charles Bennett. "A Greedy Search Algorithm for Maneuver-Based Motion Planning of Agile Vehicles." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/36213.

Full text
Abstract:
This thesis presents a greedy search algorithm for maneuver-based motion planning of agile vehicles. In maneuver-based motion planning, vehicle maneuvers are solved offline and saved in a library to be used during motion planning. From this library, a tree of possible vehicle states can be generated through the search space. A depth-first, library-based algorithm called AD-Lib is developed and used to quickly provide feasible trajectories along the tree. AD-Lib combines greedy search techniques with hill climbing and effective backtracking to guide the search process rapidly towards the goal. Using simulations of a four-thruster hovercraft, AD-Lib is compared to existing suboptimal search algorithms in both known and unknown environments with static obstacles. AD-Lib is shown to be faster than existing techniques, at the expense of increased path cost. The motion planning strategy of AD-Lib along with a switching controller is also tested in an environment with dynamic obstacles.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
46

Hossain, Mohammad Forhad. "Spanning Tree Approach On The Snow Cleaning Problem." Thesis, Högskolan Dalarna, Datateknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:du-4847.

Full text
Abstract:
Snow cleaning is one of the important tasks of wintertime in Sweden, and every year the government spends a huge amount of money on it. In this thesis we generate a shortest road network of the city and place depots at different locations for snow cleaning. We generate the shortest road network using a minimum spanning tree algorithm and find the depot positions using a greedy heuristic. When snow falls, vehicles start from the depots and clear the snow from the whole road network of the city. We formulate two types of model: an economic model and an efficient model. The economic model provides a good economical solution to the problem and uses fewer vehicles. The efficient model provides a solution that takes less time to clean the entire road network.
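
The abstract does not spell out the greedy rule used to position the depots; one common greedy choice for such placement steps is the farthest-point heuristic, sketched below under that assumption:

def greedy_depots(dist, k):
    # Farthest-point greedy heuristic for choosing k depot locations (one
    # common greedy rule for facility placement; the thesis does not spell out
    # its exact rule). dist[i][j] is the road-network distance between nodes
    # i and j, e.g. shortest paths along the spanning tree.
    n = len(dist)
    depots = [0]                        # arbitrary first depot
    while len(depots) < k:
        # pick the node farthest from its nearest existing depot
        nxt = max(range(n), key=lambda v: min(dist[v][d] for d in depots))
        depots.append(nxt)
    return depots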
APA, Harvard, Vancouver, ISO, and other styles
47

Lakshminarayanan, Srivathsan. "Nature Inspired Grey Wolf Optimizer Algorithm for Minimizing Operating Cost in Green Smart Home." University of Toledo / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1438102173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Moussallam, Manuel. "Représentations redondantes et hiérarchiques pour l'archivage et la compression de scènes sonores." Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00834272.

Full text
Abstract:
The purpose of this thesis is the analysis and automatic processing of large volumes of audio data. More specifically, we are interested in archiving, a task that covers at least two problems: compressing the data and indexing their content. Each of these problems defines its own objectives, sometimes conflicting ones, which are therefore difficult to take into account simultaneously. At the heart of this thesis lies the intention to build a coherent framework for both the compression and the indexing of audio archives. Sparse representations of signals in redundant dictionaries have recently shown their ability to fulfil such a function, so their properties, as well as the methods and algorithms for obtaining them, are studied in the first part of this thesis. The rather constraining application setting (data volume) leads us to choose, among the latter, iterative algorithms, also called greedy algorithms. A first contribution of this thesis is the proposal of variants of the celebrated Matching Pursuit based on random and dynamic sub-sampling of dictionaries. Their adaptation to structured time-frequency dictionaries (unions of local cosine bases) promises a significant improvement in compression performance on sound scenes. These new algorithms come with an original statistical model of their convergence properties, built with tools borrowed from extreme value theory. The other contributions of this thesis tackle the second half of the archiving problem: indexing. The same framework is considered, this time to expose the different levels of structure in the data, starting with the detection of redundancies and repetitions. At large scale, a robust system is proposed for detecting recurring patterns in a radio stream by fingerprint comparison; its comparative performance in an evaluation campaign of the QUAERO project confirms the relevance of this approach. Exploiting these structures in contexts other than compression is also considered; in particular, we propose an application to redundancy-informed source separation to illustrate the variety of processing that the chosen framework allows. Bringing the different elements together then makes it possible to envision an archiving system that meets the constraints through a hierarchy of objectives and treatments.
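
In the spirit of the Matching Pursuit variants described above, here is a minimal sketch of MP with random sub-sampling of the dictionary atoms. It is illustrative only; the thesis's exact sampling schemes and structured local-cosine dictionaries are not reproduced.

import numpy as np

def random_subdict_mp(D, y, n_iter, frac=0.25, rng=None):
    # Matching Pursuit where, at each iteration, only a random fraction of the
    # atoms of D (columns, assumed unit-norm) is searched for the best match.
    rng = rng or np.random.default_rng(0)
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        sub = rng.choice(D.shape[1], size=max(1, int(frac * D.shape[1])),
                         replace=False)
        corr = D[:, sub].T @ residual
        j = sub[int(np.argmax(np.abs(corr)))]
        a = D[:, j] @ residual            # projection on the chosen atom
        coeffs[j] += a
        residual = residual - a * D[:, j]
    return coeffs, residual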
APA, Harvard, Vancouver, ISO, and other styles
49

De, Souza Bento Da Silva Pedro Paulo. "On the mapping of distributed applications onto multiple Clouds." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN089/document.

Full text
Abstract:
The Cloud has become a very popular platform for deploying distributed applications. Today, virtually any credit card holder can have access to Cloud services. There are many different ways of offering Cloud services to customers. In this thesis we especially focus on the Infrastructure as a Service (IaaS) model, which usually proposes virtualized computing resources to customers in the form of virtual machines (VMs). Thanks to its attractive pay-as-you-use cost model, it is easier for customers, especially small and medium companies, to outsource hosting infrastructures and benefit from savings related to upfront investments and maintenance costs. Customers also gain access to features such as scalability, availability, and reliability, which previously were almost exclusive to large companies. To deploy a distributed application, a Cloud customer must first consider the mapping between her application (or its parts) and the target infrastructure. She needs to take into consideration cost, resource, and communication constraints to select the most suitable set of VMs from private and public Cloud providers. However, defining a mapping manually may be a challenge in large-scale or time-constrained scenarios, since the number of possible configurations explodes. Furthermore, when automating this process, scalability issues must be taken into account, given that this mapping problem is a generalization of the graph homomorphism problem, which is NP-complete. In this thesis we address the problem of calculating initial and reconfiguration placements for distributed applications over possibly multiple Clouds. Our objective is to minimize renting and migration costs while satisfying applications' resource and communication constraints. We concentrate on the mapping between applications and the Cloud infrastructure. Using an incremental approach, we split the problem into three parts and propose efficient heuristics that can compute good-quality placements very quickly for small and large scenarios. These heuristics are based on graph partition and vector packing heuristics and have been extensively evaluated against state-of-the-art approaches such as MIP solvers and meta-heuristics. We show through simulations that the proposed heuristics manage to compute in a few seconds solutions that would take other approaches many hours or days.
APA, Harvard, Vancouver, ISO, and other styles
50

Banissi, Ebad K. "Conic drawing algorithms with grey scale." Thesis, Brunel University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.290869.

Full text
APA, Harvard, Vancouver, ISO, and other styles