Dissertations / Theses on the topic 'Optical complexity'
Consult the top 50 dissertations / theses for your research on the topic 'Optical complexity.'
Saad, Mohamed Elsayed Mostafa, and Luo Zhi-Quan. "Design of optical networks: performance bounds, complexity and algorithms." McMaster University, 2004. (*McMaster only.)
Lim, Dong Sung. "Phase singularities and spatial-temporal complexity in optical fibres." Thesis, Heriot-Watt University, 1995. http://hdl.handle.net/10399/772.
Li, Yunxi. "Study of low-complexity modal multiplexing for optical communication links." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708925.
Post, Arthur David, 1954-. "Complexity of optical computing paradigms: Computational implications and a suggested improvement." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/291592.
Barrami, Fatima. "Low-complexity direct-detection optical OFDM systems for high data rate communications." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT057/document.
A possible approach to maximize the data rate per wavelength is to employ the high-spectral-efficiency discrete multitone (DMT) modulation. The work presented in this thesis mainly focuses on optimizing the power consumption and cost of DMT, which are the major obstacles to its market development. Within this context, we have first developed novel techniques that discard the use of Hermitian symmetry in DMT modulations, thus significantly reducing the power consumption and the system cost. We have next proposed an asymmetric linear companding algorithm that reduces the optical power of conventional DCO-OFDM modulation with moderate complexity. A new VCSEL behavioural model based on the VCSEL quasi-static characteristic was also developed to accurately evaluate the VCSEL impact on DMT modulations. Finally, we built an experimental system to validate the proposed techniques. Several simulation and measurement results are provided.
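For context, the snippet below is a minimal sketch (illustrative only, not the thesis's technique) of the conventional Hermitian-symmetry step that such low-complexity designs seek to eliminate: the complex QAM symbols are mirrored across the N-point IFFT so that its output is real-valued and, after DC biasing, can drive an intensity modulator. The FFT size, bias, and QPSK mapping are assumptions.

```python
# Conventional Hermitian-symmetry construction of one real DCO-OFDM/DMT symbol.
import numpy as np

def dmt_symbol(qam, n_fft=64, bias=2.0):
    """Map N/2-1 complex QAM symbols to one real, DC-biased time-domain symbol."""
    assert len(qam) == n_fft // 2 - 1
    spectrum = np.zeros(n_fft, dtype=complex)
    spectrum[1:n_fft // 2] = qam                    # data subcarriers
    spectrum[n_fft // 2 + 1:] = np.conj(qam[::-1])  # Hermitian mirror
    x = np.fft.ifft(spectrum).real                  # real by construction
    return x + bias                                 # DC bias toward a non-negative drive

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=62)
qam = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)  # naive QPSK
print(dmt_symbol(qam)[:4])
```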
Nadal, Reixats Laia. "Design and implementation of low complexity adaptive optical OFDM systems for software-defined transmission in elastic optical networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284714.
Due to the growth of IP traffic and the demand for broadband services, optical networks are undergoing significant changes. Advanced modulation formats, implemented at the digital signal processing level, enable high-speed transmission, while at the network layer the optical spectrum is divided into flexible slots occupying the necessary bandwidth according to the traffic demand. High-speed transmission becomes more tangible once all these functionalities are enabled. Hence, one of the main objectives of this thesis is to introduce flexibility into the system. Furthermore, minimizing the cost and maximizing the spectral efficiency of the system are two crucial aspects to consider in the design of the transmitter and receiver. This thesis investigates the use of Optical Orthogonal Frequency Division Multiplexing (OFDM) based on the fast Fourier transform (FFT) and the fast Hartley transform (FHT) in order to design a flexible system capable of high-speed transmission over optical fibre. Accordingly, different low-cost solutions suitable for elastic optical networks are proposed. First, systems based on direct detection are investigated and proposed in order to support present and future demand. After an introduction to the principles of OFDM and its application to optical systems, the main impairments of this modulation are introduced. In particular, the Peak-to-Average Power Ratio (PAPR) and clipping and quantization noise are presented as limitations of OFDM systems. PAPR reduction techniques are analyzed in order to reduce the impact of these impairments, and low-complexity PAPR reduction techniques based on the FHT are also proposed. Finally, Bit Loading (BL) and Power Loading (PL) algorithms are used to combat chromatic dispersion when transmitting over the optical channel. With the implementation of the BL and PL algorithms, flexible transmitters and receivers can be designed, adapting the data rate to the current demand and network conditions. In particular, OFDM symbols are created by mapping each subcarrier with a different modulation format according to the channel profile. The system is experimentally validated, showing the performance and the benefits of including flexibility in order to facilitate high-speed transmission and meet the needs of the future Internet.
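As a small illustration of the bit-loading idea used above, here is a hypothetical threshold-based allocation: subcarriers with better estimated SNR carry denser constellations. The SNR thresholds and channel profile are illustrative assumptions, not the thesis's algorithm.

```python
# Threshold-based bit loading: assign a constellation size to each subcarrier
# from its estimated SNR.
import numpy as np

def bit_loading(snr_db, thresholds=((22, 6), (16, 4), (9, 2), (3, 1))):
    """Return bits per subcarrier: 6=64-QAM, 4=16-QAM, 2=QPSK, 1=BPSK, 0=off."""
    bits = np.zeros(len(snr_db), dtype=int)
    for thr, b in thresholds:                    # thresholds in decreasing order
        bits[(bits == 0) & (np.asarray(snr_db) >= thr)] = b
    return bits

snr_profile = np.linspace(25, 0, 16)             # toy channel: SNR falls with frequency
print(bit_loading(snr_profile))
```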
Bilbeisi, Hana. "Time-slotted scheduling for agile all-photonics networks : performance and complexity." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112558.
Full textSachs, Antonio de Campos. "Rede auto-organizada utilizando chaveamento de pacotes ópticos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-05082011-152444/.
The Optical Packet Switching (OPS) technology usually involves complex and expensive components, relegating its viability to the future. Nevertheless, OPS is a good option for improving granularity at high bit rates, as well as for flexible and fast bandwidth distribution. This thesis proposes simplifications of optical switching devices that, besides bringing their viability closer, enable the deployment of a highly scalable and self-organized network architecture. The proposed network operates without resource reservation or prior path establishment; routes are defined packet-by-packet in a real-time deflection routing procedure. With simple local functions, the network operates with desirable performance characteristics such as high scalability and an automatic protection system; these are treated as Emerging Functions. For the network characterization, a statistical analytical model validated by simulation is presented. In the investigation of the automatic protection functions, results for a 256-node network showed that the increase in the mean number of hops occurs only around the failure neighborhood. To demonstrate the switch viability, a prototype was fabricated using components already available on the market. The switching time obtained was below two nanoseconds, compatible with optical packet switching technology.
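The following is a minimal sketch of one hop of deflection routing as described in the abstract: a node forwards each packet on an output that shortens the path to its destination, and deflects it to any free port when those outputs are busy. The grid topology and tie-breaking are illustrative assumptions.

```python
# One routing decision of hop-by-hop deflection routing on a 2D grid.
import random

def route_step(node, dest, busy_ports):
    """Pick an output direction; deflect when the shortest-path ports are busy."""
    (x, y), (gx, gy) = node, dest
    preferred = []
    if gx != x:
        preferred.append((1 if gx > x else -1, 0))
    if gy != y:
        preferred.append((0, 1 if gy > y else -1))
    for p in preferred:                          # try shortest-path ports first
        if p not in busy_ports:
            return p
    free = [p for p in [(1, 0), (-1, 0), (0, 1), (0, -1)] if p not in busy_ports]
    return random.choice(free)                   # deflection: any free output port

print(route_step((0, 0), (3, 2), busy_ports={(1, 0)}))  # (0, 1), the free shortest-path port
```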
Sau, Ignasi. "Optimization in Graphs under Degree Constraints. Application to Telecommunication Networks." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00429092.
Von Eden, Elric Omar. "Optical arbitrary waveform generation using chromatic dispersion in silica fibers." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/24780.
Angilella, Vincent. "Design optimal des réseaux Fiber To The Home." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0004/document.
For operators, FTTH networks are the most widespread solution to the increasing traffic demand, and their layout requires a huge investment. The aim of this work is to ensure a cost-effective deployment of quality networks. We start by presenting the aspects of this network design problem that make it complex. The related literature is reviewed to highlight the novel issues that we solve. We then elaborate strategies to find the best solution in different contexts, investigating several policies regarding maintenance and civil engineering use. The problems encountered are tackled using several combinatorial optimization tools (integer programming, valid inequalities, dynamic programming, approximations, complexity theory, inapproximability…) developed according to our needs. The proposed solutions were tested and validated on real-life instances, and are meant to be implemented in a network planning tool at Orange.
Tzamos, Christos. "The complexity of optimal mechanism design." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82373.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 63-64).
Myerson's seminal work provides a computationally efficient revenue-optimal auction for selling one item to multiple bidders. Generalizing this work to selling multiple items at once has been a central question in economics and algorithmic game theory, but its complexity has remained poorly understood. We answer this question by showing that a revenue-optimal auction in multi-item settings cannot be found and implemented computationally efficiently, unless ZPP ⊇ P^#P. This is true even for a single additive bidder whose values for the items are independently distributed on two rational numbers with rational probabilities. Our result is very general: we show that it is hard to compute any encoding of an optimal auction of any format (direct or indirect, truthful or non-truthful) that can be implemented in expected polynomial time. In particular, under well-believed complexity-theoretic assumptions, revenue optimization in very simple multi-item settings can only be tractably approximated. We note that our hardness result applies to randomized mechanisms in a very simple setting, and is not an artifact of introducing combinatorial structure to the problem by allowing correlation among item values, introducing combinatorial valuations, or requiring the mechanism to be deterministic (whose structure is readily combinatorial). Our proof is enabled by a flow interpretation of the solutions of an exponential-size linear program for revenue maximization with an additional supermodularity constraint.
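For contrast with the multi-item hardness result, here is a tiny worked example of the single-item Myerson machinery referenced above: for i.i.d. values uniform on [0, 1], the virtual value is phi(v) = v - (1 - F(v))/f(v) = 2v - 1, so the optimal auction is a second-price auction with reserve 0.5. An illustrative sketch only.

```python
# Myerson's optimal single-item auction for i.i.d. U[0,1] bidders:
# second-price auction with reserve 0.5 (the value where phi(v) = 2v - 1 = 0).
def myerson_uniform(bids, reserve=0.5):
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return None, 0.0                         # item stays unsold
    best, winner = max(eligible)
    others = [b for b, i in eligible if i != winner]
    payment = max(reserve, max(others)) if others else reserve
    return winner, payment

print(myerson_uniform([0.3, 0.7, 0.9]))          # (2, 0.7): bidder 2 wins, pays 0.7
```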
by Christos Tzamos.
S.M.
Nakache, Elie. "Chemin optimal, conception et amélioration de réseaux sous contrainte de distance." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4023/document.
In this thesis, we investigate several combinatorial optimization problems and characterize their computational complexity and approximability by providing polynomial reductions and exact or approximation algorithms. In particular, we study the problem of finding, in a vertex-labeled directed acyclic graph, a path collecting a maximum number of distinct labels. We prove that no polynomial-time constant-factor approximation algorithm exists for this problem. Furthermore, we describe a scheme that produces, for any $\epsilon > 0$, a polynomial-time algorithm that computes a solution collecting $O(OPT^{1-\epsilon})$ labels. Then, we study several variants of the minimum cost spanning tree problem that take into account distance and betweenness constraints. We prove that most of these problems can be solved in polynomial time using a reduction to the weighted matroid intersection problem. For another problem, we give a factor-2 approximation algorithm and prove the optimality of this ratio. Finally, we study a network improvement problem from a cost-sharing perspective. We establish that the cost function corresponding to this problem is submodular and use this result to derive a cost-sharing mechanism having several good properties.
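A brute-force sketch of the maximum distinct-label path problem described above, feasible only on toy instances given the inapproximability result; the instance and names are illustrative.

```python
# Exhaustively extend paths in a small vertex-labeled DAG and record the
# largest number of distinct labels collected along any path.
def max_labels(dag, labels):
    best = 0
    def extend(node, seen):
        nonlocal best
        seen = seen | {labels[node]}
        best = max(best, len(seen))
        for nxt in dag.get(node, []):
            extend(nxt, seen)
    for start in dag:
        extend(start, set())
    return best

dag = {0: [1, 2], 1: [3], 2: [3], 3: []}
labels = {0: 'a', 1: 'b', 2: 'a', 3: 'c'}
print(max_labels(dag, labels))  # 3, e.g. along the path 0 -> 1 -> 3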
余鳳玲 and Fung-ling Yue. "On the complexity of finding optimal edge rankings." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B30148881.
Full textYue, Fung-ling. "On the complexity of finding optimal edge rankings /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18540247.
Full textGelashvili, Rati. "Leader election and renaming with optimal message complexity." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/89859.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-68).
The asynchronous message-passing system is a standard distributed model, where n processors communicate over unreliable channels, controlled by a strong adaptive adversary. The asynchronous nature of the system and the fact that t…
S.M.
Dahmen, Wolfgang, Helmut Harbrecht, and Reinhold Schneider. "Compression Techniques for Boundary Integral Equations - Optimal Complexity Estimates." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600464.
Full textKim, Hyungjoon. "Low-Complexity Mode Selection for Rate-Distortion Optimal Video Coding." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14513.
Full textLearned, Rachel E. "Low complexity optimal joint detection for over-saturated multiple access communications." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/9812.
Includes bibliographical references (leaves 220-222).
by Rachel E. Learned.
Ph.D.
Ta, Thanh Thuy Tien. "New single machine scheduling problems with deadline for the characterization of optimal solutions." Thesis, Tours, 2018. http://www.theses.fr/2018TOUR4015/document.
We consider a single machine scheduling problem with deadlines, and we want to characterise the set of optimal solutions without enumerating them. We assume that jobs are numbered in EDD order and that this sequence is feasible. The key idea is to use the lattice of permutations and to associate the EDD sequence with the supremum permutation. In order to characterize many solutions, we search for a feasible sequence as far as possible from the supremum. The distance is the level of the sequence in the lattice, which has to be minimum. This new objective function is investigated. Some polynomially solvable particular cases are identified, but the complexity of the general problem remains open. Several resolution methods, polynomial and exponential, are proposed and evaluated. The level of the sequence being related to the positions of jobs in the sequence, new objective functions related to job positions are identified and studied. The problem of minimizing the total weighted positions of jobs is proved to be strongly NP-hard. Some particular cases are investigated, and resolution methods are proposed and evaluated.
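A minimal sketch of the feasibility premise above: a set of jobs has a deadline-feasible sequence if and only if the EDD (earliest due date) order itself meets every deadline (Jackson's rule); names and data are illustrative.

```python
# Check that jobs, already sorted by earliest due date, all meet their
# deadlines on a single machine.
def edd_feasible(jobs):
    """jobs: list of (processing_time, deadline) tuples in EDD order."""
    t = 0
    for p, d in jobs:
        t += p                  # completion time of this job
        if t > d:
            return False
    return True

print(edd_feasible([(2, 3), (1, 5), (4, 9)]))  # True: completions 2, 3, 7
```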
Starrett, Dean. "Optimal Alignment of Multiple Sequence Alignments." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/194840.
Full textTouzeau, Valentin. "Analyse statique de caches LRU : complexité, analyse optimale, et applications au calcul de pire temps d'exécution et à la sécurité." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM041.
The certification of real-time safety-critical programs requires bounding their execution time. Due to the high impact of cache memories on memory access latency, modern Worst-Case Execution Time (WCET) estimation tools include a cache analysis. The aim of this analysis is to statically predict whether memory accesses result in a cache hit or a cache miss. This problem is undecidable in general, thus usual cache analyses perform some abstractions that lead to precision loss. One common assumption made to remove the source of undecidability is that all execution paths in the program are feasible. Making this hypothesis is reasonable because the safety of the analysis is preserved when adding spurious paths to the program model. However, classifying memory accesses as cache hits or misses is still hard in practice under this assumption, and efficient cache analyses usually involve additional approximations, again leading to precision loss. This thesis investigates the possibility of performing an optimally precise cache analysis under the common assumption that all execution paths in the program are feasible. We formally define the problems of classifying accesses as hits and misses, and prove that they are NP-hard or PSPACE-hard for common replacement policies (LRU, FIFO, NRU and PLRU). However, while these theoretical complexity results legitimate the use of additional abstraction, they do not preclude the existence of algorithms efficient in practice on industrial workloads. Because of the abstractions performed for efficiency reasons, cache analyses can usually classify accesses as Unknown in addition to Always-Hit (Must analysis) or Always-Miss (May analysis). Accesses classified as Unknown can lead to either a hit or a miss, depending on the execution path followed. However, it can also be that they belong to the Always-Hit or Always-Miss category and that the cache analysis failed to classify them correctly because of a coarse approximation. We thus designed a new analysis for LRU instruction caches that is able to soundly classify some accesses into a new category, called Definitely Unknown, representing accesses that can lead to both a hit and a miss. For those accesses, one knows for sure that their classification does not result from a coarse approximation but is a consequence of the program structure and cache configuration. By doing so, we also reduce the set of accesses that are candidates for a refined classification using more powerful and more costly analyses. Our main contribution is an analysis that can perform an optimally precise analysis of LRU instruction caches. We use a method called block focusing that allows the analysis to scale by analyzing one cache block at a time, taking advantage of the low number of candidates for refinement left by our Definitely Unknown analysis. This analysis produces an optimal classification of memory accesses at a reasonable cost (a few times the cost of the usual May and Must analyses). We evaluate the impact of our precise cache analysis on the pipeline analysis: when the cache analysis cannot classify an access as Always-Hit or Always-Miss, the pipeline analysis must consider both cases, so a more precise memory access classification reduces the state space explored by the pipeline analysis and hence the WCET analysis time. Aside from this application to WCET estimation, we investigate the use of the Definitely Unknown analysis in the domain of security: caches can be used as a side channel to extract sensitive data from a program execution, and we propose a variation of our Definitely Unknown analysis to help a developer find the source of an information leakage.
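As a concrete reference point for the Must/May/Definitely-Unknown classifications discussed above, here is a minimal sketch of exact LRU behaviour on a single execution path (fully associative cache, illustrative parameters): an access hits iff the block's LRU stack distance is below the number of ways.

```python
# Classify each access of a fully-associative LRU cache as hit or miss by
# tracking the recency stack; stack distance < ways means a hit.
def lru_trace(accesses, ways=2):
    stack, out = [], []
    for block in accesses:
        hit = block in stack and stack.index(block) < ways
        out.append((block, 'hit' if hit else 'miss'))
        if block in stack:
            stack.remove(block)
        stack.insert(0, block)          # most recently used in front
    return out

print(lru_trace(['a', 'b', 'a', 'c', 'b'], ways=2))
# a:miss, b:miss, a:hit, c:miss, b:miss (b was aged out by c)
```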
Xu, Chong. "Reduced-complexity near-optimal Ant-Colony-aided multi-user detection for CDMA systems." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/206015/.
Full textAtassi, Vincent. "Typing and Optimal reduction for λ-calculus in variants of Linear logic for Implicit computational complexity." Paris 13, 2008. http://www.theses.fr/2008PA132038.
The lambda-calculus was introduced to study mathematical functions from a computational point of view, and later served as a foundation for the development of functional programming languages. Whether there exists a provably most efficient method for reducing lambda-terms, and what the intrinsic complexity of this operation is in general, are still open questions. In this thesis, we use the tools of typing, type inference, Linear logic and Optimal reduction to explore these questions. We present a type inference algorithm for Dual light affine logic (DLAL), a type system that characterizes the polynomial complexity class. The algorithm takes as input a lambda-term typed in System F and returns a typing in DLAL if one exists; an implementation is provided. Next, we extend a type system based on Elementary affine logic with subtyping, in order to automate the placement of coercions. We show that subtyping indeed captures coercions, and we give a complete inference algorithm for this extended system. Finally, we adapt Lamping's Optimal reduction algorithm to lambda-terms typable in Soft linear logic (SLL), a logic that characterizes polynomial time. We show that a polynomial bound holds for all sharing graphs reduced in this way, and that lambda-terms typable in SLL are reduced correctly.
Renaud-Goud, Paul. "Energy-aware scheduling : complexity and algorithms." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00744247.
Hilliard, David (David John). "Achieving and sustaining an optimal product portfolio in the healthcare industry through SKU rationalization, complexity costing, and dashboards." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/73385.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 76).
After years of new product launches and entry into emerging markets, Company X, a healthcare company, has seen its product portfolio proliferate and bring costly complexity into its operations. Today, Company X seeks to achieve and sustain an optimal product offering that meets its customers' needs. Through a six-month research effort, we develop a process for stock-keeping-unit (SKU) rationalization to reduce SKU complexity while maintaining sales volumes. We also implement operational models to compute the complexity costs associated with SKU complexity, and employ SKU portfolio dashboards to monitor SKU development and govern SKU creation. This thesis discusses a process for applying these tools to any healthcare company. Through two case studies, we apply the rationalization process on one pilot brand and develop a dashboard to improve product portfolio management. We expect the SKU rationalization process to release 38% of the avoidable costs associated with the pilot brand. These case studies also provide insight into how to correctly diagnose the cost-reduction opportunity associated with SKU complexity, as well as methods for a step-change improvement in lead times and cost reduction. Lastly, removal of complexity provides flexibility to capture other business opportunities.
by David Hilliard.
S.M.
M.B.A.
Brun, Jean-Marc. "Modèles à complexité réduite de transport pour applications environnementales." Montpellier 2, 2007. http://www.theses.fr/2007MON20248.
A platform of low-complexity models for the transport of passive scalars for environmental applications is presented. Multi-level analysis is used, with a reduction in the dimension of the solution space at each level. The local spray drift distribution is estimated using turbulent jet theory and determines the source term. Similitude solutions are used in a non-symmetric metric for transport over long distances. Model parameter identification is based on data assimilation. The approach does not require the solution of any PDE and is therefore mesh-free. The model also permits evaluating the solution at a single point without computing it over the whole domain.
Perinelli, Alessio. "A new approach to optimal embedding of time series." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/280754.
Otten, Edward W. "The Influence of Stimulus Complexity and Perception-action Coupling on Postural Sway." Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1218562177.
Salah, Abdellatif. "Schémas de décodage MIMO à Complexité Réduite." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00682392.
Fielbaum, Schnitzler Andrés Salomón. "Effects of the introduction of spatial and temporal complexity on the optimal design, economies of scale and pricing of public transport." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171789.
In this thesis we study microeconomic models for the strategic design of public bus transport, incorporating the effects of both the spatial structure of travel demand and the need to represent it on a network, and the heterogeneity in the number of trips made in different periods of the day. This is done by adding spatial and temporal complexity to the classical single-line models studied by Jansson (1980) and Jara-Díaz and Gschwender (2009). For the spatial analysis, we study the optimal design of line structures (that is, the set of routes of the public transport lines) on the urban model proposed by Fielbaum et al (2016, 2017), based on the hierarchy among the city's centers, and we analyze the results of the heuristic approach, the presence of economies of scale and their sources, and the spatial density of lines. Regarding the heuristic approach, we compare the four basic structures proposed by Fielbaum et al (2016) with those resulting from four heuristics previously proposed in the literature. Scale phenomena are analyzed through the concept of "directness", which shows that as passenger flow grows the system prioritizes routes that minimize transfers, stops and passengers' travel lengths; this is a new source of economies of scale, and it allows studying the effects of this phenomenon on optimal fares and subsidies. When the spatial density of lines is incorporated as a design variable, it is shown to grow with the number of passengers, always keeping access costs equal to waiting costs in the system, exhibiting a certain level of substitution with the level of directness and constituting a new source of economies of scale. The temporal heterogeneity of demand is analyzed by studying single-line models with two periods: peak and off-peak. The system is optimized under different modes of operation, such as a single fleet, an independent fleet for each period, or two fleets operating jointly in the peak period (and only one of them off-peak); the system with two simultaneous fleets is the most efficient, slightly better than the single-fleet one. The solutions are compared with those obtained when considering only one period, and the cross effects between periods are identified. Additionally, second-best strategies are studied, comparing optimizing the system for the characteristics of the peak period and using a sub-fleet off-peak against the inverse strategy: as a result, an approximate rule is to prioritize the period in which the total number of passengers (over its whole duration) is larger.
Valkanova, Elena. "Algorithms for simple stochastic games." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003070.
Brancotte, Bryan. "Agrégation de classements avec égalités : algorithmes, guides à l'utilisateur et applications aux données biologiques." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112184/document.
The rank aggregation problem is to build a consensus among a set of rankings (ordered elements). Although this problem has numerous applications (consensus among user votes, consensus between results ordered differently by different search engines…), computing an optimal consensus is rarely feasible in real applications (the problem is NP-hard). Many approximation algorithms and heuristics have therefore been designed, but their performance (computation time and quality of the produced consensus) differs widely and depends on the datasets to be aggregated. Several studies have compared these algorithms, but they have generally not considered the case (common in real datasets) where elements can be tied in rankings (elements at the same rank). Choosing a consensus algorithm for a given dataset is therefore a particularly important issue (many applications), and an open problem in the sense that none of the existing studies addresses it. More formally, a consensus ranking is a ranking that minimizes the sum of the distances between this consensus and the input rankings. Like much of the state of the art, our studies consider the generalized Kendall-Tau distance and variants. Specifically, this thesis makes three contributions. First, we propose new complexity results associated with cases encountered in real data, where rankings may be incomplete and multiple items can be ranked equally (ties). We isolate the different "features" that can explain variations in the results produced by the aggregation algorithms (for example, using the generalized Kendall-Tau distance or variants, pre-processing the datasets with unification or projection). We propose a guide for characterizing the context and needs of a user, directing them to both a pre-treatment of their datasets and a distance to use to compute the consensus, and we adapt existing algorithms to this new context. Second, we evaluate these algorithms on a large and varied set of datasets, both real and synthetic, reproducing real-world features such as similarity between rankings, the presence of ties, and different pre-treatments. This large evaluation comes with a new method to generate synthetic data with similarities, based on Markov-chain modeling. The evaluation led to the isolation of dataset features that impact the performance of the aggregation algorithms, and to a guide that characterizes the needs of a user and advises them on the algorithm to use. A web platform to replicate and extend these analyses is available (rank-aggregation-with-ties.lri.fr). Finally, we demonstrate the value of the rank aggregation approach in two use cases. We provide a tool that reformulates user text queries through biomedical terminologies, then queries biological databases, and ultimately produces a consensus of the results obtained for each reformulation (conqur-bio.lri.fr); we compare the results to the reference platform and show a clear improvement in quality. We also compute consensus between lists of workflows established by experts in the context of similarity between scientific workflows, and note that the computed consensus agrees with the experts in a very large majority of cases.
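A minimal sketch of the generalized Kendall-Tau distance with ties used as the consensus objective above: a pair of elements costs 1 when the two rankings order it oppositely, and a penalty p when exactly one of them ties it. The convention p = 1 and the encoding of rankings as bucket indices are assumptions for illustration.

```python
# Generalized Kendall-Tau distance between two complete rankings with ties,
# encoded as dicts element -> bucket index (equal index = tied).
from itertools import combinations

def kendall_tau_ties(r1, r2, p=1.0):
    d = 0.0
    for a, b in combinations(r1, 2):
        s1 = (r1[a] > r1[b]) - (r1[a] < r1[b])   # -1, 0 (tied), or 1
        s2 = (r2[a] > r2[b]) - (r2[a] < r2[b])
        if s1 and s2 and s1 != s2:
            d += 1                               # opposite strict orders
        elif (s1 == 0) != (s2 == 0):
            d += p                               # tie in one ranking only
    return d

r1 = {'x': 0, 'y': 0, 'z': 1}      # [{x, y}, {z}]
r2 = {'x': 0, 'y': 1, 'z': 1}      # [{x}, {y, z}]
print(kendall_tau_ties(r1, r2))    # 2.0: pairs (x,y) and (y,z) disagree on ties
```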
Yeleswarapu, Radhika M. "Scheduling Of 2-Operation Jobs On A Single Machine To Minimize The Number Of Tardy Jobs." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000216.
Ayala Obregón, Alan. "Complexity reduction methods applied to the rapid solution to multi-trace boundary integral formulations." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS581.
In this thesis, we provide complexity reduction techniques for the solution of Boundary Integral Equations (BIE). In particular, we focus on BIE arising from the modeling of acoustic and electromagnetic problems via Boundary Element Methods (BEM). We use the local multi-trace formulation, which is friendly to operator preconditioning. We find a closed-form inverse of the local multi-trace operator for a model problem and then propose this inverse operator for preconditioning general scattering problems. Moreover, we show that the local multi-trace formulation is stable for Maxwell equations posed on a particular domain configuration. For general problems where BEM are applied, we propose to use the framework of hierarchical matrices, which are constructed using cluster trees and represent the original matrix in such a way that submatrices admitting low-rank approximations (admissible blocks) are well identified. We introduce a technique called geometric sampling, which uses cluster trees to create accurate linear-time CUR algorithms for the compression and matrix-vector product acceleration of admissible matrix blocks, oriented towards parallel communication-avoiding algorithms. We also contribute to the approximation theory of QR and subspace iteration methods; for the former we provide new bounds on the approximation error, and for the latter we solve an open question in the literature by proving that the approximation of singular vectors converges exponentially. Finally, we propose a technique called affine low-rank approximation intended to increase the accuracy of classical low-rank approximation methods.
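For context, a generic CUR approximation sketch: the thesis's geometric sampling selects columns and rows via cluster trees, whereas this illustration samples them uniformly at random, so it shows the CUR structure only, not the thesis algorithm.

```python
# Approximate A by C @ U @ R, where C and R are sampled columns/rows of A and
# U is the pseudo-inverse of their intersection block.
import numpy as np

def cur(A, k, seed=0):
    rng = np.random.default_rng(seed)
    cols = rng.choice(A.shape[1], size=k, replace=False)
    rows = rng.choice(A.shape[0], size=k, replace=False)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C[rows, :])        # pseudo-inverse of A[rows, cols]
    return C @ U @ R

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))  # rank 3
err = np.linalg.norm(A - cur(A, 5)) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")       # tiny: 5 samples capture rank 3
```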
Daher, Ali. "Application de la théorie des nombres à la conception optimale et à l'implémentation de très faible complexité des filtres numériques." Phd thesis, Université de Bretagne occidentale - Brest, 2009. http://tel.archives-ouvertes.fr/tel-00490369.
Full textDaher, Ali. "Application de la théorie des nombres à la conception optimale et à l’implémentation de très faible complexité des filtres numériques." Brest, 2009. http://www.theses.fr/2009BRES2039.
The main objective of our study is to develop fast algorithms for the optimal design and low-complexity implementation of digital filters. The optimization criterion is the mean squared error at the filter output. We have thus studied and developed new algorithms for the synthesis of finite impulse response (FIR) filters related to the two block-filtering techniques, overlap-save (OLS) and overlap-add (OLA). These two techniques process the signal by blocks and use the fast Fourier transform (FFT) to reduce the complexity of the convolution calculation. Our algorithms, based on the matrix model development of the OLA and OLS structures, use linear algebra properties, especially those of circulant matrices. To further reduce the complexity and the distortion, we have looked further into the mathematical foundations of the Fermat Number Transform (FNT), a particular case of the Number Theoretic Transforms (NTT) defined over a Galois field. Compared to the FFT, the FNT allows calculation without rounding errors and a large reduction in the number of multiplications necessary to carry out the convolution product. To take advantage of this transform, we have proposed and studied a new design of OLS and OLA filtering using the FNT. We have also developed a low-complexity algorithm for optimal filter synthesis using the properties of circulant matrices that we derived in the Galois field. Simulation results of block filtering with a fixed-point implementation show that using the FNT instead of the FFT reduces the complexity and the filtering errors, as well as the cost of optimal filter synthesis.
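A minimal overlap-save (OLS) sketch of the block filtering discussed above, using numpy's floating-point FFT purely to show the block structure (the thesis replaces the FFT with the Fermat Number Transform for exact arithmetic; block sizes here are illustrative).

```python
# Overlap-save FIR filtering: FFT-multiply each block with the filter's
# frequency response and discard the first M-1 wrapped-around outputs.
import numpy as np

def overlap_save(x, h, n_fft=16):
    M = len(h)
    step = n_fft - (M - 1)                       # useful samples per block
    H = np.fft.fft(h, n_fft)
    buf = np.zeros(M - 1)                        # overlap carried between blocks
    out = []
    for i in range(0, len(x), step):
        block = np.concatenate([buf, x[i:i + step]])
        block = np.pad(block, (0, n_fft - len(block)))
        y = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(y[M - 1:M - 1 + min(step, len(x) - i)])
        buf = block[-(M - 1):]                   # last M-1 samples for next block
    return np.concatenate(out)

x = np.random.default_rng(2).standard_normal(100)
h = np.array([0.5, 0.3, 0.2])
print(np.allclose(overlap_save(x, h), np.convolve(x, h)[:100]))  # True
```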
Simard, Catherine. "Analyse d'algorithmes de type Nesterov et leurs applications à l'imagerie numérique." Mémoire, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/7714.
Kattami, Christalena. "Development of a construction methodology of goal directed, optimal complexity, flexible and task oriented (GOFT) training materials for novice computer users : application and evaluation in adults with mental health problems." Thesis, City University London, 1996. http://openaccess.city.ac.uk/7779/.
Bountourelis, Theologos. "Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digraphs." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28144.
Committee Chair: Reveliotis, Spyros; Committee Member: Ayhan, Hayriye; Committee Member: Goldsman, Dave; Committee Member: Shamma, Jeff; Committee Member: Zwart, Bert.
Yapici, Yavuz. "A Bidirectional Lms Algorithm For Estimation Of Fast Time-varying Channels." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613220/index.pdf.
Full textAnil, Gautham. "A Fitness Function Elimination Theory for Blackbox Optimization and Problem Class Learning." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5106.
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
Villon, Pierre. "Contribution à l'optimisation." Compiègne, 1991. http://www.theses.fr/1991COMPDE95.
Rankine, Luke. "Newborn EEG seizure detection using adaptive time-frequency signal processing." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16200/.
Marchandon, Mathilde. "Vers la compréhension des séquences sismiques sur un système de failles : de l'observation spatiale à la modélisation numérique. Application à la séquence du Nord-Est Lut, Iran." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4055/document.
Many studies show that static and postseismic stress transfers play an important role in the occurrence of seismic sequences. However, a large majority of these studies involve seismic sequences that occurred within fault systems with simple geometric configurations (e.g. collinear or parallel fault systems). In this thesis, we study a seismic sequence that occurred within a complex, conjugate fault system, the NE Lut seismic sequence (1939-1997, NE Iran), in order to assess whether (1) stress transfers can explain the succession of earthquakes in the sequence and (2) stress transfers can lead to the synchronization of the NE Lut faults over multiple seismic cycles. To this end, we first measure the surface displacement fields produced by the sequence in order to precisely constrain the subsequent stress transfer modeling. We use optical image correlation to measure the surface displacement fields of the Khuli-Boniabad (Mw 7.1, 1979) and Zirkuh (Mw 7.2, 1997) earthquakes. We find that these earthquakes broke several segments bounded by geometrical complexities of the faults, and we interpret the differences in their failure style (i.e. rupture length, mean slip and number of broken segments) as due to different levels of structural maturity of the Dasht-e-Bayaz and Abiz faults. Furthermore, we succeeded in detecting the offsets produced by the 1979 Mw 6.6 Korizan earthquake; it is the first time that surface displacements for such a small historical earthquake have been measured using optical correlation. Then, combining previously published intermediate-field InSAR data and our near-field optical data, we estimate a new source model for the Zirkuh earthquake (Mw 7.2, 1997). We show that near-field data are crucial to better constrain the fault geometry and the slip distribution at depth. According to our source model, the Zirkuh earthquake broke three asperities separated by geometrical barriers where aftershocks are located. No shallow slip deficit is found for the overall rupture, except on the central segment where it could be due to off-fault deformation in quaternary deposits. Finally, we use the information acquired in the first parts of this work to model the stress transfers within the NE Lut sequence. We find that 7 out of 11 earthquakes were triggered by the previous ones, and that precise modeling of the rupture geometry is crucial to robustly estimate the stress transfers. We also show that the Zirkuh earthquake was mainly triggered by the moderate earthquakes of the NE Lut sequence. Lastly, the simulation of multiple seismic cycles on the NE Lut fault system shows that stress transfers, in particular postseismic stress transfers due to viscoelastic relaxation, increase the number of seismic sequences and synchronize the ruptures of the faults. The simulations also show that the order in which the Mw > 7 earthquakes occurred during the NE Lut sequence is quite exceptional.
Chinot, Geoffrey. "Localization methods with applications to robust learning and interpolation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG002.
This PhD thesis deals with supervised machine learning and statistics. The main goal is to use localization techniques to derive fast rates of convergence, with a particular focus on robust learning and interpolation problems. Localization methods analyze localized properties of an estimator to obtain fast rates of convergence, that is, rates of order O(1/n), where n is the number of observations. Under assumptions such as the Bernstein condition, such rates are attainable. A robust estimator is an estimator with good theoretical guarantees under as few assumptions as possible. This question is getting more and more popular in the current era of big data: large datasets are very likely to be corrupted, and one would like to build reliable estimators in such a setting. We show that the well-known regularized empirical risk minimizer (RERM) with a Lipschitz loss function is robust with respect to heavy-tailed noise and outliers in the labels. When the class of predictors is heavy-tailed, RERM is not reliable; in this setting, we show that minmax Median-of-Means (MOM) estimators can be a solution. By construction, minmax-MOM estimators are also robust to adversarial contamination. Interpolation problems study learning procedures with zero training error. Surprisingly, in large dimension, interpolating the data does not necessarily imply over-fitting. We study a high-dimensional Gaussian linear model and show that sometimes the over-fitting may be benign.
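A minimal sketch of the Median-of-Means building block behind the minmax-MOM estimators above: block means are computed on a random partition of the sample and their median is returned, which tames heavy tails and outliers. The block count and data are illustrative choices.

```python
# Median-of-means: a few wild outliers corrupt only a few blocks, and
# therefore barely move the median of the block means.
import numpy as np

def median_of_means(x, n_blocks=50, seed=3):
    perm = np.random.default_rng(seed).permutation(x)
    return np.median([b.mean() for b in np.array_split(perm, n_blocks)])

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 990), np.full(10, 1e6)])  # 10 outliers
print(f"mean = {x.mean():.1f}, MOM = {median_of_means(x):.3f}")    # MOM stays near 0
```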
Kapfunde, Goodwell. "Near-capacity sphere decoder based detection schemes for MIMO wireless communication systems." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/11350.
Keyder, Emil Ragip. "New Heuristics for Planning with Action Costs." Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7570.
Classical planning is the problem of finding a sequence of actions that take an agent from an initial state to a goal, assuming deterministic outcomes and complete information. Satisficing planning seeks to find a low-cost solution without optimality guarantees; heuristic search guided by non-admissible heuristics is the approach that has proved most successful. This thesis presents several heuristics of that kind that take action costs into account, and therefore find solutions that minimize cost rather than plan length. Furthermore, we show that the planning problem with "soft goals", or optional goals, can be reduced to a classical planning problem with action costs, a setting in which cost-sensitive heuristics such as those presented here are essential.
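A minimal sketch of the soft-goal compilation described above: each optional goal g with utility u(g) becomes a hard goal, achievable either by a zero-cost collect action when g holds or by a forgo action of cost u(g), so that minimizing plan cost maximizes net utility. The dict encoding is an illustrative stand-in for a real PDDL transformation and omits details such as forcing collection at the end of the plan.

```python
# Compile soft goals with utilities into hard goals plus collect/forgo actions.
def compile_soft_goals(soft_goals):
    """soft_goals: {goal_atom: utility} -> (new_actions, new_hard_goals)."""
    actions, hard_goals = [], []
    for g, utility in soft_goals.items():
        done = f"{g}_done"
        actions.append({"name": f"collect_{g}", "pre": [g], "add": [done], "cost": 0})
        actions.append({"name": f"forgo_{g}", "pre": [], "add": [done], "cost": utility})
        hard_goals.append(done)
    return actions, hard_goals

acts, goals = compile_soft_goals({"img_saved": 5, "log_sent": 2})
print(goals)        # ['img_saved_done', 'log_sent_done']
print(acts[1])      # forgoing img_saved costs its utility of 5
```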
Roche, Jean-Christophe. "Localisation spatiale par subdivision pour l'accélération des calculs en radiométrie." Phd thesis, Université Joseph Fourier (Grenoble), 2000. http://tel.archives-ouvertes.fr/tel-00006752.
Full text"Low Complexity Optical Flow Using Neighbor-Guided Semi-Global Matching." Master's thesis, 2017. http://hdl.handle.net/2286/R.I.44132.
Dissertation/Thesis
Master's Thesis, Computer Science, 2017