
Dissertations / Theses on the topic 'Optical complexity'



Consult the top 50 dissertations / theses for your research on the topic 'Optical complexity.'




1

Saad, Mohamed Elsayed Mostafa (supervisor: Luo, Zhi-Quan). "Design of optical networks: performance bounds, complexity and algorithms." McMaster University (McMaster only), 2004.

Find full text
2

Lim, Dong Sung. "Phase singularities and spatial-temporal complexity in optical fibres." Thesis, Heriot-Watt University, 1995. http://hdl.handle.net/10399/772.

Full text
3

Li, Yunxi. "Study of low-complexity modal multiplexing for optical communication links." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708925.

Full text
4

Post, Arthur David 1954. "Complexity of optical computing paradigms: Computational implications and a suggested improvement." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/291592.

Full text
Abstract:
Optical computing has been suggested as a means of achieving a high degree of parallelism for both scientific and symbolic applications. While a number of implementations of logic operations have been forwarded, all have some characteristic which prevents their direct extension to functions of a large number of input bits. This paper will analyze several of these implementations and demonstrate that all these implementations require some measure of the system (area, space-bandwidth product, or time) to grow exponentially with the number of inputs. We will then suggest an implementation whose complexity is not greater than the best theoretical realization of a boolean function. We will demonstrate the optimality of the realization, to within a constant multiple, for digital optical computing systems realized by bulk space-variant elements.
5

Barrami, Fatima. "Low-complexity direct-detection optical OFDM systems for high data rate communications." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT057/document.

Full text
Abstract:
A possible approach to maximize the data rate per wavelength, is to employ the high spectral efficiency discrete multitone (DMT) modulation. The work presented in this thesis mainly focuses on optimizing the power consumption and cost of DMT, that are the major obstacles to its market development. Within this context, we have first developed novel techniques permitting to discard the use of Hermitian symmetry in DMT modulations, thus significantly reducing the power consumption and the system cost. We have next proposed an asymmetric linear companding algorithm permitting to reduce the optical power of conventional DCO-OFDM modulation with a moderate complexity. A new VCSEL behavioural model based on the use of the VCSEL quasi-static characteristic was also developed to accurately evaluate the VCSEL impact on DMT modulations. Finally, we have built an experimental system to experimentally validate our proposed techniques. Several simulations and measurement results are then provided
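As a point of reference for the Hermitian-symmetry discussion above, here is a minimal numpy sketch (not taken from the thesis) of how a conventional DCO-OFDM/DMT transmitter obtains a real-valued waveform by sacrificing half of its subcarriers, together with the PAPR measurement that companding and clipping techniques target; the IFFT size and QPSK mapping are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                  # IFFT size (total subcarriers)

# QPSK symbols on the N/2 - 1 independent data subcarriers.
bits = rng.integers(0, 4, N // 2 - 1)
data = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Hermitian symmetry: X[0] = X[N/2] = 0 and X[N-k] = conj(X[k]),
# which forces the IFFT output to be real-valued (DCO-OFDM style)
# at the price of halving the number of useful subcarriers.
X = np.zeros(N, dtype=complex)
X[1:N // 2] = data
X[N // 2 + 1:] = np.conj(data[::-1])

x = np.fft.ifft(X)
assert np.allclose(x.imag, 0.0, atol=1e-12)   # real DMT waveform
x = x.real

# PAPR of this symbol, the quantity targeted by companding/clipping techniques.
papr_db = 10 * np.log10(np.max(x ** 2) / np.mean(x ** 2))
print(f"PAPR of this DMT symbol: {papr_db:.2f} dB")
```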
6

Nadal, Reixats Laia. "Design and implementation of low complexity adaptive optical OFDM systems for software-defined transmission in elastic optical networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284714.

Full text
Abstract:
Due to the increasing global IP traffic and the exponentially growing demand for broadband services, optical networks are experiencing significant changes. Advanced modulation formats are being implemented at the Digital Signal Processing (DSP) level as key enablers for high data rate transmission, while in the network layer, flexi Dense Wavelength-Division Multiplexing (DWDM) grids are being investigated in order to use the optical spectrum efficiently according to the traffic demand. Enabling these capabilities makes high data rate transmission more feasible. Hence, introducing flexibility in the system is one of the main goals of this thesis. Furthermore, minimizing the cost and enhancing the Spectral Efficiency (SE) of the system are two crucial issues to consider in the transceiver design. This dissertation investigates the use of Optical Orthogonal Frequency Division Multiplexing (O-OFDM), based either on the Fast Fourier Transform (FFT) or the Fast Hartley Transform (FHT), and flexi-grid technology to allow high data rate transmission over the fiber. Different cost-effective solutions for Elastic Optical Networks (EON) are provided. On the one hand, Direct Detection (DD) systems are investigated and proposed to cope with present and future traffic demand. After an introduction to the principles of OFDM and its application in optical systems, the main problems of such modulation are introduced. In particular, Peak-to-Average Power Ratio (PAPR) is presented as a limitation in OFDM systems, as well as clipping and quantization noise. Hence, PAPR reduction techniques are proposed to mitigate these impairments. Additionally, Low Complexity (LC) PAPR reduction techniques based on the FHT have also been presented with a simplified DSP. On the other hand, loading schemes have also been introduced in the analyzed system to combat Chromatic Dispersion (CD) when transmitting over the optical link. Moreover, thanks to Bit Loading (BL) and Power Loading (PL), flexible and software-defined transceivers can be implemented, maximizing the spectral efficiency by adapting the data rate to the current demand and the actual network conditions. Specifically, OFDM symbols are created by mapping the different subcarriers with different modulation formats according to the channel profile. Experimental validation of the proposed flexible transceivers is also provided in this dissertation. The benefits of including loading capabilities in the design, such as enabling high data rate and software-defined transmission, are highlighted.
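A small illustrative sketch of the Fast Hartley Transform mentioned above, computed here through a single FFT (H[k] = Re X[k] - Im X[k]); the point is that the FHT is a real-to-real, involutory transform, which is the property that enables the simplified DSP discussed in the thesis. The block size and test data are arbitrary.

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via one complex FFT: H[k] = Re(X[k]) - Im(X[k])."""
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(h):
    """The DHT is involutory up to a factor 1/N, so the same kernel inverts it."""
    return dht(h) / len(h)

rng = np.random.default_rng(1)
x = rng.standard_normal(16)              # a real-valued block of samples
assert np.allclose(idht(dht(x)), x)      # perfect reconstruction

# Unlike the FFT, the FHT maps real inputs to real outputs, which is what
# allows a simplified transceiver DSP for intensity-modulated systems.
print(dht(x).dtype)                      # float64
```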
7

Bilbeisi, Hana. "Time-slotted scheduling for agile all-photonics networks : performance and complexity." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112558.

Full text
Abstract:
Schedulers in optical switches are still electronic; the performance of these units has a significant impact on the performance of the network and could form a bottleneck in high-speed networks such as AAPN. Four time-slotted scheduling algorithms are investigated in this study: PIM, iSlip, PHM and Adapted-SRA. The study addresses the performance of AAPN for each of the algorithms and evaluates the hardware complexity, estimating the running time of the algorithms. Performance measures were collected from an OPNET model designed to emulate AAPN. Furthermore, hardware complexity and timing constraints were evaluated through hardware simulations for iSlip, and through analysis for the rest of the algorithms. iSlip confirmed its feasibility by meeting the 10 µs timing constraint set by AAPN. The study revealed the superiority of iSlip and PHM over PIM and Adapted-SRA.
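For readers unfamiliar with the schedulers compared above, the following is a simplified, single-iteration sketch of the request-grant-accept logic behind iSlip-style crossbar scheduling; it is a generic illustration under assumed data structures (a boolean request matrix and per-port round-robin pointers), not the AAPN implementation evaluated in the thesis.

```python
def islip_iteration(requests, grant_ptr, accept_ptr):
    """One request-grant-accept iteration of an iSlip-style crossbar scheduler.

    requests  : n x n boolean matrix, requests[i][j] = input i has a cell for output j
    grant_ptr : per-output round-robin pointers
    accept_ptr: per-input round-robin pointers
    Returns the matched (input, output) pairs; pointers are advanced only for
    accepted grants, as in the first iSlip iteration.
    """
    n = len(requests)
    # Grant phase: each output grants to the requesting input nearest its pointer.
    grants = {}
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants[j] = i
                break
    # Accept phase: each input accepts the granting output nearest its pointer.
    matches = []
    for i in range(n):
        offers = [j for j, g in grants.items() if g == i]
        if not offers:
            continue
        j = min(offers, key=lambda o: (o - accept_ptr[i]) % n)
        matches.append((i, j))
        accept_ptr[i] = (j + 1) % n      # advance one beyond the matched port
        grant_ptr[j] = (i + 1) % n
    return matches

# Toy 4x4 switch: input i holds cells for outputs i and (i + 1) mod 4.
n = 4
req = [[j == i or j == (i + 1) % n for j in range(n)] for i in range(n)]
print(islip_iteration(req, grant_ptr=[0] * n, accept_ptr=[0] * n))
```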
8

Sachs, Antonio de Campos. "Rede auto-organizada utilizando chaveamento de pacotes ópticos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-05082011-152444/.

Full text
Abstract:
Optical Packet Switching (OPS) technology usually involves complex and expensive components, relegating its viability to the future. Nevertheless, OPS is a good option for improving granularity at high bit rates, as well as for flexible and fast bandwidth distribution. This thesis proposes simplifications of optical switching devices that, besides bringing their viability closer, enable the deployment of a highly scalable and self-organized complex network architecture. The proposed network operates without resource reservation or prior path establishment. Routes are defined packet by packet, in real time, by a deflection routing procedure. With simple local functions, the network exhibits desirable performance characteristics such as high scalability and an automatic protection system; these characteristics are treated as emerging functions. For the network characterization, a statistical analytical model validated by simulation is presented. In the investigation of the automatic protection functions, results for a 256-node network showed that the increase in the mean number of hops occurs only around the failure neighborhood. To demonstrate the switch viability, a prototype was built using components already available on the market; the switching time obtained was below two nanoseconds, compatible with optical packet switching.
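The following toy function sketches the deflection-routing decision described above for a 2D mesh: prefer an output port that reduces the distance to the destination, and deflect through any free port instead of buffering when the preferred ports are busy. Node coordinates, port names and the tie-breaking rule are illustrative assumptions, not the thesis design.

```python
import random

def route_with_deflection(node, dest, free_ports):
    """Pick an output port for a packet at `node` headed to `dest` on a 2D mesh.

    Preferred ports reduce the Manhattan distance to the destination; if none of
    them is free, the packet is deflected through any remaining free port instead
    of being buffered, which is the key idea of hot-potato/deflection routing.
    """
    x, y = node
    dx, dy = dest[0] - x, dest[1] - y
    preferred = []
    if dx:
        preferred.append('E' if dx > 0 else 'W')
    if dy:
        preferred.append('N' if dy > 0 else 'S')
    for port in preferred:            # try to make progress first
        if port in free_ports:
            return port
    deflections = [p for p in free_ports if p not in preferred]
    return random.choice(deflections) if deflections else None  # None: drop/recirculate

# Packet at (2, 1) going to (4, 3); the eastbound port is already taken.
print(route_with_deflection((2, 1), (4, 3), free_ports={'N', 'W', 'S'}))
```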
9

Sau, Ignasi. "Optimization in Graphs under Degree Constraints. Application to Telecommunication Networks." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00429092.

Full text
Abstract:
The first part of this thesis deals with traffic grooming in telecommunication networks. Traffic grooming consists in aggregating low-rate flows into higher-rate pipes. However, each insertion or extraction of traffic on a wavelength requires placing an add-drop multiplexer (ADM) at the network node. Moreover, one ADM is needed for each wavelength used at the node, which represents a significant equipment cost. The objectives of traffic grooming are, on the one hand, the efficient sharing of bandwidth and, on the other hand, the reduction of the cost of the routing equipment. We present inapproximability results, approximation algorithms, a new model that allows the network to route any request graph of bounded degree, as well as optimal solutions for two scenarios with all-to-all traffic: the bidirectional ring, and the unidirectional ring with a grooming factor that changes dynamically. The second part of the thesis deals with the problems of finding subgraphs under degree constraints. This class of problems is more general than traffic grooming, which is a particular case. The goal is to find subgraphs of a given graph with constraints on the degree, while optimizing a parameter of the graph (very often, the number of vertices or edges). We present approximation algorithms, inapproximability results, studies on parameterized complexity, exact algorithms for planar graphs, as well as a general methodology that allows this class of problems (and, more generally, the class of problems whose solutions can be encoded by a partition of a subset of the vertices) to be solved efficiently for graphs embedded in a surface. Finally, several appendices present results on related problems.
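To make the grooming objective concrete, here is a small sketch (with an invented request set and wavelength assignments, not an instance from the thesis) of how the ADM count depends on how requests sharing endpoints are packed onto wavelengths.

```python
def count_adms(assignment):
    """Count add-drop multiplexers: one ADM is needed at every (node, wavelength)
    pair where some request assigned to that wavelength starts or ends.

    assignment: dict wavelength -> list of (source, destination) requests.
    """
    return sum(len({node for s, d in reqs for node in (s, d)})
               for reqs in assignment.values())

# Grooming factor 2 on a 4-node ring: two ways to pack the same four requests.
reqs = [(0, 1), (1, 2), (2, 3), (3, 0)]
good = {0: [(0, 1), (1, 2)], 1: [(2, 3), (3, 0)]}   # endpoints shared -> 6 ADMs
bad  = {0: [(0, 1), (2, 3)], 1: [(1, 2), (3, 0)]}   # no sharing       -> 8 ADMs
print(count_adms(good), count_adms(bad))
```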
10

Von, Eden Elric Omar. "Optical arbitrary waveform generation using chromatic dispersion in silica fibers." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/24780.

Full text
11

Angilella, Vincent. "Design optimal des réseaux Fiber To The Home." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0004/document.

Full text
Abstract:
For operators, FTTH networks are the most widespread solution to the increasing traffic demand. Their layout requires a huge investment. The aim of this work is to ensure a cost effective deployment of quality networks. We start by presenting aspects of this network design problem which make it a complex problem. The related literature is reviewed to highlight the novel issues that we solve. Then, we elaborate strategies to find the best solution in different contexts. Several policies regarding maintenance or civil engineering use will be investigated. The problems encountered are tackled using several combinatorial optimization tools (integer programming, valid inequalities, dynamic programming, approximations, complexity theory, inapproximability…) which will be developed according to our needs. The proposed solutions were tested and validated on real-life instances, and are meant to be implemented in a network planning tool from Orange
12

Tzamos, Christos. "The complexity of optimal mechanism design." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82373.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 63-64).
Myerson's seminal work provides a computationally efficient revenue-optimal auction for selling one item to multiple bidders. Generalizing this work to selling multiple items at once has been a central question in economics and algorithmic game theory, but its complexity has remained poorly understood. We answer this question by showing that a revenue-optimal auction in multi-item settings cannot be found and implemented computationally efficiently, unless ZPP ⊇ P^#P. This is true even for a single additive bidder whose values for the items are independently distributed on two rational numbers with rational probabilities. Our result is very general: we show that it is hard to compute any encoding of an optimal auction of any format (direct or indirect, truthful or non-truthful) that can be implemented in expected polynomial time. In particular, under well-believed complexity-theoretic assumptions, revenue-optimization in very simple multi-item settings can only be tractably approximated. We note that our hardness result applies to randomized mechanisms in a very simple setting, and is not an artifact of introducing combinatorial structure to the problem by allowing correlation among item values, introducing combinatorial valuations, or requiring the mechanism to be deterministic (whose structure is readily combinatorial). Our proof is enabled by a flow interpretation of the solutions of an exponential-size linear program for revenue maximization with an additional supermodularity constraint.
by Christos Tzamos.
S.M.
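As background for the single-item benchmark the abstract starts from, the sketch below computes Myerson's virtual valuation and the resulting reserve price for a uniform value distribution; the distribution and the grid search are illustrative choices, not part of the thesis.

```python
import numpy as np

def virtual_value(v, cdf, pdf):
    """Myerson's virtual valuation phi(v) = v - (1 - F(v)) / f(v)."""
    return v - (1.0 - cdf(v)) / pdf(v)

# Uniform[0, 1] bidder values: F(v) = v, f(v) = 1, so phi(v) = 2v - 1.
cdf = lambda v: v
pdf = lambda v: 1.0

# For "regular" distributions, the revenue-optimal single-item auction is a
# second-price auction with a reserve price r solving phi(r) = 0.
grid = np.linspace(0.0, 1.0, 100_001)
reserve = grid[np.argmin(np.abs(virtual_value(grid, cdf, pdf)))]
print(f"optimal reserve for U[0,1] values: {reserve:.3f}")   # ~0.5
```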
13

Nakache, Elie. "Chemin optimal, conception et amélioration de réseaux sous contrainte de distance." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4023/document.

Full text
Abstract:
In this thesis, we investigate several combinatorial optimization problems and characterize their computational complexity and approximability by providing polynomial reductions and exact or approximation algorithms. In particular, we study the problem of finding, in a vertex-labeled directed acyclic graph, a path collecting a maximum number of distinct labels. We prove that no polynomial time constant factor approximation algorithm exists for this problem. Furthermore, we describe a scheme that produces, for any $\epsilon > 0$, a polynomial time algorithm that computes a solution collecting $O(OPT^{1-\epsilon})$ labels. Then, we study several variants of the minimum cost spanning tree problem that take into account distance and betweenness constraints. We prove that most of these problems can be solved in polynomial time using a reduction to the weighted matroid intersection problem. For another problem, we give a factor 2 approximation algorithm and prove the optimality of this ratio. Finally, we study a network improvement problem from a cost sharing perspective. We establish that the cost function corresponding to this problem is submodular and use this result to derive a cost sharing mechanism having several good properties.
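A brute-force sketch of the first problem mentioned above (collecting a maximum number of distinct labels along a path of a vertex-labelled DAG); since the problem is hard to approximate, exhaustive search is only meant to clarify the objective on a toy instance.

```python
def max_distinct_labels(dag, labels):
    """Exhaustive search (toy sizes only) for a path in a vertex-labelled DAG
    that collects the largest number of distinct labels."""
    best = 0

    def dfs(v, collected):
        nonlocal best
        collected = collected | {labels[v]}
        best = max(best, len(collected))
        for w in dag.get(v, []):
            dfs(w, collected)

    for start in dag:
        dfs(start, set())
    return best

# Toy DAG (edges go left to right); vertices carry colour labels.
dag = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
labels = {0: 'red', 1: 'blue', 2: 'red', 3: 'green', 4: 'blue'}
print(max_distinct_labels(dag, labels))   # 3, e.g. 0 -> 1 -> 3 collects {red, blue, green}
```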
14

余鳳玲 and Fung-ling Yue. "On the complexity of finding optimal edge rankings." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B30148881.

Full text
15

Yue, Fung-ling. "On the complexity of finding optimal edge rankings /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18540247.

Full text
16

Gelashvili, Rati. "Leader election and renaming with optimal message complexity." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/89859.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-68).
An asynchronous message-passing system is a standard distributed model, where n processors communicate over unreliable channels, controlled by a strong adaptive adversary. The asynchronous nature of the system and the fact that [...]
by Rati Gelashvili.
S.M.
17

Dahmen, Wolfgang, Helmut Harbrecht, and Reinhold Schneider. "Compression Techniques for Boundary Integral Equations - Optimal Complexity Estimates." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600464.

Full text
Abstract:
In this paper matrix compression techniques in the context of wavelet Galerkin schemes for boundary integral equations are developed and analyzed that exhibit optimal complexity in the following sense. The fully discrete scheme produces approximate solutions within discretization error accuracy offered by the underlying Galerkin method at a computational expense that is proven to stay proportional to the number of unknowns. Key issues are the second compression, that reduces the near field complexity significantly, and an additional a-posteriori compression. The latter one is based on a general result concerning an optimal work balance, that applies, in particular, to the quadrature used to compute the compressed stiffness matrix with sufficient accuracy in linear time. The theoretical results are illustrated by a 3D example on a nontrivial domain.
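The following toy numpy experiment illustrates only the basic idea behind wavelet matrix compression (transform a kernel matrix into an orthonormal Haar basis and drop small entries), not the level-dependent first/second/a-posteriori compression rules analyzed in the paper; the kernel and threshold are arbitrary choices.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar analysis matrix of size n (n must be a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                   # scaling (coarse) part
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])  # wavelet (detail) part
    return np.vstack([top, bottom]) / np.sqrt(2.0)

n = 128
x = (np.arange(n) + 0.5) / n
# Toy singular kernel as a stand-in for a boundary-integral stiffness matrix.
A = 1.0 / (np.abs(x[:, None] - x[None, :]) + 1.0 / n)

W = haar_matrix(n)
B = W @ A @ W.T                                    # operator in the wavelet basis
threshold = 1e-3 * np.max(np.abs(B))
B_compressed = np.where(np.abs(B) >= threshold, B, 0.0)

kept = np.count_nonzero(B_compressed) / B.size
err = np.linalg.norm(W.T @ B_compressed @ W - A) / np.linalg.norm(A)
print(f"kept {kept:.1%} of the entries, relative error {err:.2e}")
```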
18

Kim, Hyungjoon. "Low-Complexity Mode Selection for Rate-Distortion Optimal Video Coding." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14513.

Full text
Abstract:
The primary objective of this thesis is to provide a low-complexity rate-distortion optimal coding mode selection method in digital video encoding. To achieve optimal compression efficiency in the rate-distortion framework with low computational complexity, we first propose a rate-distortion model and then apply it to the coding mode selection problem. The computational complexity of the proposed method is very low compared to overall encoder complexity because the proposed method uses simple image properties such as variance that can be obtained easily. Also, the proposed method gives significant PSNR gains over the mode selection scheme used in TM5 for MPEG-2 because the rate-distortion model considers rate constraints of each mode as well as distortion. We extend the model-based mode selection approach to motion vector selection for further improvement of the coding efficiency. In addition to our theoretical work, we present practical solutions to real-time implementation of encoder modules including our proposed mode selection method on digital signal processors. First, we investigate the features provided by most of the recent digital signal processors, for example, hierarchical memory structure and efficient data transfer between on-chip and off-chip memory, and then present practical approaches for real-time implementation of a video encoder system with efficient use of the features.
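The mode decision the abstract refers to is conventionally formulated as a Lagrangian rate-distortion cost J = D + λR; the sketch below shows that formulation with made-up per-mode distortion and rate numbers (they are not measurements from the thesis).

```python
def best_mode(candidates, lam):
    """Lagrangian mode decision: choose the mode minimising J = D + lambda * R."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

# Hypothetical per-macroblock statistics: (mode, SSD distortion, rate in bits).
candidates = [("INTRA", 1500.0, 96), ("INTER", 2000.0, 40), ("SKIP", 3200.0, 2)]
for lam in (5.0, 20.0, 60.0):
    print(f"lambda={lam:>4}: best mode = {best_mode(candidates, lam)[0]}")
# As lambda grows (tighter rate budget), the decision shifts INTRA -> INTER -> SKIP.
```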
19

Learned, Rachel E. "Low complexity optimal joint detection for over-saturated multiple access communications." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/9812.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (leaves 220-222).
by Rachel E. Learned.
Ph.D.
20

Ta, Thanh Thuy Tien. "New single machine scheduling problems with deadline for the characterization of optimal solutions." Thesis, Tours, 2018. http://www.theses.fr/2018TOUR4015/document.

Full text
Abstract:
We consider a single machine scheduling problem with deadlines and we want to characterize the set of optimal solutions, without enumerating them. We assume that jobs are numbered in EDD order and that this sequence is feasible. The key idea is to use the lattice of permutations and to associate the EDD sequence to the supremum permutation. In order to characterize a lot of solutions, we search for a feasible sequence as far as possible from the supremum. The distance is the level of the sequence in the lattice, which has to be minimum. This new objective function is investigated. Some polynomially solvable particular cases are identified, but the complexity of the general problem remains open. Some resolution methods, polynomial and exponential, are proposed and evaluated. The level of the sequence being related to the positions of jobs in the sequence, new objective functions related to job positions are identified and studied. The problem of minimizing the total weighted positions of jobs is proved to be strongly NP-hard. Some particular cases are investigated, and resolution methods are also proposed and evaluated.
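A minimal sketch of the EDD feasibility assumption the thesis starts from: sequence jobs in Earliest Due Date order and check that every deadline is met; the job data are invented.

```python
def edd_feasible(jobs):
    """Check whether sequencing jobs in Earliest Due Date order meets every deadline.

    jobs: list of (processing_time, deadline) pairs. EDD is optimal for feasibility
    on a single machine, so this also decides whether any feasible sequence exists.
    """
    t = 0
    for p, d in sorted(jobs, key=lambda job: job[1]):   # EDD order
        t += p
        if t > d:
            return False
    return True

print(edd_feasible([(2, 3), (1, 5), (3, 9)]))   # True:  2 <= 3, 3 <= 5, 6 <= 9
print(edd_feasible([(4, 3), (1, 5)]))           # False: first job already late
```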
21

Starrett, Dean. "Optimal Alignment of Multiple Sequence Alignments." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/194840.

Full text
Abstract:
An essential tool in biology is the alignment of multiple sequences. Biologists use multiple sequence alignments for tasks such as predicting protein structure and function, reconstructing phylogenetic trees, and finding motifs. Constructing high-quality multiple alignments is computationally hard, both in theory and in practice, and is typically done using heuristic methods. The majority of state-of-the-art multiple alignment programs employ a form and polish strategy, where in the construction phase, an initial multiple alignment is formed by progressively merging smaller alignments, starting with single sequences. Then in a local-search phase, the resulting alignment is polished by repeatedly splitting it into smaller alignments and re-merging. This merging of alignments, the basic computational problem in the construction and local-search phases of the best multiple alignment heuristics, is called the Aligning Alignments Problem. Under the sum-of-pairs objective for scoring multiple alignments, this problem may seem to be a simple extension of two-sequence alignment. It is proven here, however, that with affine gap costs (which are recognized as necessary to get biologically-informative alignments) the problem is NP-complete when gaps are counted exactly. Interestingly, this form of multiple alignment is polynomial-time solvable when we relax the exact count, showing that exact gap counts themselves are inherently hard in multiple sequence alignment. Unlike general multiple alignment however, we show that Aligning Alignments with affine gap costs and exact counts is tractable in practice, by demonstrating an effective algorithm and a fast implementation. Our software AlignAlign is both time- and space-efficient on biological data. Computational experiments on biological data show instances derived from standard benchmark suites can be optimally aligned with surprising efficiency, and experiments on simulated data show the time and space both scale well.
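For context on the objective mentioned above, here is a small sketch of column-wise sum-of-pairs scoring of a multiple alignment with linear gap costs; the affine, exactly counted gap costs studied in the dissertation would additionally require tracking gap openings across columns. Scores and sequences are illustrative.

```python
from itertools import combinations

def sum_of_pairs(alignment, match=1, mismatch=-1, gap=-2):
    """Column-wise sum-of-pairs score of a multiple alignment (linear gap costs).

    Each column contributes the pairwise score of every pair of rows; a pair of
    gap characters contributes nothing.
    """
    score = 0
    for column in zip(*alignment):
        for a, b in combinations(column, 2):
            if a == '-' and b == '-':
                continue
            if '-' in (a, b):
                score += gap
            elif a == b:
                score += match
            else:
                score += mismatch
    return score

alignment = ["GATT-ACA",
             "GA-TTACA",
             "GATT-AGA"]
print(sum_of_pairs(alignment))
```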
22

Touzeau, Valentin. "Analyse statique de caches LRU : complexité, analyse optimale, et applications au calcul de pire temps d'exécution et à la sécurité." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM041.

Full text
Abstract:
The certification of real-time safety-critical programs requires bounding their execution time. Due to the high impact of cache memories on memory access latency, modern Worst-Case Execution Time (WCET) estimation tools include a cache analysis. The aim of this analysis is to statically predict if memory accesses result in a cache hit or a cache miss. This problem is undecidable in general, thus usual cache analyses perform some abstractions that lead to precision loss. One common assumption made to remove the source of undecidability is that all execution paths in the program are feasible. Making this hypothesis is reasonable because the safety of the analysis is preserved when adding spurious paths to the program model. However, classifying memory accesses as cache hits or misses is still hard in practice under this assumption, and efficient cache analyses usually involve additional approximations, again leading to precision loss. This thesis investigates the possibility of performing an optimally precise cache analysis under the common assumption that all execution paths in the program are feasible. We formally define the problems of classifying accesses as hits and misses, and prove that they are NP-hard or PSPACE-hard for common replacement policies (LRU, FIFO, NMRU and PLRU). However, while these theoretical complexity results legitimate the use of additional abstractions, they do not preclude the existence of algorithms that are efficient in practice on industrial workloads. Because of the abstractions performed for efficiency reasons, cache analyses can usually classify accesses as Unknown in addition to Always-Hit (Must analysis) or Always-Miss (May analysis). Accesses classified as Unknown can lead to either a hit or a miss, depending on the program execution path followed. However, it can also be that they belong to one of the Always-Hit or Always-Miss categories and that the cache analysis failed to classify them correctly because of a coarse approximation. We thus designed a new analysis for LRU instruction caches that is able to soundly classify some accesses into a new category, called Definitely Unknown, that represents accesses that can lead to either a hit or a miss. For those accesses, one knows for sure that their classification does not result from a coarse approximation but is a consequence of the program structure and cache configuration. By doing so, we also reduce the set of accesses that are candidates for a refined classification using more powerful and more costly analyses. Our main contribution is an analysis that can perform an optimally precise analysis of LRU instruction caches. We use a method called block focusing that allows an analysis to scale by only analyzing one cache block at a time. We thus take advantage of the low number of candidates for refinement left by our Definitely Unknown analysis. This analysis produces an optimal classification of memory accesses at a reasonable cost (a few times the cost of the usual May and Must analyses). We evaluate the impact of our precise cache analysis on the pipeline analysis. Indeed, when the cache analysis is not able to classify an access as Always-Hit or Always-Miss, the pipeline analysis must consider both cases. By providing a more precise memory access classification, we thus reduce the state space explored by the pipeline analysis and hence the WCET analysis time. Aside from this application of precise cache analysis to WCET estimation, we investigate the possibility of using the Definitely Unknown analysis in the domain of security. Indeed, caches can be used as a side channel to extract some sensitive data from a program execution, and we propose a variation of our Definitely Unknown analysis to help a developer find the source of some information leakage.
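A toy sketch of the concrete semantics behind the hit/miss classification discussed above: simulate an LRU cache along each feasible path and compare the outcomes of one chosen access, which yields the Always-Hit / Always-Miss / Definitely Unknown trichotomy. The real analyses avoid path enumeration through abstract interpretation; the associativity and access sequences here are invented.

```python
from collections import OrderedDict

def lru_hits(path, ways):
    """Simulate a fully associative LRU cache with `ways` lines and return,
    for every access in `path`, whether it hits."""
    cache = OrderedDict()
    hits = []
    for block in path:
        hits.append(block in cache)
        cache.pop(block, None)
        cache[block] = True            # most recently used at the end
        if len(cache) > ways:
            cache.popitem(last=False)  # evict the least recently used line
    return hits

# Two feasible paths through the same toy program; the final access to 'a'
# is the one we want to classify.
paths = [list("aba"), list("abca")]
outcomes = {lru_hits(p, ways=2)[-1] for p in paths}
label = {frozenset({True}): "Always-Hit",
         frozenset({False}): "Always-Miss",
         frozenset({True, False}): "Definitely Unknown"}[frozenset(outcomes)]
print(label)   # Definitely Unknown: a hit on one path, a miss on the other
```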
23

Xu, Chong. "Reduced-complexity near-optimal Ant-Colony-aided multi-user detection for CDMA systems." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/206015/.

Full text
Abstract:
Reduced-complexity near-maximum-likelihood Ant-Colony Optimization (ACO) assisted Multi-User Detectors (MUDs) are proposed and investigated. The exhaustive search complexity of the optimal detection algorithm may be deemed excessive for practical applications. For example, a Space-Time Block Coded (STBC) two-transmit-antenna assisted K = 32-user system has to search through the candidate space 2^64 times per symbol duration for finding the final detection output, each time invoking the Euclidean-distance calculation of a 64-element complex-valued vector. Hence, near-optimal or near-ML MUDs are required in order to provide a near-optimal BER performance at a significantly reduced complexity. Specifically, the proposed ACO assisted MUD algorithms are investigated in the context of a Multi-Carrier DS-CDMA (MC DS-CDMA) system, in a Multi-Functional Antenna Array (MFAA) assisted MC DS-CDMA system and in a STBC aided DS-CDMA system. The ACO assisted MUD algorithm is shown to allow a fully loaded MU system to achieve a near-single-user performance, which is similar to that of the classic Minimum Mean Square Error (MMSE) detection algorithm. More quantitatively, when the STBC assisted system supports K = 32 users, the complexity imposed by the ACO based MUD algorithm is a fraction of 1 × 10^-18 of that of the full-search-based optimum MUD. In addition to the hard-decision based ACO aided MUD, a soft-output MUD was also developed, which was investigated in the context of an STBC assisted DS-CDMA system using a three-stage concatenated, iterative detection aided system. It was demonstrated that the soft-output system is capable of achieving the optimal performance of the Bayesian detection algorithm.
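The sketch below shows only the generic ant-colony search loop (pheromone-biased sampling plus reinforcement of the best candidate) on a stand-in binary objective; it is not the MUD-specific metric or parameterization used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def aco_binary_search(cost, n_bits, n_ants=20, n_iter=50, rho=0.1):
    """Generic ant-colony search over binary vectors.

    Each ant samples a candidate bit-by-bit with probabilities proportional to
    per-bit pheromone levels; the best candidate found so far reinforces its trail.
    """
    tau = np.full((n_bits, 2), 0.5)          # pheromone for bit value 0 / 1
    best, best_cost = None, np.inf
    for _ in range(n_iter):
        p1 = tau[:, 1] / tau.sum(axis=1)
        ants = (rng.random((n_ants, n_bits)) < p1).astype(int)
        costs = np.array([cost(a) for a in ants])
        i = np.argmin(costs)
        if costs[i] < best_cost:
            best, best_cost = ants[i].copy(), costs[i]
        tau *= (1.0 - rho)                    # evaporation
        tau[np.arange(n_bits), best] += rho   # reinforce the best trail found so far
    return best, best_cost

# Toy stand-in for the detection metric: distance to a hidden bit vector.
target = rng.integers(0, 2, 16)
cost = lambda b: np.sum(b != target)
print(aco_binary_search(cost, n_bits=16))
```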
24

Atassi, Vincent. "Typing and Optimal reduction for λ-calculus in variants of Linear logic for Implicit computational complexity." Paris 13, 2008. http://www.theses.fr/2008PA132038.

Full text
Abstract:
Lambda-calculus has been introduced to study mathematical functions from a computational point of view. It has then been used as a basis for the design of functional programming languages. Knowing whether there exists a provably most efficient method to reduce lambda-terms, and evaluating the complexity of this operation in general, are still open questions. In this thesis, we use the tools of typing, of Linear logic, of type inference and of Optimal reduction to explore those questions. We present a type inference algorithm for Dual light affine logic (DLAL), a type system which characterises the polynomial time complexity class. The algorithm takes as input a System F typed lambda-term, and outputs a typing in DLAL if there exists one. An implementation is provided. Then, we extend a type system based on Elementary affine logic with subtyping, in order to automate the placement of coercions. We show that subtyping indeed captures the coercions, and we give a fully-fledged type inference algorithm for this extended system. Finally, we adapt Lamping's Optimal reduction algorithm to the lambda-terms typable in Soft linear logic (SLL), also characterising polynomial time. We prove a complexity bound on the reduction of any sharing graph, and that lambda-terms typable in SLL can be correctly reduced with our ad-hoc Optimal reduction algorithm.
25

Renaud-Goud, Paul. "Energy-aware scheduling : complexity and algorithms." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00744247.

Full text
Abstract:
In this thesis we have tackled a few scheduling problems under energy constraint, since the energy issue is becoming crucial, for both economical and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria, which are period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network. We try to minimize the energy consumption in a dynamic frame. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we come back to streaming applications, but in the form of series-parallel graphs, and try to map them onto a chip multiprocessor. The design of a polynomial algorithm on a simple problem allows us to derive heuristics on the most general problem, whose NP-completeness has been proven. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared to the classical XY routing, and develop new routing heuristics. In the last chapter, we compare the performance of different algorithms of the literature that tackle the problem of mapping DAG applications to minimize the energy consumption.
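To illustrate the kind of trade-off studied in the first chapter, the sketch below combines classical greedy list scheduling for makespan with a toy cubic dynamic-power model; the task sizes, frequencies and energy-model constants are assumptions for illustration only.

```python
def list_schedule(tasks, n_proc):
    """Greedy list scheduling: assign each task to the currently least-loaded
    processor (longest-processing-time-first variant), a classical makespan heuristic."""
    loads = [0.0] * n_proc
    for w in sorted(tasks, reverse=True):
        loads[loads.index(min(loads))] += w
    return loads

def dynamic_energy(loads, freq, alpha=3.0):
    """Toy dynamic-power model: energy ~ sum over processors of f^(alpha-1) * work,
    i.e. running slower (lower f) saves energy at the cost of a longer makespan."""
    return sum(freq ** (alpha - 1) * load for load in loads)

tasks = [7, 5, 4, 3, 3, 2]
loads = list_schedule(tasks, n_proc=3)
print("makespan at nominal frequency:", max(loads))
for f in (1.0, 0.8, 0.5):
    print(f"f={f}: time={max(loads) / f:.1f}, energy={dynamic_energy(loads, f):.1f}")
```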
26

Hilliard, David (David John). "Achieving and sustaining an optimal product portfolio in the healthcare industry through SKU rationalization, complexity costing, and dashboards." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/73385.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 76).
After years of new product launches and entry into emerging markets, Company X, a healthcare company, has seen its product portfolio proliferate and bring costly complexity into its operations. Today, Company X seeks to achieve and sustain an optimal product offering that meets its customers' needs. Through a six-month research effort, we develop a process for stock-keeping-unit (SKU) rationalization to reduce SKU complexity while maintaining sales volumes. We also implement operational models to compute complexity costs associated with SKU complexity and employ SKU portfolio dashboards to monitor SKU development and govern SKU creation. This thesis discusses a process for applying these tools to any healthcare company. Through two case studies, we apply the rationalization process on one pilot brand and develop a dashboard to improve product portfolio management. We expect that the SKU rationalization process will release 38% of avoidable costs associated with the pilot brand. These case studies also provide insight into how to correctly diagnose the cost-reduction opportunity associated with SKU complexity, as well as methods for a step-change improvement in lead times and cost reduction. Lastly, removal of complexity provides flexibility to capture other business opportunities.
by David Hilliard.
S.M.
M.B.A.
27

Brun, Jean-Marc. "Modèles à complexité réduite de transport pour applications environnementales." Montpellier 2, 2007. http://www.theses.fr/2007MON20248.

Full text
Abstract:
A platform of low-complexity models for the transport of passive scalars for environmental applications is presented. A multi-level analysis has been used, with a reduction in dimension of the solution space at each level. The local spray drift distribution is estimated thanks to turbulent jet theory and determines the source term for the higher level. Similarity solutions are used in a non-symmetric metric for the transport over long distances. Model parameter identification is based on data assimilation. The approach does not require the solution of any PDE and is therefore mesh-free. The model also permits accessing the solution at one point without computing the solution over the whole domain.
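As a point of comparison for the similarity solutions mentioned above, here is the classical Gaussian plume formula for a passive scalar released from a point source (with ground reflection); the dispersion coefficients and numbers are illustrative, and this is not the reduced, non-symmetric-metric model developed in the thesis.

```python
import numpy as np

def gaussian_plume(x, y, z, q, u, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (with ground reflection) for a
    point source of strength q [g/s] in a wind u [m/s] along x, released at height h.

    This is the classical similarity-type solution for long-range transport of a
    passive scalar; sigma_y(x) and sigma_z(x) are dispersion coefficients.
    """
    sy, sz = sigma_y(x), sigma_z(x)
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = np.exp(-(z - h)**2 / (2 * sz**2)) + np.exp(-(z + h)**2 / (2 * sz**2))
    return q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Crude power-law dispersion coefficients (illustrative, not a calibrated model).
sigma_y = lambda x: 0.08 * x**0.9
sigma_z = lambda x: 0.06 * x**0.85

# Ground-level concentration 500 m downwind, on the plume centreline.
print(gaussian_plume(x=500.0, y=0.0, z=0.0, q=1.0, u=3.0, h=10.0,
                     sigma_y=sigma_y, sigma_z=sigma_z))
```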
28

Perinelli, Alessio. "A new approach to optimal embedding of time series." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/280754.

Full text
Abstract:
The analysis of signals stemming from a physical system is crucial for the experimental investigation of the underlying dynamics that drives the system itself. The field of time series analysis comprises a wide variety of techniques developed with the purpose of characterizing signals and, ultimately, of providing insights on the phenomena that govern the temporal evolution of the generating system. A renowned example in this field is given by spectral analysis: the use of Fourier or Laplace transforms to bring time-domain signals into the more convenient frequency space allows to disclose the key features of linear systems. A more complex scenario turns up when nonlinearity intervenes within a system's dynamics. Nonlinear coupling between a system's degrees of freedom brings about interesting dynamical regimes, such as self-sustained periodic (though anharmonic) oscillations ("limit cycles"), or quasi-periodic evolutions that exhibit sharp spectral lines while lacking strict periodicity ("limit tori"). Among the consequences of nonlinearity, the onset of chaos is definitely the most fascinating one. Chaos is a dynamical regime characterized by unpredictability and lack of periodicity, despite being generated by deterministic laws. Signals generated by chaotic dynamical systems appear as irregular: the corresponding spectra are broad and flat, prediction of future values is challenging, and evolutions within the systems' state spaces converge to strange attractor sets with noninteger dimensionality. Because of these properties, chaotic signals can be mistakenly classified as noise if linear techniques such as spectral analysis are used. The identification of chaos and its characterization require the assessment of dynamical invariants that quantify the complex features of a chaotic system's evolution. For example, Lyapunov exponents provide a marker of unpredictability; the estimation of attractor dimensions, on the other hand, highlights the unconventional geometry of a chaotic system's state space. Nonlinear time series analysis techniques act directly within the state space of the system under investigation. However, experimentally, full access to a system's state space is not always available. Often, only a scalar signal stemming from the dynamical system can be recorded, thus providing, upon sampling, a scalar sequence. Nevertheless, by virtue of a fundamental theorem by Takens, it is possible to reconstruct a proxy of the original state space evolution out of a single, scalar sequence. This reconstruction is carried out by means of the so-called embedding procedure: m-dimensional vectors are built by picking successive elements of the scalar sequence delayed by a lag L. On the other hand, besides posing some necessary conditions on the integer embedding parameters m and L, Takens' theorem does not provide any clue on how to choose them correctly. Although many optimal embedding criteria were proposed, a general answer to the problem is still lacking. As a matter of fact, conventional methods for optimal embedding are flawed by several drawbacks, the most relevant being the need for a subjective evaluation of the outcomes of applied algorithms. Tackling the issue of optimally selecting embedding parameters makes up the core topic of this thesis work. In particular, I will discuss a novel approach that was pursued by our research group and that led to the development of a new method for the identification of suitable embedding parameters. 
Unlike most conventional approaches, which seek a single optimal value for m and L to embed an input sequence, our approach provides a set of embedding choices that are equivalently suitable to reconstruct the dynamics. The suitability of each embedding choice m, L is assessed by relying on statistical testing, thus providing a criterion that does not require a subjective evaluation of outcomes. The starting point of our method is given by embedding-dependent correlation integrals, i.e. cumulative distributions of embedding vector distances, built out of an input scalar sequence. In the case of Gaussian white noise, an analytical expression for correlation integrals is available, and, by exploiting this expression, a gauge transformation of distances is introduced to provide a more convenient representation of correlation integrals. Under this new gauge, it is possible to test, in a computationally undemanding way, whether an input sequence is compatible with Gaussian white noise and, subsequently, whether the sequence is compatible with the hypothesis of an underlying chaotic system. These two statistical tests allow ruling out embedding choices that are unsuitable to reconstruct the dynamics. The estimation of correlation dimension, carried out by means of a newly devised estimator, makes up the third stage of the method: sets of embedding choices that provide uniform estimates of this dynamical invariant are deemed to be suitable to embed the sequence. The method was successfully applied to synthetic and experimental sequences, providing new insight into the longstanding issue of optimal embedding. For example, the relevance of the embedding window (m-1)L, i.e. the time span covered by each embedding vector, is naturally highlighted by our approach. In addition, our method provides some information on the adequacy of the sampling period used to record the input sequence. The method correctly distinguishes a chaotic sequence from surrogate ones generated out of it and having the same power spectrum. The technique of surrogate generation, which I also addressed during my Ph.D. work to develop new dedicated algorithms and to analyze brain signals, makes it possible to estimate significance levels in situations where standard analytical algorithms are inapplicable. The fact that the novel embedding approach can tell an original sequence apart from its surrogates shows its capability to distinguish signals beyond their spectral (or autocorrelation) similarities. One of the possible applications of the new approach concerns another longstanding issue, namely that of distinguishing noise from chaos. For this purpose, complementary information is provided by analyzing the asymptotic (long-time) behaviour of the so-called time-dependent divergence exponent. This embedding-dependent metric is commonly used to estimate, by processing its short-time linearly growing region, the maximum Lyapunov exponent out of a scalar sequence. However, insights into the kind of source generating the sequence can be extracted from the usually overlooked asymptotic behaviour of the divergence exponent. Moreover, in the case of chaotic sources, this analysis also provides a precise estimate of the system's correlation dimension. Besides describing the results concerning the discrimination of chaotic systems from noise sources, I will also discuss the possibility of using the related correlation dimension estimates to improve the third stage of the method introduced above for the identification of suitable embedding parameters.
The discovery of chaos as a possible dynamical regime for nonlinear systems led to the search for chaotic behaviour in experimental recordings. In some fields, this search gave plenty of positive results: for example, chaotic dynamics was successfully identified and tamed in electronic circuits and laser-based optical setups. These two families of experimental chaotic systems eventually became versatile tools to study chaos and its possible applications. On the other hand, chaotic behaviour is also looked for in climate science, biology, neuroscience, and even economics. In these fields, nonlinearity is widespread: many smaller units interact nonlinearly, yielding a collective motion that can be described by means of a few nonlinearly coupled effective degrees of freedom. The corresponding recorded signals exhibit, in many cases, an irregular and complex evolution. A possible underlying chaotic evolution, as opposed to a stochastic one, would be of interest both to reveal the presence of determinism and to predict the system's future states. While some claims concerning the existence of chaos in these fields have been made, most results are debated or inconclusive. Nonstationarity, low signal-to-noise ratio, external perturbations and poor reproducibility are just a few of the issues that hinder the search for chaos in natural systems. In the final part of this work, I will briefly discuss the problem of chasing chaos in experimental recordings by considering two example sequences, the first one generated by an electronic circuit and the second one corresponding to recordings of brain activity. The present thesis is organized as follows. The core concepts of time series analysis, including the key features of chaotic dynamics, are presented in Chapter 1. A brief review of the search for chaos in experimental systems is also provided; the difficulties concerning this quest in some research fields are also highlighted. Chapter 2 describes the embedding procedure and the issue of optimally choosing the related parameters. Thereupon, existing methods to carry out the embedding choice are reviewed and their limitations are pointed out. In addition, two embedding-dependent nonlinear techniques that are ordinarily used to characterize chaos, namely the estimation of the correlation dimension by means of correlation integrals and the assessment of the maximum Lyapunov exponent, are presented. The new approach for the identification of suitable embedding parameters, which makes up the core topic of the present thesis work, is the subject of Chapters 3 and 4. While Chapter 3 contains the theoretical outline of the approach, as well as its implementation details, Chapter 4 discusses the application of the approach to benchmark synthetic and experimental sequences, thus illustrating its strengths and its limitations. The study of the asymptotic behaviour of the time-dependent divergence exponent is presented in Chapter 5. The alternative estimator of correlation dimension, which relies on this asymptotic metric, is discussed as a possible improvement to the approach described in Chapters 3 and 4. The search for chaos in experimental data is discussed in Chapter 6 by means of two examples of real-world recordings. Concluding remarks are finally drawn in Chapter 7.
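To make the embedding step described above concrete, here is a minimal Python sketch (not taken from the thesis) of the delay-vector construction and of an embedding-dependent correlation integral; the test sequence, the embedding parameters m and L, and the radii are illustrative assumptions only.

    import numpy as np

    def delay_embed(x, m, L):
        """Build m-dimensional delay vectors with lag L from a scalar sequence x."""
        n_vectors = len(x) - (m - 1) * L
        return np.column_stack([x[i * L : i * L + n_vectors] for i in range(m)])

    def correlation_integral(vectors, r):
        """Fraction of vector pairs closer than r (the correlation sum C(r))."""
        d = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
        iu = np.triu_indices(len(vectors), k=1)
        return np.mean(d[iu] < r)

    # Example: embed a noisy sine and evaluate C(r) for a few radii.
    x = np.sin(0.2 * np.arange(2000)) + 0.05 * np.random.randn(2000)
    V = delay_embed(x, m=3, L=8)
    print([round(correlation_integral(V[:500], r), 3) for r in (0.1, 0.5, 1.0)])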
APA, Harvard, Vancouver, ISO, and other styles
29

Otten, Edward W. "The Influence of Stimulus Complexity and Perception-action Coupling on Postural Sway." Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1218562177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Salah, Abdellatif. "Schémas de décodage MIMO à Complexité Réduite." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00682392.

Full text
Abstract:
The use of MIMO antennas is a technique that makes it possible to exploit very efficiently the spatial and temporal diversity present in certain communication systems, including the wireless channel. The main advantage of this technique is a very high spectral efficiency. Nowadays, as the mobile radio channel is increasingly used to transmit all kinds of information, methods that allow a more efficient use of the electromagnetic spectrum are of fundamental importance. The reception algorithms known today are very complex, even for MIMO systems with the simplest space-time codes. This complexity remains one of the main obstacles to practical deployment. This thesis presents a very detailed study of the complexity, the performance and the most interesting aspects of the behaviour of reception algorithms for MIMO decoding, a study that provides a fast route towards designing architectures suited to this problem. Among the topics covered in this thesis, an in-depth study of the performance and complexity of these algorithms was carried out, with the objective of gaining sufficient knowledge to be able to choose, among the large number of known algorithms, the one best suited to each particular system. Improvements to known algorithms were also proposed and analyzed.
APA, Harvard, Vancouver, ISO, and other styles
31

Fielbaum, Schnitzler Andrés Salomón. "Effects of the introduction of spatial and temporal complexity on the optimal design, economies of scale and pricing of public transport." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171789.

Full text
Abstract:
Thesis submitted for the degree of Doctor in Engineering Systems
In this thesis we study microeconomic models for the strategic design of bus-based public transport, incorporating the effects of both the spatial composition of travel demand, together with the need to represent it on a network, and the heterogeneity in the number of trips made during different periods of the day. This is done by adding spatial and temporal complexity to the classical single-line models studied by Jansson (1980) and Jara-Díaz and Gschwender (2009). For the spatial analysis, we study the optimal design of line structures (that is, the set of routes of the public transport lines) on the urban model proposed by Fielbaum et al (2016, 2017), which is based on the hierarchy among the centres of the city, and we analyse the results of the heuristic approach, the presence of economies of scale and their sources, and the spatial density of lines. Regarding the heuristic approach, we compare the four basic structures proposed by Fielbaum et al (2016) with those resulting from four heuristics previously proposed in the literature. Scale phenomena are analysed through the definition of the concept of directness, which shows that as passenger flow increases the system prioritizes routes that minimize transfers, stops and the lengths of passengers' trips; in other words, this is a new source of economies of scale, which makes it possible to study the effects of this phenomenon on optimal fares and subsidies. When the spatial density of lines is incorporated as a design variable, it is shown to grow with the number of passengers, always keeping access costs equal to waiting costs in the system, exhibiting a certain degree of substitution with the level of directness and constituting a new source of economies of scale. The temporal heterogeneity of demand is analysed by studying single-line models with two periods: peak and off-peak. The system is optimized under different modes of operation, namely a single fleet, an independent fleet for each period, and two fleets operating jointly in the peak period (with only one of them running off-peak); the system with two simultaneous fleets is the most efficient, being slightly better than the single-fleet one. The solutions are compared with those obtained when only one period is considered, and the cross effects between periods are identified. Additionally, second-best strategies are studied by comparing the optimization of the system according to the characteristics of the peak period, with a sub-fleet used for the off-peak period, against the reverse strategy: as a result, an approximate rule is to prioritize the period in which the total number of passengers (over its whole duration) is larger.
APA, Harvard, Vancouver, ISO, and other styles
32

Valkanova, Elena. "Algorithms for simple stochastic games." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003070.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Brancotte, Bryan. "Agrégation de classements avec égalités : algorithmes, guides à l'utilisateur et applications aux données biologiques." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112184/document.

Full text
Abstract:
L'agrégation de classements consiste à établir un consensus entre un ensemble de classements (éléments ordonnés). Bien que ce problème ait de très nombreuses applications (consensus entre les votes d'utilisateurs, consensus entre des résultats ordonnés différemment par divers moteurs de recherche...), calculer un consensus exact est rarement faisable dans les cas d'applications réels (problème NP-difficile). De nombreux algorithmes d'approximation et heuristiques ont donc été conçus. Néanmoins, leurs performances (en temps et en qualité de résultat produit) sont très différentes et dépendent des jeux de données à agréger. Plusieurs études ont cherché à comparer ces algorithmes mais celles-ci n’ont généralement pas considéré le cas (pourtant courant dans les jeux de données réels) des égalités entre éléments dans les classements (éléments classés au même rang). Choisir un algorithme de consensus adéquat vis-à-vis d'un jeu de données est donc un problème particulièrement important à étudier (grand nombre d’applications) et c’est un problème ouvert au sens où aucune des études existantes ne permet d’y répondre. Plus formellement, un consensus de classements est un classement qui minimise le somme des distances entre ce consensus et chacun des classements en entrés. Nous avons considérés (comme une grande partie de l’état-de-art) la distance de Kendall-Tau généralisée, ainsi que des variantes, dans nos études. Plus précisément, cette thèse comporte trois contributions. Premièrement, nous proposons de nouveaux résultats de complexité associés aux cas que l'on rencontre dans les données réelles où les classements peuvent être incomplets et où plusieurs éléments peuvent être classés à égalité. Nous isolons les différents « paramètres » qui peuvent expliquer les variations au niveau des résultats produits par les algorithmes d’agrégation (par exemple, utilisation de la distance de Kendall-Tau généralisée ou de variantes, d’un pré-traitement des jeux de données par unification ou projection). Nous proposons un guide pour caractériser le contexte et le besoin d’un utilisateur afin de le guider dans le choix à la fois d’un pré-traitement de ses données mais aussi de la distance à choisir pour calculer le consensus. Nous proposons finalement une adaptation des algorithmes existants à ce nouveau contexte. Deuxièmement, nous évaluons ces algorithmes sur un ensemble important et varié de jeux de données à la fois réels et synthétiques reproduisant des caractéristiques réelles telles que similarité entre classements, la présence d'égalités, et différents pré-traitements. Cette large évaluation passe par la proposition d’une nouvelle méthode pour générer des données synthétiques avec similarités basée sur une modélisation en chaîne Markovienne. Cette évaluation a permis d'isoler les caractéristiques des jeux de données ayant un impact sur les performances des algorithmes d'agrégation et de concevoir un guide pour caractériser le besoin d'un utilisateur et le conseiller dans le choix de l'algorithme à privilégier. Une plateforme web permettant de reproduire et étendre ces analyses effectuée est disponible (rank-aggregation-with-ties.lri.fr). Enfin, nous démontrons l'intérêt d'utiliser l'approche d'agrégation de classements dans deux cas d'utilisation. 
Nous proposons un outil reformulant à-la-volé des requêtes textuelles d'utilisateur grâce à des terminologies biomédicales, pour ensuite interroger de bases de données biologiques, et finalement produire un consensus des résultats obtenus pour chaque reformulation (conqur-bio.lri.fr). Nous comparons l'outil à la plateforme de références et montrons une amélioration nette des résultats en qualité. Nous calculons aussi des consensus entre liste de workflows établie par des experts dans le contexte de la similarité entre workflows scientifiques. Nous observons que les consensus calculés sont très en accord avec les utilisateurs dans une large proportion de cas
The rank aggregation problem consists in building a consensus among a set of rankings (ordered elements). Although this problem has numerous applications (consensus among user votes, consensus between results ordered differently by different search engines, etc.), computing an optimal consensus is rarely feasible in real applications (the problem is NP-hard). Many approximation algorithms and heuristics have therefore been designed. However, their performance (in computing time and in the quality of the consensus produced) differs widely and depends on the datasets to be aggregated. Several studies have compared these algorithms, but they have generally not considered the case (though common in real datasets) where elements can be tied in rankings (elements placed at the same rank). Choosing a consensus algorithm for a given dataset is therefore a particularly important issue to study (many applications), and it is an open problem in the sense that none of the existing studies addresses it. More formally, a consensus ranking is a ranking that minimizes the sum of the distances between this consensus and the input rankings. Like much of the state of the art, we considered in our studies the generalized Kendall-Tau distance and its variants. Specifically, this thesis makes three contributions. First, we propose new complexity results associated with the cases encountered in real data, where rankings may be incomplete and where multiple elements may be tied. We isolate the different "features" that can explain variations in the results produced by the aggregation algorithms (for example, using the generalized Kendall-Tau distance or variants, pre-processing the datasets with unification or projection). We propose a guide to characterize the context and the needs of a user, in order to guide the choice of both a pre-processing of the datasets and the distance to use to compute the consensus. We finally adapt existing algorithms to this new context. Second, we evaluate these algorithms on a large and varied set of datasets, both real and synthetic, reproducing real features such as similarity between rankings, the presence of ties, and different pre-processings. This large evaluation comes with the proposal of a new method to generate synthetic data with similarities, based on Markov-chain modeling. This evaluation made it possible to isolate the dataset features that impact the performance of the aggregation algorithms, and to design a guide to characterize the needs of a user and advise on the algorithm to use. A web platform to replicate and extend these analyses is available (rank-aggregation-with-ties.lri.fr). Finally, we demonstrate the value of the rank aggregation approach in two use cases. We provide a tool that reformulates user text queries on the fly using biomedical terminologies, then queries biological databases, and finally produces a consensus of the results obtained for each reformulation (conqur-bio.lri.fr). We compare the tool to the reference platform and show a clear improvement in the quality of the results. We also compute consensus rankings between lists of workflows established by experts in the context of similarity between scientific workflows. We observe that the computed consensuses agree with the experts in a very large proportion of cases.
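As a small illustration of the distance underlying the consensus problem discussed above, the following Python sketch (not the thesis code) computes a generalized Kendall-Tau distance with ties for two complete rankings; the dict-based representation and the tie penalty p are assumptions made for the example. A consensus ranking then minimizes the sum of such distances to all input rankings.

    from itertools import combinations

    def kendall_tau_generalized(r1, r2, p=1.0):
        """Generalized Kendall-Tau distance between two complete rankings with ties.

        r1, r2: dicts mapping each element to its rank (tied elements share a rank).
        p: penalty for a pair tied in one ranking but strictly ordered in the other.
        """
        dist = 0.0
        for a, b in combinations(r1, 2):
            d1 = r1[a] - r1[b]
            d2 = r2[a] - r2[b]
            if d1 * d2 < 0:                  # strictly opposite orders
                dist += 1.0
            elif (d1 == 0) != (d2 == 0):     # tied in exactly one ranking
                dist += p
        return dist

    # Toy example: three elements, one tie.
    r1 = {"x": 1, "y": 2, "z": 2}
    r2 = {"x": 2, "y": 1, "z": 3}
    print(kendall_tau_generalized(r1, r2))   # 1 disagreement + 1 tie mismatch = 2.0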
APA, Harvard, Vancouver, ISO, and other styles
34

Yeleswarapu, Radhika M. "Scheduling Of 2-Operation Jobs On A Single Machine To Minimize The Number Of Tardy Jobs." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ayala, Obregón Alan. "Complexity reduction methods applied to the rapid solution to multi-trace boundary integral formulations." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS581.

Full text
Abstract:
L'objectif de cette thèse est de fournir des techniques de réduction de complexité pour la solution des équations intégrales de frontière (BIE). En particulier, nous sommes intéressés par les BIE issues de la modélisation des problèmes acoustiques et électromagnétiques via les méthodes des éléments de frontière (BEM). Nous utilisons la formulation multi-trace locale pour laquelle nous trouvons une expression explicite pour l’inverse de l'opérateur multi-trace pour un problème modèle de diffusion. Ensuite, nous proposons cet opérateur inverse pour préconditionner des problèmes de diffusion plus générales. Nous montrons également que la formulation multi-trace locale est stable pour les équations de Maxwell posées sur un domaine particulier. Nous posons les problèmes de type BEM dans le cadre des matrices hiérarchiques, pour lesquelles c'est possible d'identifier sous-matrices admettant des approximations de rang faible (blocs admissibles). Nous introduisons une technique appelée échantillonnage géométrique qui utilise des structures d'arbre pour créer des algorithmes CUR en complexité linéaire, lesquelles sont orientés pour créer des algorithmes parelles avec communication optimale. Finalement, nous étudions des méthodes QR et itération sur sous-espaces; pour le premier, nous fournissons de nouvelles bornes pour l’erreur d’approximation, et pour le deuxième nous résolvons une question ouverte dans la littérature consistant à prouver que l'approximation des vecteurs singuliers converge exponentiellement. Enfin, nous proposons une technique appelée approximation affine de rang faible destinée à accroître la précision des méthodes classiques d’approximation de rang faible
In this thesis, we provide complexity reduction techniques for the solution of Boundary Integral Equations (BIE). In particular, we focus on BIE arising from the modeling of acoustic and electromagnetic problems via Boundary Element Methods (BEM). We use the local multi-trace formulation, which is friendly to operator preconditioning. We find a closed-form inverse of the local multi-trace operator for a model problem and then propose this inverse operator for preconditioning general scattering problems. Moreover, we show that the local multi-trace formulation is stable for Maxwell equations posed on a particular domain configuration. For general problems where BEM are applied, we propose to use the framework of hierarchical matrices, which are constructed using cluster trees and make it possible to represent the original matrix in such a way that submatrices admitting low-rank approximations (admissible blocks) are well identified. We introduce a technique called geometric sampling, which uses cluster trees to create accurate linear-time CUR algorithms for the compression and matrix-vector product acceleration of admissible matrix blocks, and which is oriented towards developing parallel communication-avoiding algorithms. We also contribute to the approximation theory of QR and subspace iteration methods; for the former we provide new bounds on the approximation error, and for the latter we solve an open question in the literature consisting in proving that the approximation of singular vectors converges exponentially. Finally, we propose a technique called affine low-rank approximation intended to increase the accuracy of classical low-rank approximation methods.
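The geometric sampling technique described above relies on cluster trees; as a simpler stand-in, the following Python sketch illustrates the general CUR idea with plain norm-based row/column sampling (an assumption made for the example, not the thesis's algorithm).

    import numpy as np

    def cur_approximation(A, k, rng=None):
        """Toy CUR decomposition: sample k columns and k rows with probability
        proportional to their squared norms, then solve for the core matrix U."""
        rng = np.random.default_rng(rng)
        pc = np.sum(A**2, axis=0); pc /= pc.sum()
        pr = np.sum(A**2, axis=1); pr /= pr.sum()
        cols = rng.choice(A.shape[1], size=k, replace=False, p=pc)
        rows = rng.choice(A.shape[0], size=k, replace=False, p=pr)
        C, R = A[:, cols], A[rows, :]
        U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
        return C, U, R

    # Low-rank test matrix: the relative approximation error should be tiny.
    A = np.random.randn(200, 5) @ np.random.randn(5, 150)
    C, U, R = cur_approximation(A, k=10, rng=0)
    print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))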
APA, Harvard, Vancouver, ISO, and other styles
36

Daher, Ali. "Application de la théorie des nombres à la conception optimale et à l'implémentation de très faible complexité des filtres numériques." Phd thesis, Université de Bretagne occidentale - Brest, 2009. http://tel.archives-ouvertes.fr/tel-00490369.

Full text
Abstract:
The main objective of our study is to develop fast algorithms for the optimal design and very low complexity implementation of digital filters. The chosen optimization criterion is the minimization of the mean squared error. We have thus studied and developed new synthesis algorithms for finite impulse response (FIR) filters associated with the two block-filtering techniques, overlap-save (OLS) and overlap-add (OLA). These two FIR filtering techniques process the signal block by block by means of the fast Fourier transform (FFT) and thereby reduce the arithmetic complexity of the convolution computations. The algorithms we propose are based on the development of the matrix model of the OLS and OLA structures and on the use of linear algebra properties, in particular those of circulant matrices. To further reduce the complexity and the filtering distortion, we have studied in depth the mathematical foundations of the Fermat Number Transform (FNT), which is finding increasingly diverse applications in signal processing. This transform, defined over a Galois field whose order is a Fermat number, is a particular case of the Number Theoretic Transforms (NTT). Compared to the FFT, the FNT allows computation without rounding errors as well as a large reduction in the number of multiplications required to carry out the convolution product. To highlight this transform, we have proposed and studied a new design of the OLS and OLA block filters using the FNT. We have then developed a very low complexity algorithm for the synthesis of the optimal filter, using the properties of circulant matrices that we have developed in the Galois field. The results of the fixed-point implementation of block filtering have shown that using the FNT instead of the FFT reduces the complexity and the filtering errors, as well as the cost of synthesizing the optimal filter.
APA, Harvard, Vancouver, ISO, and other styles
37

Daher, Ali. "Application de la théorie des nombres à la conception optimale et à l’implémentation de très faible complexité des filtres numériques." Brest, 2009. http://www.theses.fr/2009BRES2039.

Full text
Abstract:
L’objectif principal de notre étude est de développer des algorithmes rapides pour une conception optimale et une implantation de très faible complexité des filtres numériques. Le critère d’optimisation choisi est celui de la minimisation de l’erreur quadratique moyenne. Ainsi, nous avons étudié et développé de nouveaux algorithmes de synthèse des filtrés à réponse impulsionnelle finie (RIF) associés aux deux techniques de filtrage par blocs, overlap-save (OLS) et overlap-add (OLA). Ces deux techniques de filtrage RIF consistent à traiter le signal par blocs au moyen de la transformée de Fourier rapide (TFR) et permettent ainsi de réduire la complexité arithmétique des calculs de convolution. Les algorithmes que nous avons proposés sont basés sur le développement du modèle matriciel des structures OLS et OLA et sur l’utilisation des propriétés de l’algèbre linéaire, en particulier celles des matrices circulantes. Pour réduire davantage la complexité et la distorsion de filtrage, nous avons approfondi les bases mathématiques de la transformée en nombres de Fermat (FNT Fermat Number Transform) qui est amenée à trouver des applications de plus en plus diverses en traitement du signal. Cette transformée, définie sur un corps de Galois d’ordre égal à un nombre de Fermat, est un cas particulier des transformées en nombres entiers (NTT Number Theoretic Transform). Comparé à la TFR, la FNT permet un calcul sans erreur d’arrondi ainsi qu’une large réduction du nombre de multiplications nécessaires à la réalisation du produit de convolution. Pour mettre en évidence cette transformée, nous avons proposé et étudié une nouvelle conception des filtres blocs OLS et OLA mettant en oeuvre la FNT. Nous avons ensuite développé un algorithme de très faible complexité pour la synthèse du filtre optimal en utilisant les propriétés des matrices circulantes que nous avons développées dans le corps de Galois. Les résultats de l’implantation en virgule fixe du filtrage par blocs ont montré que l’utilisation de la FNT à la place de la TFR permettra de réduire la complexité et les erreurs de filtrage ainsi que le coût de synthèse du filtre optimal
The main objective of our study is to develop fast algorithms for an optimal design and an implementation with low complexity of digital filters. The optimization criterion is the mean squared error at the filter output. Thus, we have studied and developed new algorithms for synthesis of finite impulse response (FIR) filters related to the two techniques of block filtering, overlap-save (OLS) and overlap-add (OLA). These two filtering techniques consist in processing the signal by blocks and use the fast Fourier transform (FFT) to reduce the complexity of the convolution calculation. Our algorithms, based on the matrix model development of the OLA and OLS structures, use the linear algebra properties, especially those of circulant matrices. To further reduce the complexity and the distortion, we have looked further into the mathematical foundations of the Fermat Number Transform (FNT). This transform is a particular case of the Number Theoretic Transforms (NTT) defined in the Galois field. Compared to the FFT, the FNT allows a calculation without rounding error and a large reduction of the number of multiplications necessary to carry out the convolution product. To highlight this transform, we have proposed and studied a new design of OLS and OLA filtering using the FNT. We have developed a low complexity algorithm for the optimal synthesis of filters using the properties of circulant matrices that we have developed in the Galois field. The simulation results of the block filtering with fixed-point implementation have shown that the use of the FNT instead of the FFT reduces the complexity and the filtering errors, as well as the cost of optimal filter synthesis
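A compact Python sketch of the overlap-save block-filtering scheme that both the OLS and OLA designs described above build on; it uses NumPy's FFT rather than the FNT discussed in the abstract, and the block length and test filter are arbitrary choices for the example.

    import numpy as np

    def overlap_save(x, h, N=256):
        """Block FIR filtering by overlap-save: each length-N block is filtered by
        circular (FFT) convolution and the first M-1 wrapped samples are discarded.
        The result matches np.convolve(x, h)[:len(x)]."""
        M = len(h)
        step = N - (M - 1)
        H = np.fft.rfft(h, N)
        x_pad = np.concatenate([np.zeros(M - 1), x, np.zeros(step)])
        y = np.empty(0)
        for start in range(0, len(x), step):
            block = x_pad[start:start + N]
            yb = np.fft.irfft(np.fft.rfft(block, N) * H, N)
            y = np.concatenate([y, yb[M - 1:]])   # keep only the valid samples
        return y[:len(x)]

    x = np.random.randn(1000)
    h = np.ones(8) / 8.0                          # simple moving-average FIR filter
    print(np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)]))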
APA, Harvard, Vancouver, ISO, and other styles
38

Simard, Catherine. "Analyse d'algorithmes de type Nesterov et leurs applications à l'imagerie numérique." Mémoire, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/7714.

Full text
Abstract:
This thesis is first of all a compendium of the main variants of the worst-case optimal algorithm for solving unconstrained convex and strongly convex problems introduced by Yurii Nesterov in 1983 and 2004. These variants are presented in a unified framework and analyzed theoretically and empirically. The thesis analyzes the roles of the different parameters that make up the base algorithm, as well as the influence on the behaviour of the algorithms of the constants L and mu, respectively the Lipschitz constant of the gradient and the strong convexity constant of the objective function. A new hybrid variant is also presented, and we show empirically that it performs better than several variants in the majority of situations. The empirical comparison of the different variants on unconstrained problems uses a computational model based on the number of calls to a first-order oracle rather than on the number of iterations. Finally, the thesis closes with an application of these variants to three instances of digital imaging problems, together with an empirical analysis of the results obtained in comparison with the optimal FISTA method and the classical L-BFGS-B algorithm.
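For reference, here is a schematic Python version of the base scheme whose variants the thesis compares: Nesterov's accelerated gradient with the usual convex (growing momentum) and strongly convex (constant momentum) choices, written under standard smoothness assumptions. It is a sketch for illustration, not the thesis's own code.

    import numpy as np

    def nesterov_agd(grad, x0, L, mu=0.0, iters=500):
        """Nesterov's accelerated gradient for an L-smooth convex objective;
        uses constant momentum when mu > 0 (strong convexity) and the growing,
        FISTA-style momentum otherwise."""
        x = y = np.asarray(x0, dtype=float)
        if mu > 0:
            beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
        t = 1.0
        for _ in range(iters):
            x_new = y - grad(y) / L                  # gradient step from the extrapolated point
            if mu > 0:
                momentum = beta
            else:
                t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
                momentum = (t - 1) / t_new
                t = t_new
            y = x_new + momentum * (x_new - x)       # extrapolation (momentum) step
            x = x_new
        return x

    # Quadratic test problem: minimize 0.5 * x^T A x - b^T x.
    A = np.diag([1.0, 10.0, 100.0]); b = np.ones(3)
    sol = nesterov_agd(lambda z: A @ z - b, np.zeros(3), L=100.0, mu=1.0)
    print(np.allclose(sol, np.linalg.solve(A, b), atol=1e-6))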
APA, Harvard, Vancouver, ISO, and other styles
39

Kattami, Christalena. "Development of a construction methodology of goal directed, optimal complexity, flexible and task oriented (GOFT) training materials for novice computer users : application and evaluation in adults with mental health problems." Thesis, City University London, 1996. http://openaccess.city.ac.uk/7779/.

Full text
Abstract:
A number of information technology schemes have been developed in order to provide people with mental health problems with the opportunity to acquire skills in micro-computer technology. Even though positive results have been reported, a high incidence of dropouts at the beginning of the training has been found. The research is based on the assumption that, in order to be effective in fostering computer skills and confidence in adult novice users with mental health problems, a computer training method has to: (a) bridge the gap between the user's capacities, needs, and preferences and the demands of the computer interfaces and their real task applications; (b) consider the ways adult novice users prefer to learn and the skill acquisition theories; (c) facilitate a goal-directed interaction with the computer system; (d) maintain an optimal complexity level across training; and (e) allow flexibility of use. Based on the relevant literature, a methodology model and a set of design propositions and construction guidelines have been derived and implemented for the development of Goal-directed, Optimal-complexity, Flexible and Task-oriented (GOFT) training materials for adult, novice users with mental health problems. The GOFT training materials were based on three different models: one for the creation of a goal-directed instruction format, and the other two for the organisation of the training and the estimation of the difficulty level of each new computer operation or real task application. Evaluation of the use of the GOFT training materials by 34 adult, novice users (aged 18-51) with mental health problems revealed positive results. More specifically, the use of the GOFT training materials as compared to traditional methods resulted in a significant increase in the number of participants at the different training stages (85.3% versus 47.2% and 44.5% versus 22.2% at three and twelve months, respectively), in the perfect and regular attendance rates (44.12% versus 11.11% and 32.35% versus 16.67%) and in the performance level (means of 3.75 versus 2.67) of the users. The subjective evaluation by the users also revealed significant differences between the GOFT and traditional training materials. In their evaluation, the GOFT materials were rated significantly higher in terms of systematic arrangement, personal affect, understandability, task relevance, fitness, sense of control, confidence in using the mastered functions and support of a goal-directed learning approach.
APA, Harvard, Vancouver, ISO, and other styles
40

Bountourelis, Theologos. "Efficient PAC-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digraphs." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28144.

Full text
Abstract:
Thesis (M. S.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Reveliotis, Spyros; Committee Member: Ayhan, Hayriye; Committee Member: Goldsman, Dave; Committee Member: Shamma, Jeff; Committee Member: Zwart, Bert.
APA, Harvard, Vancouver, ISO, and other styles
41

Yapici, Yavuz. "A Bidirectional LMS Algorithm For Estimation Of Fast Time-varying Channels." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613220/index.pdf.

Full text
Abstract:
The effort to estimate unknown time-varying channels as a part of high-speed mobile communication systems is of interest especially for next-generation wireless systems. The high computational complexity of the optimal Wiener estimator usually makes its use impractical in fast time-varying channels. As a powerful candidate, the adaptive least mean squares (LMS) algorithm offers a computationally efficient solution with its simple first-order weight-vector update equation. However, the performance of the LMS algorithm deteriorates in time-varying channels as a result of the eigenvalue disparity, i.e., spread, of the input correlation matrix in such channels. In this work, we incorporate the LMS algorithm into the well-known bidirectional processing idea to produce an extension called the bidirectional LMS. This algorithm is shown to be robust to the adverse effects of time-varying channels such as large eigenvalue spread. The associated tracking performance is observed to be very close to that of the optimal Wiener filter in many cases, and the bidirectional LMS algorithm is therefore referred to as near-optimal. The computational complexity is observed to increase with the bidirectional employment of the LMS algorithm, but it is nevertheless significantly lower than that of the optimal Wiener filter. The tracking behavior of the bidirectional LMS algorithm is also analyzed and eventually a steady-state, step-size dependent mean square error (MSE) expression is derived for single-antenna flat-fading channels with various correlation properties. The aforementioned analysis is then generalized to include single-antenna frequency-selective channels, where the so-called independence assumption is no longer applicable due to the channel memory at hand, and then to multi-antenna flat-fading channels. The optimal selection of the step-size values is also presented using the results of the MSE analysis. The numerical evaluations show a very good match between the theoretical and the experimental results under various scenarios. The tracking analysis of the bidirectional LMS algorithm is believed to be novel in the sense that although there are several works in the literature on bidirectional estimation, none of them provides a theoretical analysis of the underlying estimators. An iterative channel estimation scheme is also presented as a more realistic application for each of the estimation algorithms and the channel models under consideration. As a result, the bidirectional LMS algorithm is observed to be very successful for this real-life application with its increased but still practical level of complexity, near-optimal tracking performance and robustness to imperfect initialization.
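A toy Python sketch of the idea described above for a single-tap (flat-fading) channel: a forward LMS tracker and a naive bidirectional variant that averages the forward and time-reversed trajectories. The combining rule, the step size and the channel model are assumptions made for illustration, not the thesis's algorithm.

    import numpy as np

    def lms_track(u, d, mu=0.1):
        """Scalar LMS tracking of a time-varying flat-fading tap h[n] from
        pilot symbols u[n] and observations d[n] = h[n]*u[n] + noise."""
        h_hat = np.zeros(len(d))
        h = 0.0
        for n in range(len(d)):
            e = d[n] - h * u[n]       # a priori estimation error
            h = h + mu * e * u[n]     # first-order LMS weight update
            h_hat[n] = h
        return h_hat

    def bidirectional_lms(u, d, mu=0.1):
        """Toy bidirectional LMS: average the forward and the time-reversed
        (backward) LMS trajectories; the thesis analyses more careful combining."""
        fwd = lms_track(u, d, mu)
        bwd = lms_track(u[::-1], d[::-1], mu)[::-1]
        return 0.5 * (fwd + bwd)

    # Slowly varying tap, BPSK pilots, light observation noise.
    rng = np.random.default_rng(1)
    t = np.arange(4000)
    h_true = np.cos(2 * np.pi * 0.0005 * t)
    u = rng.choice([-1.0, 1.0], size=len(t))
    d = h_true * u + 0.05 * rng.standard_normal(len(t))
    err = bidirectional_lms(u, d) - h_true
    print(round(float(np.mean(err[100:-100] ** 2)), 4))   # tracking MSE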
APA, Harvard, Vancouver, ISO, and other styles
42

Anil, Gautham. "A Fitness Function Elimination Theory for Blackbox Optimization and Problem Class Learning." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5106.

Full text
Abstract:
The modern view of optimization is that optimization algorithms are not designed in a vacuum, but can make use of information regarding the broad class of objective functions from which a problem instance is drawn. Using this knowledge, we want to design optimization algorithms that execute quickly (efficiency), solve the objective function with minimal samples (performance), and are applicable over a wide range of problems (abstraction). However, we present a new theory for blackbox optimization from which we conclude that of these three desired characteristics, only two can be maximized by any algorithm. We put forward an alternate view of optimization where we use knowledge about the problem class and samples from the problem instance to identify which problem instances from the class are being solved. From this Elimination of Fitness Functions approach, an idealized optimization algorithm that minimizes sample counts over any problem class, given complete knowledge about the class, is designed. This theory allows us to learn more about the difficulty of various problems, and we are able to use it to develop problem complexity bounds. We present general methods to model this algorithm over a particular problem class and gain efficiency at the cost of specifically targeting that class. This is demonstrated over the Generalized Leading-Ones problem and a generalization called LO'', and efficient algorithms with optimal performance are derived and analyzed. We also tighten existing bounds for LO'''. Additionally, we present a probabilistic framework based on our Elimination of Fitness Functions approach that clarifies how one can ideally learn about the problem class we face from the objective functions. This problem learning increases the performance of an optimization algorithm at the cost of abstraction. In the context of this theory, we re-examine the blackbox framework as an algorithm design framework and suggest several improvements to existing methods, including incorporating problem learning, not being restricted to the blackbox framework, and building parametrized algorithms. We feel that this theory and our recommendations will help a practitioner make substantially better use of all that is available in typical practical optimization algorithm design scenarios.
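To give a flavour of the elimination idea described above, the sketch below runs a toy elimination loop on a small LeadingOnes-style problem class with a hidden target string; the specific class, the query rule and the instance are assumptions made for illustration and are not taken from the dissertation.

    from itertools import product

    def leading_ones(x, z):
        """Generalized LeadingOnes: length of the longest prefix where x matches
        the hidden target string z (classic LeadingOnes is the case z = 11...1)."""
        k = 0
        for xi, zi in zip(x, z):
            if xi != zi:
                break
            k += 1
        return k

    def eliminate(n=4):
        """Elimination-of-fitness-functions flavour: keep every target string that
        is consistent with all (query, fitness) pairs seen so far, and stop once a
        single candidate remains."""
        candidates = [z for z in product((0, 1), repeat=n)]
        hidden = (1, 0, 1, 1)                       # the instance actually being solved
        queries = 0
        while len(candidates) > 1:
            x = candidates[0]                       # naive query choice, for illustration
            value = leading_ones(x, hidden)         # oracle answer
            queries += 1
            candidates = [z for z in candidates if leading_ones(x, z) == value]
        print(candidates[0], "found after", queries, "queries")

    eliminate()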
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
43

Villon, Pierre. "Contribution à l'optimisation." Compiègne, 1991. http://www.theses.fr/1991COMPDE95.

Full text
Abstract:
This thesis is divided into three relatively disjoint parts. The common denominator of all this work is optimization. The first and third parts deal with the solution of concrete problems and involve a substantial amount of modelling. The solution methods are original, which led, after the theoretical design phase, to a fairly long and delicate experimental tuning phase in order to arrive at efficient industrial computation codes whose limits of use are precisely identified. The second part presents an original approximation method that we have named diffuse approximation. It provides an estimate of a function and of its successive derivatives from the values of this function at a certain number of points. It was developed to solve partial differential equations without having to mesh the domain. After a theoretical study containing the main convergence theorems, we compare our results with those obtained by finite elements. The scientific disciplines addressed in this thesis are the following: 1) for the first part, the modelling draws on heat transfer, and the solution is based on numerical analysis and on the theory of optimal control of systems governed by partial differential equations; 2) the second part draws on approximation theory and numerical algorithmics; 3) the third part concerns discrete mathematics, with graph theory and combinatorial optimization as the tools used. This work has led to publications that are included in the thesis or referenced.
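A minimal Python sketch of the diffuse (moving least-squares) approximation idea described above: the value and first derivative of a function are estimated at a point from scattered nodes, without any mesh. The weight function, the polynomial basis and the parameter values are assumptions made for the example, not the thesis's formulation.

    import numpy as np

    def diffuse_approx_1d(x_nodes, f_nodes, x_star, radius=0.3):
        """Moving (diffuse) least-squares estimate of f(x*) and f'(x*): fit a local
        quadratic by weighted least squares around x*, with a Gaussian weight that
        effectively discards distant nodes."""
        dx = x_nodes - x_star
        w = np.exp(-(dx / radius) ** 2)                          # weight function
        P = np.column_stack([np.ones_like(dx), dx, dx ** 2])     # shifted basis
        A = P.T @ (w[:, None] * P)
        b = P.T @ (w * f_nodes)
        coeffs = np.linalg.solve(A, b)
        return coeffs[0], coeffs[1]            # value and first derivative at x*

    # Scattered nodes, no mesh: recover sin and cos at x* = 1.0.
    rng = np.random.default_rng(2)
    x_nodes = np.sort(rng.uniform(0, 2, 200))
    f_nodes = np.sin(x_nodes)
    val, der = diffuse_approx_1d(x_nodes, f_nodes, 1.0)
    print(round(val, 3), round(der, 3))   # close to sin(1) ~ 0.841 and cos(1) ~ 0.540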
APA, Harvard, Vancouver, ISO, and other styles
44

Rankine, Luke. "Newborn EEG seizure detection using adaptive time-frequency signal processing." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16200/.

Full text
Abstract:
Dysfunction in the central nervous system of the neonate is often first identified through seizures. The difficulty in detecting clinical seizures, which involves the observation of physical manifestations characteristic of newborn seizure, has placed greater emphasis on the detection of newborn electroencephalographic (EEG) seizure. The high incidence of newborn seizure has resulted in considerable mortality and morbidity rates in the neonate. Accurate and rapid diagnosis of neonatal seizure is essential for proper treatment and therapy. This has impelled researchers to investigate possible methods for the automatic detection of newborn EEG seizure. This thesis is focused on the development of algorithms for the automatic detection of newborn EEG seizure using adaptive time-frequency signal processing. The assessment of newborn EEG seizure detection algorithms requires large datasets of nonseizure and seizure EEG which are not always readily available and often hard to acquire. This has led to the proposition of realistic models of newborn EEG which can be used to create large datasets for the evaluation and comparison of newborn EEG seizure detection algorithms. In this thesis, we develop two simulation methods which produce synthetic newborn EEG background and seizure. The simulation methods use nonlinear and time-frequency signal processing techniques to allow for the demonstrated nonlinear and nonstationary characteristics of the newborn EEG. Atomic decomposition techniques incorporating redundant time-frequency dictionaries are exciting new signal processing methods which deliver adaptive signal representations or approximations. In this thesis we have investigated two prominent atomic decomposition techniques, matching pursuit and basis pursuit, for their possible use in an automatic seizure detection algorithm. In our investigation, it was shown that matching pursuit generally provided the sparsest (i.e. most compact) approximation for various real and synthetic signals over a wide range of signal approximation levels. For this reason, we chose MP as our preferred atomic decomposition technique for this thesis. A new measure, referred to as structural complexity, which quantifies the level or degree of correlation between signal structures and the decomposition dictionary, was proposed. Using the change in structural complexity, a generic method of detecting changes in signal structure was proposed. This detection methodology was then applied to the newborn EEG for the detection of state transition (i.e. nonseizure to seizure state) in the EEG signal. To optimize the seizure detection process, we developed a time-frequency dictionary that is coherent with the newborn EEG seizure state based on the time-frequency analysis of the newborn EEG seizure. It was shown that using the new coherent time-frequency dictionary and the change in structural complexity, we can detect the transition from nonseizure to seizure states in synthetic and real newborn EEG. Repetitive spiking in the EEG is a classic feature of newborn EEG seizure. Therefore, the automatic detection of spikes can be fundamental in the detection of newborn EEG seizure. The capacity of two adaptive time-frequency signal processing techniques to detect spikes was investigated. It was shown that a relationship between the EEG epoch length and the number of repetitive spikes governs the ability of both matching pursuit and the adaptive spectrogram in detecting repetitive spikes.
However, it was demonstrated that this relationship was less restrictive for the adaptive spectrogram, which was shown to outperform matching pursuit in detecting repetitive spikes. The method of adapting the window length associated with the adaptive spectrogram used in this thesis was the maximum correlation criterion. It was observed that for the time instants where signal spikes occurred, the optimal window lengths selected by the maximum correlation criterion were small. Therefore, spike detection directly from the adaptive window optimization method was demonstrated and also shown to outperform matching pursuit. An automatic newborn EEG seizure detection algorithm was proposed based on the detection of repetitive spikes using the adaptive window optimization method. The algorithm shows excellent performance with real EEG data. A comparison of the proposed algorithm with four well documented newborn EEG seizure detection algorithms is provided. The results of the comparison show that the proposed algorithm has significantly better performance than the existing algorithms (i.e., our proposed algorithm achieved a good detection rate (GDR) of 94% and a false detection rate (FDR) of 2.3%, compared with the leading algorithm which only produced a GDR of 62% and an FDR of 16%). In summary, the novel contribution of this thesis to the fields of time-frequency signal processing and biomedical engineering is the successful development and application of sophisticated algorithms based on adaptive time-frequency signal processing techniques to the solution of automatic newborn EEG seizure detection.
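As an illustration of the atomic decomposition step discussed above, here is a minimal matching pursuit sketch in Python over an overcomplete cosine dictionary; the dictionary, the test signal and the number of atoms are assumptions made for the example and do not reproduce the thesis's coherent time-frequency dictionary.

    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms=10):
        """Greedy matching pursuit: at each step pick the dictionary atom with the
        largest inner product with the residual and subtract its contribution.
        `dictionary` holds unit-norm atoms as columns."""
        residual = signal.astype(float)
        approx = np.zeros_like(residual)
        chosen = []
        for _ in range(n_atoms):
            corr = dictionary.T @ residual
            k = int(np.argmax(np.abs(corr)))
            approx += corr[k] * dictionary[:, k]
            residual -= corr[k] * dictionary[:, k]
            chosen.append(k)
        return approx, chosen

    # Overcomplete cosine dictionary; the test signal uses two of its atoms.
    N, K = 256, 1024
    t = np.arange(N)
    D = np.cos(np.outer(t, np.linspace(0.01, np.pi, K)))
    D /= np.linalg.norm(D, axis=0)
    sig = 3.0 * D[:, 100] - 2.0 * D[:, 700] + 0.01 * np.random.randn(N)
    approx, atoms = matching_pursuit(sig, D, n_atoms=5)
    print(atoms[:2], round(float(np.linalg.norm(sig - approx) / np.linalg.norm(sig)), 3))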
APA, Harvard, Vancouver, ISO, and other styles
45

Marchandon, Mathilde. "Vers la compréhension des séquences sismiques sur un système de failles : de l’observation spatiale à la modélisation numérique. Application à la séquence du Nord-Est Lut, Iran." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4055/document.

Full text
Abstract:
De nombreuses études montrent que les transferts de contrainte co- et postsismiques jouent un rôle majeur dans l’occurrence des séquences de séismes. Cependant, la grande majorité de ces études implique des systèmes de failles à la configuration géométrique simple (e.g. failles parallèles ou colinéaires). Dans cette thèse, nous étudions une séquence de séismes s’étant produite au sein d’un système de failles à la configuration géométrique plus complexe (i.e. failles conjuguées), la séquence du NE Lut (1939-1997, NE Iran), afin d’évaluer (1) si les transferts de contrainte favorisent la succession de séismes de la séquence et (2) s’ils permettent sur le long-terme de synchroniser les ruptures des failles du système. Pour cela, nous mesurons d’abord les déformations de surface produites par la séquence afin de mieux contraindre par la suite la modélisation des transferts de contrainte. A partir de la technique de corrélation subpixel d'images optiques, nous mesurons les champs de déplacements de surface horizontaux produits par les séismes de Khuli-Boniabad (Mw 7.1, 1979) et de Zirkuh (Mw 7.2, 1997). Nous montrons que ces séismes sont caractérisés par la rupture de plusieurs segments dont les limites sont corrélées avec les complexités géométriques des failles. Nous interprétons les différences de leurs caractéristiques de rupture (longueur de rupture, glissement moyen, nombre de segments rompus) comme étant dues à des différences de maturité des failles de Dasht-e-Bayaz et d’Abiz. Nous détectons également les déplacements produits par un séisme historique modéré, le séisme de Korizan (Mw 6.6, 1979). C’est la première fois que les déplacements produits par un séisme historique de si petite taille sont mesurés par corrélation d’images optiques. Ensuite, en combinant le champ de déplacements InSAR déjà publié avec les données optiques proche-faille précédemment acquises, nous estimons un nouveau modèle de source pour le séisme de Zirkuh (Mw 7.2, 1997). Nous montrons que les données proche-faille sont essentielles pour mieux contraindre la géométrie de la rupture et la distribution du glissement en profondeur. Le modèle estimé montre que le séisme de Zirkuh a rompu trois aspérités séparées par des barrières géométriques où les répliques du séisme se localisent. Seul le segment central de la faille présente un déficit de glissement en surface que nous interprétons comme étant dû à de la déformation distribuée dans des dépôts quaternaires non consolidés. Enfin, à partir des informations précédemment acquises, nous modélisons les transferts de contrainte au cours de la séquence du NE Lut. Nous montrons que ceux-ci ont favorisé l’occurrence de 7 des 11 séismes de la séquence et que modéliser précisément la géométrie des ruptures est essentiel à une estimation robuste des transferts de contrainte. De plus, nous montrons que l’occurrence du séisme de Zirkuh (Mw 7.2, 1992) est principalement favorisée par les séismes modérés de la séquence. Pour finir, la simulation d’une multitude de cycles sismiques sur les failles du NE Lut montre que les transferts de contrainte, en particulier les transferts postsismiques liés à la relaxation viscoélastique de la lithosphère, sont le principal processus permettant la mise en place répétée de séquences de séismes sur les failles du NE Lut. Enfin, d'après les simulations réalisées, l'ordre dans lequel se sont produits les séismes majeurs durant la séquence du NE Lut est assez exceptionnel
Many studies show that static and postseismic stress transfers play an important role in the occurrence of seismic sequences. However, a large majority of these studies involves seismic sequences that occurred within fault systems having simple geometric configurations (e.g. collinear or parallel fault systems). In this thesis, we study a seismic sequence that occurred within a complex fault system (i.e. a conjugate fault system), the NE Lut seismic sequence (1939-1997, NE Iran), in order to assess whether (1) stress transfers can explain the succession of earthquakes in the sequence and (2) stress transfers can lead to the synchronization of the NE Lut faults over multiple seismic cycles. To this end, we first measure the surface displacement field produced by the sequence in order to precisely constrain the stress transfer modeling afterwards. We use the optical correlation technique to measure the surface displacement fields of the Khuli-Boniabad (Mw 7.1, 1979) and Zirkuh (Mw 7.2, 1997) earthquakes. We find that these earthquakes broke several segments limited by geometrical complexities of the faults. We interpret the differences in failure style of these earthquakes (i.e. rupture length, mean slip and number of broken segments) as being due to different levels of structural maturity of the Dasht-e-Bayaz and Abiz faults. Furthermore, we succeeded in detecting the offsets produced by the 1979 Mw 6.6 Korizan earthquake. It is the first time that surface displacements for such a small historical earthquake have been measured using optical correlation. Then, combining previously published intermediate-field InSAR data and our near-field optical data, we estimate a new source model for the Zirkuh earthquake (Mw 7.2, 1997). We show that near-field data are crucial to better constrain the fault geometry and the slip distribution at depth. According to our source model, the Zirkuh earthquake broke three asperities separated by geometrical barriers where aftershocks are located. No shallow slip deficit is found for the overall rupture, except on the central segment where it could be due to off-fault deformation in Quaternary deposits. Finally, we use the information acquired in the first parts of this work to model the stress transfers within the NE Lut sequence. We find that 7 out of 11 earthquakes were triggered by the previous ones and that precise modeling of the rupture geometry is crucial to robustly estimate the stress transfers. We also show that the Zirkuh earthquake was mainly triggered by the moderate earthquakes of the NE Lut sequence. Lastly, the simulation of multiple seismic cycles on the NE Lut fault system shows that stress transfers, in particular postseismic stress transfers due to viscoelastic relaxation, enhance the number of seismic sequences and synchronize the ruptures of the faults. The simulations also show that the order in which the Mw>7 earthquakes occurred during the NE Lut sequence is quite exceptional.
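For context, the quantity at the heart of the stress-transfer modelling described above is the static Coulomb failure stress change on a receiver fault; the short sketch below uses one common sign convention and a typical effective friction value (assumptions for illustration, not the thesis's code or parameters).

    def coulomb_stress_change(delta_tau, delta_sigma_n, mu_eff=0.4):
        """Static Coulomb failure stress change on a receiver fault:
        dCFS = d(tau) + mu' * d(sigma_n), with shear stress resolved in the slip
        direction and normal stress taken positive in extension (unclamping).
        mu_eff is the effective friction coefficient (0.4 is a common assumption)."""
        return delta_tau + mu_eff * delta_sigma_n

    # A rupture raising shear stress by 0.2 MPa and unclamping the fault by
    # 0.1 MPa brings the receiver fault 0.24 MPa closer to failure.
    print(coulomb_stress_change(0.2, 0.1))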
APA, Harvard, Vancouver, ISO, and other styles
46

Chinot, Geoffrey. "Localization methods with applications to robust learning and interpolation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG002.

Full text
Abstract:
This PhD thesis deals with supervised learning and statistics. Its main goal is to use localization techniques to derive fast rates of convergence, that is, rates of order O(1/n), where n is the number of observations, with a particular focus on robust learning and interpolation problems. Such rates are not always attainable: they require conditions that control the variance of the problem, such as a Bernstein or margin condition. A robust estimator is an estimator with good theoretical guarantees under as few assumptions as possible. This question has become increasingly important in the current era of big data, where large datasets are very likely to be corrupted and reliable estimators are essential. We show that the well-known (regularized) empirical risk minimizer with a Lipschitz loss function is robust to heavy-tailed noise and to outliers in the labels; when the class of predictors is itself heavy-tailed, however, this estimator is no longer reliable. In that setting we construct minmax median-of-means (minmax-MOM) estimators, which are optimal when the data are heavy-tailed and possibly corrupted and, by construction, are also robust to adversarial contamination. Interpolation problems study learning procedures with zero training error. Surprisingly, in high dimension, interpolating the data does not necessarily imply overfitting: for the high-dimensional Gaussian linear model, we show that the minimum-norm interpolating estimator is consistent and even achieves fast rates.
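To make the median-of-means building block behind these minmax-MOM estimators concrete, here is a minimal sketch in Python; the block count, the random seed and the synthetic data are placeholders, and this is of course not the minmax-MOM procedure of the thesis itself:

    import numpy as np

    def median_of_means(x, n_blocks=10, seed=0):
        """Median of block-wise means: robust to heavy tails and a few outliers."""
        rng = np.random.default_rng(seed)
        x = rng.permutation(np.asarray(x, dtype=float))       # shuffle before forming blocks
        blocks = np.array_split(x, n_blocks)
        return float(np.median([b.mean() for b in blocks]))

    # A handful of gross outliers barely moves the MOM estimate, unlike the plain mean.
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(0.0, 1.0, 1000), [1e6, -1e6, 1e6]])
    print(np.mean(data), median_of_means(data))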
APA, Harvard, Vancouver, ISO, and other styles
47

Kapfunde, Goodwell. "Near-capacity sphere decoder based detection schemes for MIMO wireless communication systems." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/11350.

Full text
Abstract:
The search for the closest lattice point arises in many communication problems and is known to be NP-hard. The Maximum Likelihood (ML) detector is the optimal detector and yields an optimal solution to this problem, but at the expense of high computational complexity. Existing near-optimal methods are based on the Sphere Decoder (SD), which searches for lattice points confined in a hyper-sphere around the received point. The SD has emerged as a powerful means of finding the solution to the ML detection problem for MIMO systems; the bottleneck, however, lies in the determination of the initial radius. This thesis is concerned with detecting transmitted wireless signals in Multiple-Input Multiple-Output (MIMO) digital communication systems as efficiently and effectively as possible. The main objective is to design efficient ML detection algorithms for MIMO systems based on depth-first search (DFS), whilst taking into account the complexity and bit error rate performance requirements of advanced digital communication systems. The increased capacity and improved link reliability of MIMO systems, obtained without sacrificing bandwidth efficiency or transmit power, serve as the key motivation for studying MIMO detection schemes. The fundamental principles behind MIMO systems are explored in Chapter 2. A generic framework for linear and non-linear tree-search based detection schemes is then presented in Chapter 3; this paves the way for different methods of improving the achievable performance-complexity trade-off of all SD-based detection algorithms. Suboptimal detection schemes, in particular Minimum Mean Squared Error-Successive Interference Cancellation (MMSE-SIC), also serve as pre-processing and comparison techniques, whilst capacity-approaching Low Density Parity Check (LDPC) codes are employed to evaluate the performance of the proposed SD. Numerical and simulation results show that non-linear detection schemes yield better performance than linear schemes, at the expense of a slight increase in complexity. The first contribution of this thesis, in Chapter 4, is the design of a near-ML SD algorithm for MIMO digital communication systems that reduces the number of search operations within the sphere-constrained search space, and hence the detection complexity. In this design, the distance between the ML estimate and the received signal is used to control the lower and upper bound radii of the proposed SD and keep the search tractable. The detection method is based on the DFS algorithm and Successive Interference Cancellation (SIC), which ensures that the effects of dominant signals are effectively removed. Simulation results show that, by employing pre-processing detection schemes, the complexity of the proposed SD can be significantly reduced, though at a marginal performance penalty. The second contribution, in Chapter 5, is the determination of the initial sphere radius. The new initial radius proposed in this thesis is based on a variable parameter α, commonly chosen from experience so that at least one lattice point lies inside the sphere with high probability. Using this parameter, a new noise covariance matrix is defined that incorporates the number of transmit antennas, the energy of the transmitted symbols and the channel matrix. The new covariance matrix is then incorporated into the EMMSE model to generate an improved EMMSE estimate, and the EMMSE radius is found by computing the distance between the sphere centre and this improved estimate. The distance can be fine-tuned by varying the parameter α. The strength of the proposed method is that it reduces the complexity of the EMMSE pre-processing step to that of the Zero-Forcing (ZF) detector without significant performance degradation of the SD, particularly at low Signal-to-Noise Ratios (SNR). More specifically, simulation results show that using the EMMSE pre-processing step substantially improves performance whenever the complexity of the tree search is fixed or upper bounded. The final contribution, in Chapter 6, is the design of the LRAD-MMSE-SIC based SD detection scheme, which trades a modest increase in computational complexity for improved performance. The Lenstra-Lenstra-Lovász (LLL) algorithm is utilised to orthogonalise the channel matrix H into a new, near-orthogonal channel matrix H̄. The additional computational complexity introduced by the LLL algorithm is significantly decreased by employing a sorted QR decomposition of the transformed channel H̄ into a unitary matrix and an upper triangular matrix that retains the properties of the channel matrix. The SIC algorithm ensures that the interference due to dominant signals is minimised, while the LDPC code effectively stops the propagation of errors within the system. Simulations demonstrate that the proposed detector still approaches ML performance while requiring much lower complexity than the conventional SD.
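As a rough illustration of the radius-initialisation idea, the sketch below takes the distance between the received vector and a linear MMSE estimate as a starting sphere radius; the textbook MMSE filter, the 4x4 channel, the QPSK alphabet and the scaling parameter alpha are all illustrative assumptions rather than the EMMSE formulation of the thesis:

    import numpy as np

    def initial_radius(y, H, sigma2, alpha=1.0):
        """Sphere radius taken as the residual of a linear MMSE-style estimate."""
        nt = H.shape[1]
        W = np.linalg.solve(H.conj().T @ H + alpha * sigma2 * np.eye(nt), H.conj().T)
        x_soft = W @ y                          # soft linear estimate of the transmit vector
        return np.linalg.norm(y - H @ x_soft)   # lattice points inside this radius are the candidates

    rng = np.random.default_rng(1)
    H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=4) / np.sqrt(2)   # unit-energy QPSK
    sigma2 = 0.1
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
    y = H @ x + noise
    print(initial_radius(y, H, sigma2))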
APA, Harvard, Vancouver, ISO, and other styles
48

Keyder, Emil Ragip. "New Heuristics for Planning with Action Costs." Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7570.

Full text
Abstract:
Classical planning is the problem of finding a sequence of actions that takes an agent from an initial state to a desired goal situation, assuming deterministic action outcomes and perfect information. Satisficing planning seeks to quickly find low-cost solutions with no guarantees of optimality, and the most effective approach to it has proved to be heuristic search guided by non-admissible heuristics. In this thesis, we introduce several such heuristics that take action costs into account, and therefore try to minimize the more general metric of plan cost rather than plan length, and we investigate their properties and performance. In addition, we show how the problem of planning with soft goals can be compiled into a classical planning problem with costs, a setting in which cost-sensitive heuristics such as those presented here are essential.
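For a flavour of what a cost-sensitive heuristic looks like, here is a minimal sketch of the classic additive heuristic h_add computed over a delete relaxation; the tiny STRIPS-like encoding is a made-up example and the sketch is not the planner developed in the thesis:

    from math import inf

    def h_add(facts_init, goal, actions):
        """Additive heuristic: estimated cost to reach each fact under the delete relaxation.

        actions: list of (preconditions, effects, cost) with set-like pre/eff.
        Returns the summed estimated cost of the goal facts (inf if unreachable).
        """
        cost = {f: 0.0 for f in facts_init}
        changed = True
        while changed:                      # Bellman-Ford style fixpoint over fact costs
            changed = False
            for pre, eff, c in actions:
                if all(p in cost for p in pre):
                    new = c + sum(cost[p] for p in pre)
                    for f in eff:
                        if new < cost.get(f, inf):
                            cost[f] = new
                            changed = True
        return sum(cost.get(g, inf) for g in goal)

    # Toy task: pick up a key, then open a door; the two actions have different costs.
    actions = [
        (frozenset({"at_key"}), frozenset({"have_key"}), 1.0),
        (frozenset({"have_key"}), frozenset({"door_open"}), 5.0),
    ]
    print(h_add({"at_key"}, {"door_open"}, actions))  # 6.0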
APA, Harvard, Vancouver, ISO, and other styles
49

Roche, Jean-Christophe. "Localisation spatiale par subdivision pour l'accélération des calculs en radiométrie :." Phd thesis, Université Joseph Fourier (Grenoble), 2000. http://tel.archives-ouvertes.fr/tel-00006752.

Full text
Abstract:
The physics of light, together with the geometric tools of Computer-Aided Design, underlies the software used to simulate light phenomena for the manufacture of optical systems. Designing such software is far from easy for industry, and one of its main handicaps is that the simulations are very time-consuming. The main objective of this work is to investigate and develop faster computational algorithms. We first give a precise description of the photon-transport model in this context, consisting of the Boltzmann equation with boundary conditions, which reduces to the radiosity equation in the case of piecewise-homogeneous media. We then present the geometric tools used in the hybrid CSG (Constructive Solid Geometry) and BRep (Boundary Representation) modeller, together with the basic algorithms needed to find intersections between rays and geometric objects. Next, we survey methods for accelerating radiometric computations through spatial localization. Taking industrial constraints into account, such an acceleration method is then adapted to this context and implemented in an existing software environment; numerical experiments demonstrate the efficiency of the new libraries. Finally, a theoretical study of the time and memory complexity of spatial-localization methods, involving Minkowski sums of geometric sets, leads to a strategy of minimizing the time complexity in order to choose the localization parameters.
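A minimal sketch of the spatial-localization idea follows: objects are bucketed into a uniform grid from their bounding boxes, so that a ray only has to be tested against the objects of the cells it visits. The 2-D grid, the boxes and the coarse ray marching are deliberate simplifications (a real tracer would use a 3-D DDA traversal), not the CSG/BRep machinery of the thesis:

    from collections import defaultdict

    def build_grid(objects, cell=1.0):
        """Map each object's axis-aligned bounding box to the grid cells it overlaps."""
        grid = defaultdict(list)
        for obj_id, (xmin, ymin, xmax, ymax) in objects.items():
            for i in range(int(xmin // cell), int(xmax // cell) + 1):
                for j in range(int(ymin // cell), int(ymax // cell) + 1):
                    grid[(i, j)].append(obj_id)
        return grid

    def candidates_along_ray(grid, origin, direction, t_max=10.0, step=0.25, cell=1.0):
        """Collect objects in the cells visited by the ray; only these need exact intersection tests."""
        seen, hits = set(), []
        t = 0.0
        while t <= t_max:                   # coarse marching along the ray
            x = origin[0] + t * direction[0]
            y = origin[1] + t * direction[1]
            key = (int(x // cell), int(y // cell))
            if key not in seen:
                seen.add(key)
                hits.extend(grid.get(key, []))
            t += step
        return hits

    objects = {"A": (0.2, 0.2, 0.8, 0.8), "B": (3.1, 3.1, 3.9, 3.9)}
    grid = build_grid(objects)
    print(candidates_along_ray(grid, (0.0, 0.0), (1.0, 1.0)))  # both boxes lie along the diagonal ray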
APA, Harvard, Vancouver, ISO, and other styles
50

"Low Complexity Optical Flow Using Neighbor-Guided Semi-Global Matching." Master's thesis, 2017. http://hdl.handle.net/2286/R.I.44132.

Full text
Abstract:
Many real-time vision applications require accurate estimation of optical flow. This problem is quite challenging due to extremely high computation and memory requirements. This thesis focuses on designing low-complexity dense optical flow algorithms. First, a new method for optical flow that is based on Semi-Global Matching (SGM), a popular dynamic programming algorithm for stereo vision, is presented. In SGM, the disparity of each pixel is calculated by aggregating local matching costs over the entire image to resolve local ambiguity in texture-less and occluded regions. The proposed method, Neighbor-Guided Semi-Global Matching (NG-fSGM), achieves significantly lower complexity than SGM by 1) operating on a subset of the search space that has been aggressively pruned based on neighboring pixels' information, 2) using a simple cost aggregation function, and 3) approximating the aggregated cost array and embedding pixel-wise matching cost computation and flow computation in the aggregation. Evaluation on the Middlebury benchmark suite showed that, compared to a prior SGM extension for optical flow, the proposed basic NG-fSGM provides robust optical flow with a 0.53% accuracy improvement, a 40x reduction in the number of operations and a 6x reduction in memory size. To further reduce the complexity, a sparse-to-dense flow estimation method is proposed; the number of operations and the memory size are reduced by 68% and 47%, respectively, with only 0.42% accuracy degradation compared to the basic NG-fSGM. A parallel block-based version of NG-fSGM is also proposed, in which the image is divided into overlapping blocks that are processed in parallel to improve throughput, latency and power efficiency. To minimize the amount of overlap among blocks with minimal effect on accuracy, temporal information is used to estimate a flow map that guides flow vector selections for pixels along block boundaries. The proposed block-based NG-fSGM achieves a significant reduction in complexity with only 0.51% accuracy degradation compared to the basic NG-fSGM.
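To give an idea of the cost-aggregation step that NG-fSGM prunes, the sketch below implements the standard SGM recurrence along a single left-to-right path of one image row; the penalty values, image size and label count are illustrative, and the neighbor-guided pruning itself is not reproduced here:

    import numpy as np

    def aggregate_left_to_right(cost, P1=10.0, P2=150.0):
        """SGM aggregation along one path (left -> right) for a single image row.

        cost: (width, n_labels) matching-cost array.
        L(p, d) = C(p, d) + min(L(p-1, d), L(p-1, d±1) + P1, min_k L(p-1, k) + P2) - min_k L(p-1, k)
        """
        width, n_labels = cost.shape
        L = np.empty_like(cost)
        L[0] = cost[0]
        for p in range(1, width):
            prev = L[p - 1]
            prev_min = prev.min()
            shifted_up = np.concatenate(([np.inf], prev[:-1]))    # L(p-1, d-1)
            shifted_down = np.concatenate((prev[1:], [np.inf]))   # L(p-1, d+1)
            penalty = np.minimum.reduce([prev,
                                         shifted_up + P1,
                                         shifted_down + P1,
                                         np.full(n_labels, prev_min + P2)])
            L[p] = cost[p] + penalty - prev_min                   # subtraction keeps values bounded
        return L

    row_cost = np.random.default_rng(0).random((640, 32)) * 255   # synthetic matching costs
    labels = aggregate_left_to_right(row_cost).argmin(axis=1)     # best label per pixel on this path
    print(labels.shape)  # (640,)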
Master's thesis, Computer Science, 2017.
APA, Harvard, Vancouver, ISO, and other styles
