Dissertations / Theses on the topic 'Signal complexity'

Listed below are the top 50 dissertations and theses on the topic 'Signal complexity'.

1

Bull, David R. "Signal processing techniques with reduced computational complexity." Thesis, Cardiff University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lallo, Madeline M. "Good Vibrations: Signal Complexity in Schizocosa Ethospecies." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1554215678769319.

Full text
3

Shah, Kushal Yogeshkumar. "Computational Complexity of Signal Processing Functions in Software Radio." Cleveland State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=csu1292854939.

Full text
4

Wang, Tong. "Low-complexity signal processing algorithms for wireless sensor networks." Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/2844/.

Full text
Abstract:
Recently, wireless sensor networks (WSNs) have attracted a great deal of research interest because of their unique features, which allow a wide range of applications in military, environmental, health, and home settings. One of the most important constraints on WSNs is the low power consumption requirement, as sensor nodes carry limited and generally irreplaceable power sources. Low complexity and high energy efficiency are therefore the most important design characteristics for WSNs. In this thesis, we focus on the development of low-complexity signal processing algorithms for the physical-layer and cross-layer designs of WSNs. For the physical-layer design, low-complexity set-membership (SM) channel estimation algorithms for WSNs are investigated. Two matrix-based SM algorithms are developed for the estimation of the complex matrix channel parameters. The main goal is to reduce the computational complexity significantly compared with existing channel estimators and to extend the lifetime of the WSN by reducing its power consumption. For the cross-layer design, strategies to jointly design linear receivers and the power allocation parameters for WSNs via an alternating optimization approach are proposed. We first consider a two-hop wireless sensor network with multiple relay nodes under two design criteria: the first minimizes the mean-square error (MMSE) and the second maximizes the sum rate (MSR) of the wireless sensor network. Then, to increase the applicability of our investigation, we develop joint strategies for general multihop WSNs. These can be considered an extension of the strategies proposed for the two-hop case, with more involved mathematical derivations. Their major advantage is that they apply to general multihop WSNs, which provide larger coverage than two-hop WSNs.
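The set-membership idea behind the channel estimators described above can be illustrated with a generic textbook sketch: a set-membership NLMS adaptive filter that updates its weights only when the a-priori error exceeds an error bound γ, which is where the computational savings come from. This is not the thesis's matrix-based estimator; the filter order, bound, and signals below are illustrative.

```python
import numpy as np

def sm_nlms(x, d, order=4, gamma=0.05, eps=1e-8):
    """Set-membership NLMS: adapt the weights only when the a-priori
    error magnitude exceeds the bound gamma; otherwise skip the update."""
    w = np.zeros(order)
    updates = 0
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u                   # a-priori error
        if abs(e) > gamma:                 # data-selective update
            mu = 1.0 - gamma / abs(e)      # step size shrinks as |e| -> gamma
            w += mu * e * u / (u @ u + eps)
            updates += 1
    return w, updates
```

Because the step size collapses to zero as the error approaches the bound, most samples after convergence trigger no update at all, unlike plain NLMS, which adapts on every sample.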
5

Paraskevas, Alexandros. "Organisational crisis signal detection from a complexity thinking perspective." Thesis, Oxford Brookes University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515276.

Full text
6

Perry, Russell. "Low complexity adaptive equalisation for wireless applications." Thesis, University of Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389138.

Full text
7

Ma, Hannan. "Iterative row-column algorithms for two-dimensional intersymbol interference channel equalization: complexity reduction and performance enhancement." Pullman, Wash. : Washington State University, 2010. http://www.dissertations.wsu.edu/Thesis/Summer2010/h_ma_062110.pdf.

Full text
Abstract:
Thesis (M.S. in electrical engineering)--Washington State University, August 2010. Title from PDF title page (viewed on July 28, 2010). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 51).
8

Kale, Kaustubh R. "Low complexity, narrow baseline beamformer for hand-held devices." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0001223.

Full text
9

Sepehr, H. "Advanced adaptive signal processing techniques for low complexity speech enhancement applications." Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1306808/.

Full text
Abstract:
This thesis focuses on subband and multirate adaptive signal processing techniques for developing practical speech enhancement algorithms, and comprises research on three different speech enhancement applications. First, a novel method for attenuating a siren signal in an emergency telephony system is investigated. The proposed method is based on wavelet filter banks and a series of adaptive notch filters that detect and attenuate the siren signal with minimal effect on speech quality; test results show that this algorithm outperforms prior art solutions. Second, the effect of the time and frequency resolution of the filter bank used in a statistical single-source noise reduction algorithm is investigated. Following this study, a novel method for improving time-domain noise reduction is presented. The suggested method detects transient elements of the speech signal and applies a time-varying, signal-dependent filter bank. This structure provides high time resolution at transients in the noisy speech signal, so temporal smearing of the processed signal is avoided, and high frequency resolution elsewhere, yielding a well-performing noise reduction algorithm; benchmarking against a prior art algorithm and a commercially available noise reduction solution shows better performance for the proposed algorithm. Its time-domain nature gives low processing delay, making it suitable for applications with low latency requirements such as hearing aids. Third, a low-footprint, delayless subband adaptive filtering algorithm is proposed for applications with low processing delay requirements such as echo cancellation (EC) in telephony networks.
The suggested algorithm saves substantial memory and MIPS and converges significantly faster than prior art algorithms. Finally, challenges in implementing real-time audio signal processing algorithms on DSP chipsets (especially low-power DSPs) are briefly explained, and some applications of the research conducted in this thesis are presented.
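The siren-attenuation stage described above pairs wavelet filter banks with adaptive notch filters. As a minimal illustration of the notch-filter half only, here is a fixed second-order IIR notch (the thesis uses adaptive notches driven by siren detection; the sampling rate and frequencies below are hypothetical):

```python
import numpy as np

def notch_coeffs(f0, fs, r=0.98):
    """Second-order IIR notch at f0 Hz; pole radius r (< 1) sets the
    notch width. Zeros sit on the unit circle at +/- f0."""
    w0 = 2 * np.pi * f0 / fs
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1.0, -2 * r * np.cos(w0), r * r])
    b *= np.sum(a) / np.sum(b)   # normalise for unit gain at DC
    return b, a

def iir_filter(b, a, x):
    """Direct-form difference equation for the biquad above."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y[n] = acc
    return y
```

A narrow notch (r close to 1) removes the tone at f0 almost completely while leaving frequencies away from the notch nearly untouched, which is the property the siren canceller exploits.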
10

Page, Kevin J. "Reduced complexity interconnection and computation for digital signal processing in communications /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1997. http://wwwlib.umi.com/cr/ucsd/fullcit?p9804035.

Full text
11

Sjöberg, Frank. "High speed communication on twisted-pair wires and low complexity multiuser detectors." Licentiate thesis, Luleå tekniska universitet, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-18771.

Full text
Abstract:
This thesis deals with two different topics: high-speed communication on twisted-pair wires (digital subscriber lines) and low-complexity multiuser detectors. The major part of the thesis concerns a technique for high-speed communication over the telephone network called Very high bit rate Digital Subscriber Line (VDSL). VDSL is not yet standardized, but it is intended to offer bit rates up to 52 Mbit/s on twisted-pair wires. An important problem in VDSL is crosstalk between wire pairs, especially Near-End Crosstalk (NEXT). A novel duplex method, called Zipper, that mitigates NEXT is presented herein. Zipper is a flexible duplex method with high duplex efficiency that is compatible with existing services. It is based on Discrete Multi-Tone (DMT) modulation and uses different subcarriers in the two transmission directions, relying on an additional cyclic extension to ensure orthogonality between the directions. Zipper performs best when all transmitters in the access network are synchronized, but it can also operate in an asynchronous mode with only a small loss in performance. Another important issue for VDSL is Radio Frequency Interference (RFI): the copper wires can act as large antennas and hence transmit and receive radio signals. Herein the problem of radio-frequency signals interfering with VDSL systems (RFI ingress) is addressed. The proposed method for suppressing the RFI works in the frequency domain of the DMT receiver and can be used by any DMT-based VDSL system. By modeling the RFI and measuring the disturbance on some unmodulated subcarriers, we can extrapolate and subtract the disturbance on all the other subcarriers. For a typical scenario with an average signal-to-noise ratio (SNR) of 30 dB without RFI, about 20 dB can be lost due to RFI, but with the presented RFI canceller this SNR loss is reduced to less than 1 dB.
The last part of the thesis deals with low-complexity multiuser detection in a direct-sequence code-division multiple-access system. The Maximum Likelihood Sequence Detector (MLSD) gives very good performance but is very computationally complex. The detector presented herein is a simple threshold detector that makes MLSD decisions on some, but not necessarily all, bits. A pipelined structure of the detector is presented, which is attractive from an implementation point of view since it allows parallel processing of the data. Using a single-user matched-filter detector as a post-processor to take care of the previously undetected bits, a complete multiuser detector with very low complexity is achieved. This detector outperforms the decorrelator receiver for a limited number of simultaneous users, e.g., up to 25 users with a spreading factor of 127.
12

Brooks, Duncan John. "Adaptive algorithms for low complexity equalizers in mobile communications." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312445.

Full text
13

Jia, Yugang. "Stochastic approximations for reduced complexity signal processing algorithms in MIMO wireless communications." Thesis, University of Bristol, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435729.

Full text
14

Zhu, Ying. "Signal detection on two-dimensional intersymbol interference channels: correlated sources and reduced complexity algorithms." [Pullman, Wash.] : Washington State University, 2008. http://www.dissertations.wsu.edu/Dissertations/Fall2008/y_zhu_081408.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Washington State University, December 2008. Title from PDF title page (viewed on Sept. 23, 2008). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 83-90).
15

Drumright, Thomas Alexander. "Monte Carlo and reduced complexity methods for signal detection in wireless communication systems /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2004. http://uclibs.org/PID/11984.

Full text
16

Zhao, Jing. "Information theoretic approach for low-complexity adaptive motion estimation." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013068.

Full text
17

Kei, Chun-Ling. "Efficient complexity reduction methods for short-frame iterative decoding /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20KEI.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002. Includes bibliographical references (leaves 86-91). Also available in electronic version. Access restricted to campus users.
18

Koutrouli, Eleni. "Low Complexity Beamformer structures for application in Hearing Aids." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17612.

Full text
Abstract:
Background noise is particularly damaging to speech intelligibility for people with hearing loss. Reducing noise in hearing aids is a problem of great importance and great difficulty, and over the years many solutions and algorithms have been implemented to address it. Beamforming has been used for a long time and has therefore been extensively researched. The aim here is to implement a speech enhancement and noise reduction algorithm by studying the performance of Minimum Variance Distortionless Response (MVDR) beamforming with three- and four-microphone arrays against the conventional two-microphone array. Using multiple microphones makes it possible to achieve spatial selectivity, the ability to select signals based on their angle of incidence, and so improve the performance of noise reduction beamformers. This thesis proposes the use of beamforming, an existing technique, to create a new way to reduce noise transmitted by hearing aids. To reduce the complexity of the system, we use hybrid cascades: simpler beamformers of two inputs each, connected in series. The configurations considered are a three-microphone linear array (monaural beamformer); a three-microphone configuration with a two-microphone linear array and the third microphone in the ear (monaural beamformer); a three-microphone configuration with a two-microphone linear array and the third microphone on the contralateral ear (binaural beamformer); and, finally, four-microphone configurations. We also investigate the performance improvement of beamformers with more than two microphones, for the different configurations, against the two-microphone reference. This is measured using objective metrics such as the amount of noise suppression, target energy loss, output SNR, speech intelligibility index, and speech quality evaluation.
These objective measurements are good indicators of subjective performance. In this project, we show that most hybrid structures can perform satisfactorily compared to the full-complexity beamformer. The low-complexity beamformer is designed with a fixed target location (azimuth), with weights calibrated for a target signal located in front of the listener and a diffuse noise field. Both second- and third-order beamformers are tested in different acoustic scenarios, such as a car, a meeting room, a party, and a restaurant. In those scenarios, the target signal does not arrive at the hearing aid directly from the front of the listener and the noise field is not always diffuse. We thoroughly investigate the performance limitations in that case and how well the different cascades perform, identifying some very critical factors that affect the performance of the fixed beamformer across all the hybrid structures examined. Finally, we show that lower-complexity cascades for both second- and third-order beamformers can perform similarly to the full-complexity beamformers when tested on a set of multiple Head Related Transfer Functions (HRTFs) corresponding to a real head shape.
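The MVDR beamformer at the core of the abstract above has a closed-form weight vector, w = R⁻¹d / (dᴴR⁻¹d), which keeps unit gain toward the target steering vector d while minimising total output power under the noise-plus-interference covariance R. The sketch below is a generic narrowband version for a uniform linear array, not the hearing-aid cascade from the thesis; the array geometry and noise model are illustrative.

```python
import numpy as np

def steering_vector(theta, n_mics, spacing=0.5):
    """Far-field steering vector for a uniform linear array;
    spacing is in wavelengths, theta in radians from broadside."""
    k = np.arange(n_mics)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

def mvdr_weights(R, d):
    """MVDR: minimise w^H R w subject to w^H d = 1,
    giving w = R^{-1} d / (d^H R^{-1} d)."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / np.vdot(d, Rinv_d)
```

The distortionless constraint w^H d = 1 holds exactly by construction, while energy from other directions, weighted by its power in R, is driven toward zero.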
19

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

Full text
20

Li, Xiaojing. "Relating Brain Signal Complexity, Cognitive Performance and APOE Polymorphism – the Case of Young Healthy Adults." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21383.

Full text
Abstract:
The human brain is a complex dynamical system whose complexity can be highly functional, characterizing cognitive abilities or mental disorders. The APOE ɛ4 allele is a well-known genetic risk factor for the development of Alzheimer's Disease and cognitive decline in later life. The main goal of this dissertation is to investigate the links between brain signal complexity, APOE genotype, and cognitive performance among young healthy adults within the framework of individual differences. After validating, in the first study, the reliability of Residue Iteration Decomposition (RIDE), a method for analyzing brain signals, I investigated in the second study how APOE genotypes are associated with brain signal complexity, measured with Multiscale Entropy (MSE), and with cognitive ability. The second study demonstrated that APOE ε4 is associated with higher entropy at scales 1-4 and lower entropy at scale 5 and above, especially at frontal scalp regions and in the eyes-open condition; in addition, ε4 carriers show a stronger drop in MSE from the eyes-closed to the eyes-open condition than non-carriers. The ε4 association with cognitive performance was complex, but essentially ε4 appears to be associated with worse cognitive performance among less-educated people, whereas no such association appeared among the more highly educated. The third study then connected MSE with a different cognitive domain, face and object cognition abilities. We showed that (1) increased MSE in the eyes-closed condition at all scales is associated with better cognitive performance, and (2) increased MSE at higher scales was associated with tighter coupling between the RIDE-extracted single-trial stimulus evaluation speed and reaction time.
To summarize, the results connect brain signal complexity, APOE genotype, and cognitive behavior among young healthy adults, providing a deeper understanding of brain-behavior relationships and, potentially, a basis for early AD diagnosis before cognitive decline is evident.
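Multiscale Entropy (MSE), the complexity measure used throughout the abstract above, coarse-grains the signal at each time scale and computes the sample entropy of each coarse-grained series. The sketch below follows the standard Costa-style procedure, not the author's exact pipeline; parameters m = 2 and r = 0.2·std are the usual textbook defaults.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance <= r) also
    match for m + 1 points."""
    x = np.asarray(x, float)
    def match_count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):
            dist = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(dist <= r)
        return c
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales, m=2, r_factor=0.2):
    """Coarse-grain the series at each scale (non-overlapping means),
    then compute SampEn with r fixed from the original series' std."""
    x = np.asarray(x, float)
    r = r_factor * np.std(x)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r))
    return out
```

For white noise, MSE characteristically decreases with scale, since coarse-graining averages away variance; structured signals decay more slowly, which is what makes the scale profile informative.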
21

El, Sayed Hussein Jomaa Mohamad. "Signal processing of electroencephalograms with 256 sensors in epileptic children." Thesis, Angers, 2019. http://www.theses.fr/2019ANGE0028.

Full text
Abstract:
In this thesis, we develop signal processing methods and apply them to electroencephalography (EEG) signals recorded from epileptic patients. The aim is to quantify the state of the patient with epilepsy and to study the progress of the neurological disorder over time.
The methods we developed are based on entropy measures. Building on previous permutation entropy methods, we introduce the multivariate Improved Weighted Multi-scale Permutation Entropy (mvIWMPE); applied to EEG signals of both healthy and epileptic children, it gives promising results. We also introduce a new multivariate approach to sample entropy and, when it is tested against the existing multivariate approach, find that it handles larger numbers of channels much better. We further introduce a time-varying time-frequency complexity measure based on Singular Value Decomposition and Rényi entropy. These measures are applied to the EEG of epileptic children before and 4-6 weeks after treatment, and the results agree with the hospital's clinical diagnosis of whether the patients improved. The final part of the thesis focuses on functional connectivity measures. We introduce a new functional connectivity method based on mvIWMPE and mutual information and apply it to EEG signals of healthy children at rest. Using network measures, we identify brain regions that are active in networks previously found using functional magnetic resonance imaging. The method is also used to study the networks of epileptic children at several points throughout treatment.
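The permutation-entropy family that mvIWMPE extends starts from the basic Bandt-Pompe measure: the Shannon entropy of the distribution of ordinal patterns in the signal. A sketch of that baseline follows; the thesis's multivariate, improved, weighted, multiscale variant adds several layers on top of this.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy: Shannon entropy (bits) of the
    distribution of length-`order` ordinal patterns, optionally
    normalized by log2(order!) so the result lies in [0, 1]."""
    x = np.asarray(x, float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))    # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), float) / n
    h = -np.sum(p * np.log2(p))
    return h / np.log2(factorial(order)) if normalize else h
```

A monotone ramp contains a single ordinal pattern and scores 0, while white noise visits all order! patterns nearly uniformly and scores close to 1, which is why the measure tracks signal regularity.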
22

Li, Xiaojing. "Relating brain signal complexity, cognitive performance and APOE polymorphisms : the case of young healthy human adults." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/670.

Full text
Abstract:
The human brain is a complex dynamical system whose complexity can be highly functional, characterizing cognitive abilities or mental disorders. The APOE ɛ4 allele is a well-known genetic risk factor for the development of Alzheimer's Disease and cognitive decline in later life. However, there are no robust conclusions about the APOE genotype-phenotype association among young healthy adults. The main goal of this study is to investigate the links between brain signal complexity, APOE genotype, and cognitive performance among young adults within the framework of individual differences. Before turning to the main topic, the first study assessed the reliability of Residue Iteration Decomposition (RIDE), a method for analyzing brain signals that is applied in the main parts of the thesis. Using a dataset independent of the main topic, I demonstrated that, compared with the conventional analysis method, the RIDE-reconstructed event-related potentials (ERPs), including the N400 component reflecting the evaluation of semantic incongruities during social communication, could more sensitively characterize people across a spectrum of autistic levels. The second study investigated how individual differences in APOE genotypes are associated with (1) brain signal complexity measured with Multiscale Entropy (MSE) and (2) cognitive ability in a specific domain, working memory capacity. Using Structural Equation Modelling (SEM), we showed that APOE ε4 is associated with higher entropy at scales 1-4 and lower entropy at scale 5 and above, especially at frontal scalp regions and in the eyes-open condition; in addition, we showed a stronger drop in MSE from the eyes-closed to the eyes-open condition among ε4 carriers than non-carriers. The ε4 association with cognitive performance was complex, but essentially ε4 appears to be associated with worse cognitive performance among less-educated people, whereas no such association appeared among the more highly educated.
The third study connected MSE with a different cognitive domain, face and object cognition abilities. We showed that (1) increased MSE at all scales is associated with better cognitive performance, in terms of both the diffusion process during perceptual decision making and task accuracy, although the association was only consistent in the eyes-closed condition; and (2) increased MSE at higher scales (7 or 8) was associated with tighter coupling between the RIDE-extracted single-trial stimulus evaluation speed at the neural level and reaction time at the behavioral level. To summarize, the results of my doctoral study connect brain signal complexity, APOE genotype, and cognitive behavior among young healthy adults, providing a deeper understanding of brain-behavior relationships and, potentially, a basis for early AD diagnosis when cognitive decline is not yet evident.
23

Streich, Sebastian. "Music complexity: a multi-faceted description of audio content." Doctoral thesis, Universitat Pompeu Fabra, 2007. http://hdl.handle.net/10803/7545.

Full text
Abstract:
This thesis proposes a set of algorithms that can be used to compute estimates of music complexity facets from musical audio signals, focusing on aspects of acoustics, rhythm, timbre, and tonality. Music complexity is considered at the coarse level of common agreement among human listeners; the target is to obtain complexity judgments through automatic computation that resemble a naive listener's point of view. The motivation for the presented research lies in enhancing human interaction with digital music collections. As we discuss, there is a variety of tasks to consider, such as collection visualization, playlist generation, and automatic music recommendation. Through the music complexity estimates provided by the described algorithms, we gain access to a level of semantic music description that allows for novel and interesting solutions to these tasks.
24

Osman, Ammar. "Low-complexity OFDM transceiver design for UMTS-LTE." Thesis, Blekinge Tekniska Högskola, Avdelningen för signalbehandling, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3716.

Full text
Abstract:
Over the past two decades, mobile wireless communication systems have been growing quickly and continuously. Standardization bodies, together with wireless researchers and mobile operators around the globe, have therefore been constantly working on new technical specifications to meet the demand created by this rapid growth. The 3rd Generation Partnership Project (3GPP), one of the largest such standardization bodies, works on developing the current third-generation (3G) mobile telecommunication systems towards the future fourth generation. The demand for higher data rates was the main driver behind an evolutionary technology on the path to 4G mobile systems; this evolution of the current 3G UMTS systems was named E-UTRA/UTRAN Long Term Evolution (LTE) by the 3GPP. This thesis research was carried out at the Telecommunications Research Center (ftw.) in Vienna, within the framework of the C10 project "Wireless Evolution Beyond 3G". One field of research within this project focuses on the OFDM modulation schemes discussed under the new LTE evolution of the UMTS mobile networks. This thesis therefore focuses mainly on analyzing the new requirements and evaluating them by designing a low-complexity OFDM-based UMTS-LTE transceiver, studying the feasibility of this technology by means of simulation.
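The OFDM modulation at the heart of the LTE physical layer can be sketched in a few lines: an IFFT maps frequency-domain symbols onto orthogonal subcarriers, and a cyclic prefix longer than the channel's delay spread turns linear convolution into circular convolution, so each subcarrier sees a flat, single-tap channel. The FFT size and prefix length below are illustrative, not the actual LTE numerology.

```python
import numpy as np

def ofdm_modulate(symbols, n_fft=64, cp_len=16):
    """One OFDM symbol: IFFT of the frequency-domain symbols, then a
    cyclic prefix (the last cp_len time samples copied to the front)."""
    assert len(symbols) == n_fft
    time_sig = np.fft.ifft(symbols) * np.sqrt(n_fft)   # unitary scaling
    return np.concatenate([time_sig[-cp_len:], time_sig])

def ofdm_demodulate(rx, n_fft=64, cp_len=16):
    """Drop the cyclic prefix and return to the frequency domain."""
    return np.fft.fft(rx[cp_len:cp_len + n_fft]) / np.sqrt(n_fft)
```

After the prefix is discarded, the received subcarriers are simply Y_k = H_k X_k, so equalization reduces to one complex division per subcarrier, which is the source of OFDM's low receiver complexity.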
APA, Harvard, Vancouver, ISO, and other styles
25

Liao, Yizheng. "Phase and Frequency Estimation: High-Accuracy and Low- Complexity Techniques." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/283.

Full text
Abstract:
The estimation of the frequency and phase of a complex exponential in additive white Gaussian noise (AWGN) is a fundamental and well-studied problem in signal processing and communications. A variety of approaches to this problem, distinguished primarily by estimation accuracy, computational complexity, and processing latency, have been developed. One class of approaches is based on the Fast Fourier Transform (FFT) due to its connections with the maximum likelihood estimator (MLE) of frequency. This thesis compares several FFT-based approaches to the MLE in terms of their estimation accuracy and computational complexity. While FFT-based frequency estimation tends to be very accurate, the computational complexity of the FFT and the latency associated with performing these computations after the entire signal has been received can be prohibitive in some scenarios. Another class of approaches that addresses some of these shortcomings is based on linear regression of samples of the instantaneous phase of the observation. Linear-regression-based techniques have been shown to be very accurate at moderate to high signal-to-noise ratios and have the additional benefit of low computational complexity and low latency due to the fact that the processing can be performed as the samples arrive. These techniques, however, typically require the computation of four-quadrant arctangents, which must be approximated to retain low computational complexity. This thesis proposes a new frequency and phase estimator based on simple estimates of the zero-crossing times of the observation. An advantage of this approach is that it does not require arctangent calculations. Simulation results show that the zero-crossing frequency and phase estimator can provide high estimation accuracy, low computational complexity, and low processing latency, making it suitable for real-time applications.
Accordingly, this thesis also presents a real-time implementation of the zero-crossing frequency and phase estimator in the context of a time-slotted round-trip carrier synchronization system for distributed beamforming. The experimental results show this approach can outperform a Phase Locked Loop (PLL) implementation of the same distributed beamforming system.
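As an illustrative aside, the core idea behind the zero-crossing estimator described in this abstract — recovering frequency from the spacing of interpolated zero-crossing times, with no arctangent evaluations — can be sketched for a real sinusoid as follows. This is a minimal sketch under our own assumptions (function name, linear interpolation, clean noiseless input); it is not the thesis's actual algorithm.

```python
import numpy as np

def zero_crossing_freq(x, fs):
    """Estimate a real sinusoid's frequency from its zero-crossing times.
    Crossing instants are found by linear interpolation between the two
    samples that straddle each sign change; adjacent crossings of a
    sinusoid are half a period apart."""
    sign = np.signbit(x)
    idx = np.flatnonzero(sign[:-1] != sign[1:])   # sample before each crossing
    frac = x[idx] / (x[idx] - x[idx + 1])         # sub-sample offset in [0, 1)
    t = (idx + frac) / fs                         # interpolated crossing times
    return 1.0 / (2.0 * np.mean(np.diff(t)))

# Clean 5 Hz tone sampled at 1 kHz: the estimate lands very close to 5 Hz.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f_hat = zero_crossing_freq(np.cos(2 * np.pi * 5.0 * t + 0.3), fs)
```

The reciprocal of twice the mean crossing spacing is the frequency estimate; in noise, averaging over many crossings is what gives the approach its accuracy.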
APA, Harvard, Vancouver, ISO, and other styles
26

Conzelmann, Holger. "Mathematical modeling of biochemical signal transduction pathways in mammalian cells a domain-oriented approach to reduce combinatorial complexity /." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-38819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ohlsson, Henrik. "Studies on Design and Implementation of Low-Complexity Digital Filters." Doctoral thesis, Linköping : Dept. of Electrical Engineering, Univ, 2005. http://www.ep.liu.se/diss/science_technology/09/49/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Flanery, Trevor H. "Planning Local and Regional Development: Exploring Network Signal, Sites, and Economic Opportunity Dynamics." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/82907.

Full text
Abstract:
Urban development planning efforts are challenged to enhance coevolving spatial and socioeconomic systems that exist and interact at multiple scales. While network and simulation sciences have created new tools and theories suitable for urban studies, models of development are not yet suitable for local and regional development planning. A case study of the City of Roanoke, Virginia, grounded network development theories of scaling, engagement, and collective perception function, as well as network forms. By advancing urban development network theory, frameworks for urban simulation like agent-based models take more coherent shape. This in turn better positions decision-making and planning practitioners to adapt, transform, or renew local network-oriented development systems, and conceptualize a framework for computational urban development planning for regions and localities.<br>Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
29

Waters, Deric Wayne. "Signal Detection Strategies and Algorithms for Multiple-Input Multiple-Output Channels." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7514.

Full text
Abstract:
In today's society, a growing number of users are demanding more sophisticated services from wireless communication devices. In order to meet these rising demands, it has been proposed to increase the capacity of the wireless channel by using more than one antenna at the transmitter and receiver, thereby creating multiple-input multiple-output (MIMO) channels. Using MIMO communication techniques is a promising way to improve wireless communication technology because in a rich-scattering environment the capacity increases linearly with the number of antennas. However, increasing the number of transmit antennas also increases the complexity of detection at an exponential rate. So while MIMO channels have the potential to greatly increase the capacity of wireless communication systems, they also force a greater computational burden on the receiver. Even suboptimal MIMO detectors that have relatively low complexity have been shown to achieve unprecedentedly high spectral efficiency. However, their performance is far inferior to that of the optimal MIMO detector, meaning they require more transmit power. The fact that the optimal MIMO detector is an impractical solution due to its prohibitive complexity leaves a performance gap between detectors that require reasonable complexity and the optimal detector. The objective of this research is to bridge this gap and provide new solutions for managing the inherent performance-complexity trade-off in MIMO detection. The optimally-ordered decision-feedback (BODF) detector is a standard low-complexity detector. The contributions of this thesis can be regarded as ways to either improve its performance or reduce its complexity - or both. We propose a novel algorithm to implement the BODF detector based on noise-prediction. This algorithm is more computationally efficient than previously reported implementations of the BODF detector.
Another benefit of this algorithm is that it can be used to easily upgrade an existing linear detector into a BODF detector. We propose the partial decision-feedback detector as a strategy to achieve nearly the same performance as the BODF detector, while requiring nearly the same complexity as the linear detector. We propose the family of Chase detectors that allow the receiver to trade performance for reduced complexity. By adapting some simple parameters, a Chase detector may achieve near-ML performance or have near-minimal complexity. We also propose two new detection strategies that belong to the family of Chase detectors called the B-Chase and S-Chase detectors. Both of these detectors can achieve near-optimal performance with less complexity than existing detectors. Finally, we propose the double-sorted lattice-reduction algorithm that achieves near-optimal performance with near-BODF complexity when combined with the decision-feedback detector.
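For concreteness, the family of ordered decision-feedback (nulling-and-cancelling) detectors to which the BODF detector belongs can be sketched as below. This is a generic zero-forcing, V-BLAST-style sketch under our own naming and a real-valued BPSK toy model; the thesis's noise-prediction implementation and the Chase variants are not reproduced here.

```python
import numpy as np

def odf_detect(H, y, symbols=np.array([-1.0, 1.0])):
    """Ordered zero-forcing decision-feedback detection for y = H x + n,
    with x drawn from `symbols` (BPSK here). Each pass picks the stream
    with the lowest noise amplification, slices it, cancels its
    contribution from y, and shrinks the channel matrix."""
    H = H.copy()
    y = y.astype(float).copy()
    cols = list(range(H.shape[1]))
    x_hat = np.zeros(H.shape[1])
    while cols:
        G = np.linalg.pinv(H)                                # ZF filter rows
        k = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))   # best stream first
        z = G[k] @ y                                         # filtered stream k
        s = symbols[np.argmin(np.abs(symbols - z))]          # hard decision
        x_hat[cols[k]] = s
        y = y - H[:, k] * s                                  # cancel decided stream
        H = np.delete(H, k, axis=1)
        del cols[k]
    return x_hat

# Noiseless 3x2 toy channel: the detector recovers the transmitted vector.
H = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.4]])
x = np.array([1.0, -1.0])
x_hat = odf_detect(H, H @ x)
```

The ordering step (choosing the pseudo-inverse row of minimum norm) is what distinguishes the ordered detector from plain successive cancellation, and is the stage the noise-prediction reformulation above makes cheaper.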
APA, Harvard, Vancouver, ISO, and other styles
30

Sengupta, Arindam. "Multidimensional Signal Processing Using Mixed-Microwave-Digital Circuits and Systems." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1407977367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Dikbas, Salih. "A low-complexity approach for motion-compensated video frame rate up-conversion." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42730.

Full text
Abstract:
Video frame rate up-conversion is an important issue for multimedia systems in achieving better video quality and motion portrayal. Motion-compensated methods offer better quality interpolated frames since the interpolation is performed along the motion trajectory. In addition, computational complexity, regularity, and memory bandwidth are important for a real-time implementation. Motion-compensated frame rate up-conversion (MC-FRC) is composed of two main parts: motion estimation (ME) and motion-compensated frame interpolation (MCFI). Since ME is an essential part of MC-FRC, a new fast motion estimation (FME) algorithm capable of producing sub-sample motion vectors at low computational complexity has been developed. Unlike existing FME algorithms, the developed algorithm considers the low-complexity sub-sample accuracy in designing the search pattern for FME. The developed FME algorithm is designed in such a way that the block distortion measure (BDM) is modeled as a parametric surface in the vicinity of the integer-sample motion vector; this modeling enables low-complexity sub-sample motion estimation without pixel interpolation. MC-FRC needs more accurate motion trajectories for better video quality; hence, a novel true-motion estimation (TME) algorithm aimed at tracking the projected object motion has been developed for video processing applications, such as motion-compensated frame interpolation (MCFI), deinterlacing, and denoising. The developed TME algorithm considers not only the computational complexity and regularity but also memory bandwidth. TME is obtained by imposing implicit and explicit smoothness constraints on the block matching algorithm (BMA). In addition, it employs a novel adaptive clustering algorithm to keep the complexity at reasonable levels yet enable exploiting more spatiotemporal neighbors. To produce better quality interpolated frames, dense motion fields at the interpolation instants are obtained for both forward and backward motion vectors (MVs); then, bidirectional motion compensation using forward and backward MVs is applied by mixing both elegantly.
APA, Harvard, Vancouver, ISO, and other styles
32

Diouf, Cherif El Valid. "Modélisation comportementale de drivers de ligne de transmission pour des besoins d'intégrité du signal et de compatibilité électromagnétique." Thesis, Brest, 2014. http://www.theses.fr/2014BRES0040/document.

Full text
Abstract:
Integrated circuit miniaturization, high operating frequencies, lower supply voltages and high-density integration make digital signals propagating on interconnects highly vulnerable to degradation. Assessing EMC and signal integrity in the early stages of the design flow requires accurate interconnect models allowing for efficient time-domain simulations. In this context, our work addressed the issue of behavioral modeling of transmission-line buffers, and particularly that of drivers. The main result is an original modeling approach partially based on Volterra-Laguerre series. The black-box models we developed have a fairly simple implementation in SPICE, thus allowing very good portability. They are easy to identify and have a parametric complexity allowing a large gain in simulation time with respect to transistor driver models. In addition, the developed methods allow more accurate modeling of the output port's nonlinear dynamics and a more general management of inputs. A very good reproduction of driver behaviour under overclocking conditions provides a significant advantage over standard IBIS models.
APA, Harvard, Vancouver, ISO, and other styles
33

Rankine, Luke. "Newborn EEG seizure detection using adaptive time-frequency signal processing." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16200/.

Full text
Abstract:
Dysfunction in the central nervous system of the neonate is often first identified through seizures. The difficulty in detecting clinical seizures, which involves the observation of physical manifestations characteristic of newborn seizure, has placed greater emphasis on the detection of newborn electroencephalographic (EEG) seizure. The high incidence of newborn seizure has resulted in considerable mortality and morbidity rates in the neonate. Accurate and rapid diagnosis of neonatal seizure is essential for proper treatment and therapy. This has impelled researchers to investigate possible methods for the automatic detection of newborn EEG seizure. This thesis is focused on the development of algorithms for the automatic detection of newborn EEG seizure using adaptive time-frequency signal processing. The assessment of newborn EEG seizure detection algorithms requires large datasets of nonseizure and seizure EEG which are not always readily available and often hard to acquire. This has led to the proposition of realistic models of newborn EEG which can be used to create large datasets for the evaluation and comparison of newborn EEG seizure detection algorithms. In this thesis, we develop two simulation methods which produce synthetic newborn EEG background and seizure. The simulation methods use nonlinear and time-frequency signal processing techniques to allow for the demonstrated nonlinear and nonstationary characteristics of the newborn EEG. Atomic decomposition techniques incorporating redundant time-frequency dictionaries are exciting new signal processing methods which deliver adaptive signal representations or approximations. In this thesis we have investigated two prominent atomic decomposition techniques, matching pursuit and basis pursuit, for their possible use in an automatic seizure detection algorithm. In our investigation, it was shown that matching pursuit generally provided the sparsest (i.e.
most compact) approximation for various real and synthetic signals over a wide range of signal approximation levels. For this reason, we chose matching pursuit as our preferred atomic decomposition technique for this thesis. A new measure, referred to as structural complexity, which quantifies the level or degree of correlation between signal structures and the decomposition dictionary, was proposed. Using the change in structural complexity, a generic method of detecting changes in signal structure was proposed. This detection methodology was then applied to the newborn EEG for the detection of state transition (i.e. nonseizure to seizure state) in the EEG signal. To optimize the seizure detection process, we developed a time-frequency dictionary that is coherent with the newborn EEG seizure state based on the time-frequency analysis of the newborn EEG seizure. It was shown that using the new coherent time-frequency dictionary and the change in structural complexity, we can detect the transition from nonseizure to seizure states in synthetic and real newborn EEG. Repetitive spiking in the EEG is a classic feature of newborn EEG seizure. Therefore, the automatic detection of spikes can be fundamental in the detection of newborn EEG seizure. The capacity of two adaptive time-frequency signal processing techniques to detect spikes was investigated. It was shown that a relationship between the EEG epoch length and the number of repetitive spikes governs the ability of both matching pursuit and the adaptive spectrogram in detecting repetitive spikes. However, it was demonstrated that this relationship was less restrictive for the adaptive spectrogram, which was shown to outperform matching pursuit in detecting repetitive spikes. The method of adapting the window length associated with the adaptive spectrogram used in this thesis was the maximum correlation criterion.
It was observed that for the time instants where signal spikes occurred, the optimal window lengths selected by the maximum correlation criterion were small. Therefore, spike detection directly from the adaptive window optimization method was demonstrated and also shown to outperform matching pursuit. An automatic newborn EEG seizure detection algorithm was proposed based on the detection of repetitive spikes using the adaptive window optimization method. The algorithm shows excellent performance with real EEG data. A comparison of the proposed algorithm with four well documented newborn EEG seizure detection algorithms is provided. The results of the comparison show that the proposed algorithm has significantly better performance than the existing algorithms (our proposed algorithm achieved a good detection rate (GDR) of 94% and a false detection rate (FDR) of 2.3%, compared with the leading algorithm, which produced a GDR of only 62% and an FDR of 16%). In summary, the novel contribution of this thesis to the fields of time-frequency signal processing and biomedical engineering is the successful development and application of sophisticated algorithms based on adaptive time-frequency signal processing techniques to the solution of automatic newborn EEG seizure detection.
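As a toy illustration of the atomic decomposition machinery this abstract builds on, a generic matching pursuit over a dictionary of unit-norm atoms can be written as follows. The function name and the trivial orthonormal dictionary in the example are ours; the coherent time-frequency seizure dictionary developed in the thesis is not reproduced.

```python
import numpy as np

def matching_pursuit(x, D, n_iter):
    """Greedy matching pursuit: at each step pick the unit-norm atom
    (column of D) most correlated with the residual, record its
    coefficient, and subtract its contribution."""
    r = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = D.T @ r                       # correlation with every atom
        k = int(np.argmax(np.abs(c)))     # best-matching atom
        coeffs[k] += c[k]
        r = r - c[k] * D[:, k]            # peel it off the residual
    return coeffs, r

# Toy case: x is an exact 2-sparse combination of orthonormal atoms,
# so two pursuit iterations drive the residual to zero.
x = np.array([3.0, 0.0, -2.0, 0.0])
coeffs, resid = matching_pursuit(x, np.eye(4), n_iter=2)
```

How quickly the residual energy decays reflects how well the dictionary matches the signal's structures, which is the intuition behind the structural complexity measure described above.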
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Xiaojing [Verfasser], Florian [Gutachter] Schmiedek, and Alan Chun-Nang [Gutachter] Wong. "Relating Brain Signal Complexity, Cognitive Performance and APOE Polymorphism – the Case of Young Healthy Adults / Xiaojing Li ; Gutachter: Florian Schmiedek, Alan Chun-Nang Wong." Berlin : Humboldt-Universität zu Berlin, 2020. http://d-nb.info/1211648419/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Krupnik, Viktoria, Ingo Nietzold, Bengt Bartsch, and Beate Rassler. "The effect of motor-respiratory coordination on the precision of tracking movements: influence of attention, task complexity and training." European journal of applied physiology (2015) 115, 12, S. 2543-2556, 2015. https://ul.qucosa.de/id/qucosa%3A14101.

Full text
Abstract:
Purpose: We investigated motor-respiratory coordination (MRC) in visually guided forearm tracking movements focusing on two main questions: (1) Does attentional demand, training or complexity of the tracking task have an effect on the degree of MRC? (2) Does MRC impair the precision of those movements? We hypothesized that (1) enhanced attention to the tracking task and training increase the degree of MRC while higher task complexity would reduce it, and (2) MRC impairs tracking precision. Methods: Thirty-five volunteers performed eight tracking trials with several conditions: positive (direct) signal–response relation (SRR), negative (inverse) SRR to increase task complexity, specific instruction for enhanced attention to maximize tracking precision (“strict” instruction), and specific instruction that tracking precision would not be evaluated (“relaxed” instruction). The trials with positive and negative SRR were performed three times each to study training effects. Results: While the degree of MRC remained in the same range throughout all experimental conditions, a switch in phase-coupling pattern was observed. In conditions with positive SRR or with relaxed instruction, we found one preferred phase-relationship per period. With higher task complexity (negative SRR) or increased attentional demand (strict instruction), a tighter coupling pattern with two preferred phase-relationships per period was adopted. Our main result was that MRC improved tracking precision in all conditions except for that with relaxed instruction. Reduction of amplitude errors mainly contributed to this precision improvement. Conclusion: These results suggest that attention devoted to a precision movement intensifies its phase-coupling with breathing and enhances MRC-related improvement of tracking precision.
APA, Harvard, Vancouver, ISO, and other styles
36

Senot, Maxime. "Modèle géométrique de calcul : fractales et barrières de complexité." Phd thesis, Université d'Orléans, 2013. http://tel.archives-ouvertes.fr/tel-00870600.

Full text
Abstract:
Geometric models of computation perform computations by means of geometric primitives. Among them, the signal machine model stands out for its simplicity as well as for its power to carry out many computations efficiently. We propose here to illustrate and demonstrate this ability, in particular in the case of massively parallel processes. We first show, through the study of fractals, that signal machines are capable of a massive and parallel use of space. A modular geometric programming method is then proposed for building machines from basic geometric components — the modules — equipped with certain functionalities. This method is particularly well suited to the design of parallel geometric computations. Finally, applying this method and using some of the fractal structures results in a geometric resolution of hard problems such as the Boolean satisfiability problems SAT and Q-SAT. These problems, as well as several of their variants, are solved by signal machines with a time complexity intrinsic to the model, called collision depth, which is polynomial, thus illustrating the efficiency and parallel computing power of signal machines.
APA, Harvard, Vancouver, ISO, and other styles
37

Farquharson, Maree Louise. "Estimating the parameters of polynomial phase signals." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16312/.

Full text
Abstract:
Nonstationary signals are common in many environments such as radar, sonar, bioengineering and power systems. The nonstationary nature of the signals found in these environments means that classical spectral analysis techniques are not appropriate for estimating the parameters of these signals. Therefore it is important to develop techniques that can accommodate nonstationary signals. This thesis seeks to achieve this by, firstly, modelling each component of the signal as having a polynomial phase and, secondly, developing techniques for estimating the parameters of these components. Several approaches can be used for estimating the parameters of polynomial phase signals, each with varying degrees of success. Criteria to consider in potential estimation algorithms are (i) the signal-to-noise ratio (SNR) threshold of the algorithm, (ii) the amount of computation required for running the algorithm, and (iii) the closeness of the resulting estimates' mean-square errors to the minimum theoretical bound. These criteria will be used to compare the new techniques developed in this thesis with existing techniques. The literature on polynomial phase signal estimation highlights the recurring trade-off between the accuracy of the estimates and the amount of computation required. For example, the Maximum Likelihood (ML) method provides near-optimal estimates above threshold, but also incurs a heavy computational cost for higher order phase signals. On the other hand, multi-linear techniques such as the high-order ambiguity function (HAF) method require little computation, but have a significantly higher SNR threshold than the ML method. Of the existing techniques, the cubic phase (CP) function method is a promising technique because it provides an attractive SNR threshold and computational complexity trade-off. For this reason, the analysis techniques developed in this thesis will be derived from the CP function.
A limitation of the CP function is its inability to accurately process phase orders greater than three. Therefore, the first novel contribution of this thesis develops a broadened class of discrete-time higher order phase (HP) functions to address this limitation. This broadened class is achieved by providing a multi-linear extension of the CP function. Monte Carlo simulations are performed to demonstrate the statistical advantage of the HP functions compared to the HAFs. A first order statistical analysis of the HP functions is presented; this analysis verifies the simulation results. The next novel contribution is a technique called the lower SNR cubic phase function (LCPF) method. It is an extension of the CP function that enables operation at lower SNRs. The improvement in SNR threshold performance is achieved by coherently integrating the CP function over a compact interval in the two-dimensional CP function space. The computation of the new algorithm is quite moderate, especially when compared to the ML method. Above threshold, the LCPF method's parameter estimates are asymptotically efficient. Monte Carlo simulation results are presented, and a threshold analysis of the algorithm closely predicts the thresholds observed in these results. The next original contribution of this research involves extending the LCPF method so that it is able to process multicomponent cubic phase signals and higher order phase signals. The LCPF method is extended to higher orders by applying a windowing technique, as opposed to adjusting the order of the kernel as implemented in the HP function method. Monte Carlo simulations demonstrate the extension of the LCPF method for processing higher order phase signals and multicomponent cubic phase signals. Finally, these estimation techniques are applied to real-world scenarios in the fields of power systems analysis, neuroethology and speech analysis.
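For reference, the cubic phase function that this line of work starts from has the standard form CP(n, ω) = Σ_m s(n+m) s(n−m) e^{−jωm²}; for a cubic phase signal exp(j(a₁n + a₂n² + a₃n³)), its magnitude peaks at ω = 2a₂ + 6a₃n. The sketch below uses our own naming, a noiseless signal, and a naive grid search rather than the LCPF refinement proposed in the thesis.

```python
import numpy as np

def cp_function(s, n, omegas):
    """Cubic phase function CP(n, w) = sum_m s[n+m] * s[n-m] * exp(-j w m^2),
    evaluated at time index n over a grid of frequency-rate values w
    (the lag m runs as far as the signal boundaries allow)."""
    m_max = min(n, len(s) - 1 - n)
    m = np.arange(m_max + 1)
    prod = s[n + m] * s[n - m]
    return np.array([np.abs(np.sum(prod * np.exp(-1j * w * m * m)))
                     for w in omegas])

# Noiseless cubic phase signal s[n] = exp(j(a1*n + a2*n^2 + a3*n^3)):
# |CP(n, w)| peaks at w = 2*a2 + 6*a3*n, here evaluated at n = 128.
a1, a2, a3 = 0.2, 0.01, 1e-6
n_axis = np.arange(257)
s = np.exp(1j * (a1 * n_axis + a2 * n_axis**2 + a3 * n_axis**3))
omegas = np.linspace(0.0, 0.05, 5001)
w_hat = omegas[np.argmax(cp_function(s, 128, omegas))]
```

Because s(n+m)s(n−m) collapses the cubic phase into a pure quadratic in m, the peak location directly reveals the instantaneous frequency rate at n; the bilinear kernel is what limits the basic CP function to phase orders up to three.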
APA, Harvard, Vancouver, ISO, and other styles
38

Bourbia, Salma. "Algorithmes de prise de décision pour la "cognitive radio" et optimisation du "mapping" de reconfigurabilité de l'architecture de l'implémentation numérique." Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-00931350.

Full text
Abstract:
In this thesis we develop a decision-making method for a cognitive radio receiver that adapts dynamically to its environment. The approach we adopt is based on statistical modelling of the radio environment. By statistically characterizing the observations provided by the environment sensors, we establish statistical decision rules that take into account the observation errors on the radio metrics, which helps minimize the rate of erroneous decisions. Through this thesis we also aim to use these intelligent decision-making capabilities to help reduce the computational complexity of the receiving equipment. Indeed, we identify reconfiguration decision scenarios that limit the presence of certain components or functions of the receive chain. In particular, we treat two decision scenarios that adapt, respectively, the presence of the equalization and beamforming functions at the receiver. Limiting these two operations reduces the computational complexity of the receive chain without degrading its performance. Finally, we integrate our statistical-modelling decision method, together with the two decision scenarios treated, into a cognitive radio management architecture, in order to highlight the control of intelligence and reconfiguration in a radio device.
APA, Harvard, Vancouver, ISO, and other styles
39

Zhang, Wei Zhang. "Wireless receiver designs from information theory to VLSI implementation /." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31817.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.<br>Committee Chair: Ma, Xiaoli; Committee Member: Anderson, David; Committee Member: Barry, John; Committee Member: Chen, Xu-Yan; Committee Member: Kornegay, Kevin. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
40

Velmurugan, Rajbabu. "Implementation Strategies for Particle Filter based Target Tracking." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14611.

Full text
Abstract:
This thesis contributes new algorithms and implementations for particle filter-based target tracking. From an algorithmic perspective, modifications that improve a batch-based acoustic direction-of-arrival (DOA), multi-target, particle filter tracker are presented. The main improvements are reduced execution time and increased robustness to target maneuvers. The key feature of the batch-based tracker is an image template-matching approach that handles data association and clutter in measurements. The particle filter tracker is compared to an extended Kalman filter (EKF) and a Laplacian filter and is shown to perform better for maneuvering targets. Using an approach similar to the acoustic tracker, a radar range-only tracker is also developed. This includes developing the state update and observation models, and proving observability for a batch of range measurements. From an implementation perspective, this thesis provides new low-power and real-time implementations for particle filters. First, to achieve a very low-power implementation, two mixed-mode implementation strategies that use analog and digital components are developed. The mixed-mode implementations use analog, multiple-input translinear element (MITE) networks to realize nonlinear functions. The power dissipated in the mixed-mode implementation of a particle filter-based, bearings-only tracker is compared to a digital implementation that uses the CORDIC algorithm to realize the nonlinear functions. The mixed-mode method that uses predominantly analog components is shown to provide a factor of twenty improvement in power savings compared to a digital implementation. Next, real-time implementation strategies for the batch-based acoustic DOA tracker are developed. The characteristics of the digital implementation of the tracker are quantified using digital signal processor (DSP) and field-programmable gate array (FPGA) implementations.
The FPGA implementation uses a soft-core or hard-core processor to implement the Newton search in the particle proposal stage. A MITE implementation of the nonlinear DOA update function in the tracker is also presented.
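As background, the bootstrap (sequential importance resampling) particle filter at the heart of such trackers can be illustrated in one dimension as follows. This is a textbook sketch with our own model and parameter choices; the batch-based DOA tracker and the mixed-mode MITE implementations of the thesis are far more elaborate.

```python
import numpy as np

def bootstrap_pf(ys, n_particles=2000, q=0.1, r=0.5, seed=0):
    """Minimal 1-D bootstrap (SIR) particle filter for the toy model
        x[k] = x[k-1] + N(0, q^2)   (random-walk state)
        y[k] = x[k]   + N(0, r^2)   (noisy observation).
    Returns the weighted (MMSE) state estimate at every step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 5.0, n_particles)            # diffuse prior particles
    estimates = []
    for y in ys:
        x = x + rng.normal(0.0, q, n_particles)      # propagate through dynamics
        w = np.exp(-0.5 * ((y - x) / r) ** 2)        # Gaussian likelihood weights
        w /= w.sum()
        estimates.append(np.sum(w * x))              # weighted (MMSE) estimate
        x = rng.choice(x, size=n_particles, p=w)     # multinomial resampling
    return np.array(estimates)

# Track a constant state x = 2 observed 30 times (noise-free measurements
# keep the toy deterministic apart from the seeded particle noise).
est = bootstrap_pf(np.full(30, 2.0))
```

The propagate/weight/resample loop is exactly the structure that the DSP, FPGA and mixed-mode realizations discussed above must map to hardware, with the nonlinear likelihood evaluation being the costly step.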
APA, Harvard, Vancouver, ISO, and other styles
41

Von, Eden Elric Omar. "Optical arbitrary waveform generation using chromatic dispersion in silica fibers." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/24780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Udupa, Pramod. "Algorithmes parallèles et architectures évolutives de faible complexité pour systèmes optiques OFDM cohérents temps réel." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S039/document.

Full text
Abstract:
In this thesis, low-complexity algorithms and architectures for CO-OFDM systems are explored. First, low-complexity algorithms for estimation of timing and carrier frequency offset (CFO) in a dispersive channel are studied. A novel low-complexity timing synchronization algorithm, which can withstand a large amount of dispersive delay, is proposed and compared with previous proposals. Then, the problem of realizing a low-complexity parallel architecture is studied. A generalized scalable parallel architecture, which can be used to realize any auto-correlation algorithm, is proposed. It is then extended to handle multiple parallel samples from the ADC and provide outputs that match the input ADC rate. The scalability of the architecture for higher numbers of parallel outputs and different kinds of auto-correlation algorithms is explored. An algorithm-architecture approach is then applied to the entire CO-OFDM transceiver chain. At the transmitter side, the radix-2² algorithm for the IFFT is chosen and a parallel Multipath Delay Commutator (MDC) Feed-forward (FF) architecture is designed, which consumes fewer resources than radix-2/4 MDC FF architectures. At the receiver side, an efficient algorithm for integer CFO estimation is adopted and realized without the use of complex multipliers. The reduction in complexity is achieved through efficient architectures for timing synchronization, the FFT and integer CFO estimation. A fixed-point analysis of the entire transceiver chain is carried out to find fixed-point-sensitive blocks that significantly affect the bit error rate (BER). The proposed algorithms are validated in optical experiments with the help of an arbitrary waveform generator (AWG) at the transmitter and a digital storage oscilloscope (DSO) and Matlab at the receiver. BER plots are used to show the validity of the system built. A hardware implementation of the proposed synchronization algorithm is validated on a real-time FPGA platform.
At the receiver side, efficient algorithm for Integer CFO estimation is adopted and efficiently realized with- out the use of complex multipliers. Reduction in complexity is achieved due to efficient architectures for timing synchronization, FFT and Integer CFO estimation. Fixed-point analysis for the entire transceiver chain is done to find fixed-point sensitive blocks, which affect bit error rate (BER) significantly. The algorithms proposed are validated using opti- cal experiments by the help of arbitrary waveform generator (AWG) at the transmitter and digital storage oscilloscope (DSO) and Matlab at the receiver. BER plots are used to show the validity of the system built. Hardware implementation of the proposed synchronization algorithm is validated using real-time FPGA platform
APA, Harvard, Vancouver, ISO, and other styles
43

Britton, Douglas Frank. "Generalized Gaussian Decompositions for Image Analysis and Synthesis." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14054.

Full text
Abstract:
This thesis presents a new technique for performing image analysis, synthesis, and modification using a generalized Gaussian model. The joint time-frequency characteristics of a generalized Gaussian are combined with the flexibility of the analysis-by-synthesis (ABS) decomposition technique to form the basis of the model. The good localization properties of the Gaussian make it an appealing basis function for image analysis, while the ABS process provides a more flexible representation with enhanced functionality. ABS was first explored in conjunction with sinusoidal modeling of speech and audio signals [George87]. A 2D extension of the ABS technique is developed here to perform the image decomposition. This model forms the basis for new approaches in image analysis and enhancement. The major contribution is made in the resolution enhancement of images generated using coherent imaging modalities such as Synthetic Aperture Radar (SAR) and ultrasound. The ABS generalized Gaussian model is used to decouple natural image features from the speckle and facilitate independent control over feature characteristics and speckle granularity. This has the beneficial effect of increasing the perceived resolution and reducing the obtrusiveness of the speckle while preserving the edges and the definition of the image features. As a consequence of its inherent flexibility, the model does not preclude image processing applications for non-coherent image data. This is illustrated by its application as a feature-extraction tool for a FLIR imagery complexity measure.
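The 2D analysis-by-synthesis idea can be sketched as a toy greedy loop over a hypothetical dictionary of separable 2D Gaussian atoms (an illustrative simplification with made-up atom positions, not the thesis's generalized-Gaussian model):

```python
import numpy as np

def gaussian_atom(shape, cy, cx, sy, sx):
    """Separable 2D Gaussian basis function, unit-normalized."""
    y = np.arange(shape[0])[:, None]
    x = np.arange(shape[1])[None, :]
    g = np.exp(-0.5 * (((y - cy) / sy) ** 2 + ((x - cx) / sx) ** 2))
    return g / np.linalg.norm(g)

def abs_decompose(img, atoms, n_terms):
    """Greedy analysis-by-synthesis: at each step pick the atom that best
    matches the current residual, then subtract its contribution."""
    residual = img.astype(float).copy()
    model = np.zeros_like(residual)
    for _ in range(n_terms):
        coeffs = [np.sum(residual * a) for a in atoms]
        k = int(np.argmax(np.abs(coeffs)))
        model += coeffs[k] * atoms[k]
        residual -= coeffs[k] * atoms[k]
    return model, residual

# Toy image: two Gaussian blobs, recovered from a small grid dictionary
shape = (32, 32)
atoms = [gaussian_atom(shape, cy, cx, 3, 3)
         for cy in range(4, 32, 8) for cx in range(4, 32, 8)]
img = 5 * gaussian_atom(shape, 12, 20, 3, 3) + 3 * gaussian_atom(shape, 20, 4, 3, 3)
model, residual = abs_decompose(img, atoms, n_terms=2)
print(np.linalg.norm(residual) < 1e-2 * np.linalg.norm(img))  # True on this toy case
```

In the toy case the two picked atoms reconstruct the image almost exactly; real ABS additionally refines atom parameters rather than choosing from a fixed grid.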
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Houhao. "Reduced-complexity noncoherent receiver for GMSK signals." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ63034.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Arbi, Tarak. "Les constellations tournées pour les réseaux sans fil et l'internet des objets sous-marins." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAE002.

Full text
Abstract:
The exponential growth of the number of communicating objects and the rising demand for wireless services on the one hand, and the need to reduce the energy cost of radio communications and the associated greenhouse gas emissions on the other, make bandwidth- and power-efficient techniques, such as rotated constellations, particularly interesting. Indeed, rotated constellations achieve better theoretical performance than conventional constellations over fading channels thanks to an inherent Signal Space Diversity (SSD). Nevertheless, several issues prevent a wider deployment of this technique. In this PhD thesis, we therefore propose several original solutions to address these limitations. First, we study the structural properties of M-QAM constellations rotated by the series of rotation angles α = arctan(1/√M), so as to propose a low-complexity detection technique for fading channels with or without erasures. Then, we propose two distinct and original techniques to reduce the PAPR of OFDM systems with SSD: first, a blind SLM technique for which two rotation angles are selected so as to jointly optimize the theoretical bit error rate of the constellations and the theoretical blind decoding performance. In addition, a completely blind interleaving technique is proposed. In order to reduce the blind decoding complexity of this second technique, we rely again on the properties of M-QAM constellations rotated by α = arctan(1/√M) to design a low-complexity estimator. Finally, we propose an original phase estimator for rotated constellations based on a low-complexity smoothing approach inspired by the Maximum a Posteriori (MAP) principle. This estimator operates off-line and allows the estimation of randomly time-varying phase shifts through two phase-locked loops. Unlike conventional estimators, the proposed estimator inherently takes the characteristics of rotated constellations into account, which allows it to approach the Cramér-Rao bound that we derive.
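The rotation angle α = arctan(1/√M) discussed above can be illustrated with a short sketch showing the signal space diversity it creates: after rotation, every symbol of the constellation has a distinct in-phase (and quadrature) projection, so each component alone identifies the symbol. The normalization and helper name below are illustrative choices:

```python
import numpy as np

def rotated_qam(M, alpha=None):
    """Generate a unit-power M-QAM constellation rotated by alpha radians.
    With alpha = arctan(1/sqrt(M)), all M symbols project onto distinct
    I and Q coordinates, which is what gives signal space diversity."""
    m = int(np.sqrt(M))
    levels = np.arange(-(m - 1), m, 2)        # e.g. [-3, -1, 1, 3] for 16-QAM
    I, Q = np.meshgrid(levels, levels)
    symbols = (I + 1j * Q).ravel()
    symbols = symbols / np.sqrt(np.mean(np.abs(symbols) ** 2))  # unit power
    if alpha is None:
        alpha = np.arctan(1 / np.sqrt(M))
    return symbols * np.exp(1j * alpha)

c = rotated_qam(16)
print(len(set(np.round(c.real, 9))))  # 16: every symbol has its own I projection
```

With an unrotated 16-QAM there would be only 4 distinct in-phase values; the rotation spreads them to 16, so a deep fade on one component no longer erases the symbol identity.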
APA, Harvard, Vancouver, ISO, and other styles
46

Gulfo, monsalve Jorge. "GreenOFDM a new method for OFDM PAPR reduction : application to the Internet of Things energy saving." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT106.

Full text
Abstract:
This work is devoted to the study of OFDM modulation and more particularly to its high-PAPR problem. A solution for PAPR reduction, called GreenOFDM, is proposed. Its performance is analyzed and compared with other techniques available in the literature; the achieved performance of GreenOFDM is very promising. The computational complexity of this technique is analyzed in order to achieve an efficient implementation. Two methods are proposed to reduce the total number of operations of the GreenOFDM technique; their performance is obtained by computer simulations. We show how the number of operations can be considerably reduced to obtain an efficient digital implementation. Finally, to demonstrate this efficiency, the energy cost of implementing GreenOFDM on a programmable processor is analyzed and compared to the energy consumption of the analog part of the transmitter. A comparison in terms of energy consumption with other modulation techniques is also carried out.
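GreenOFDM itself is not specified in the abstract; as background, here is a minimal sketch of the classic Selected-Mapping (SLM) approach to OFDM PAPR reduction that such probabilistic techniques build on (parameter values and the ±1 phase alphabet are illustrative assumptions):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm(symbols, n_candidates=16, rng=None):
    """Selected Mapping: multiply the frequency-domain symbols by random
    +/-1 sequences and keep the candidate with the lowest PAPR. The
    identity sequence is kept as candidate 0, so SLM never loses."""
    rng = rng or np.random.default_rng(0)
    best_x, best_papr = None, np.inf
    for k in range(n_candidates):
        phases = (np.ones(len(symbols)) if k == 0
                  else rng.choice([1.0, -1.0], len(symbols)))
        x = np.fft.ifft(symbols * phases)
        p = papr_db(x)
        if p < best_papr:
            best_x, best_papr = x, p
    return best_x, best_papr

rng = np.random.default_rng(1)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=256)  # QPSK subcarriers
baseline = papr_db(np.fft.ifft(X))
_, reduced = slm(X)
print(reduced <= baseline)  # True: candidate 0 is the original mapping
```

The complexity question the abstract addresses is visible here: each candidate costs one extra IFFT, which is exactly the cost such methods try to trim.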
APA, Harvard, Vancouver, ISO, and other styles
47

Fuentes, Muela Manuel. "Non-Uniform Constellations for Next-Generation Digital Terrestrial Broadcast Systems." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/84743.

Full text
Abstract:
Nowadays, the digital terrestrial television (DTT) market is characterized by the high capacity needed for high-definition TV services. An efficient use of the broadcast spectrum is needed, which requires new technologies to guarantee increased capacities. Non-Uniform Constellations (NUC) arise as one of the most innovative techniques to meet those requirements. NUCs reduce the gap between uniform Gray-labelled Quadrature Amplitude Modulation (QAM) constellations and the theoretical unconstrained Shannon limit. With these constellations, the symbols are optimized in both the in-phase (I) and quadrature (Q) components by means of geometrical signal shaping, considering a certain signal-to-noise ratio (SNR) and channel model. There are two types of NUC: one-dimensional and two-dimensional NUCs (1D-NUC and 2D-NUC, respectively). 1D-NUCs maintain the square shape of QAM but relax the distribution of the constellation symbols in a single component, with non-uniform distances between them. These constellations provide better SNR performance than QAM without any increase in demapping complexity. 2D-NUCs also relax the square-shape constraint, allowing the symbol positions to be optimized in both dimensions, thus achieving higher capacity gains and lower SNR requirements. However, the use of 2D-NUCs implies a higher demapping complexity, since a 2D demapper is needed, i.e. the I and Q components cannot be separated. In this dissertation, NUCs are analyzed from both the transmit and receive points of view, using either single-input single-output (SISO) or multiple-input multiple-output (MIMO) antenna configurations. In SISO transmissions, 1D-NUCs and 2D-NUCs are optimized for a wide range of SNRs and different constellation orders. The optimization of rotated 2D-NUCs is also investigated. Even though the demapping complexity is not increased, the SNR gain of these constellations is not significant. The highest rotation gain is obtained for low-order constellations and high SNRs. However, with multi-RF techniques, the SNR gain increases drastically, since the I and Q components are transmitted over different RF channels. In this thesis, multi-RF gains of NUCs with and without rotation are provided for some representative scenarios. At the receiver, two different implementation bottlenecks are explored. First, the demapping complexity of all considered constellations is analyzed. Afterwards, two complexity-reduction algorithms for 2D-NUCs are proposed. Both algorithms drastically reduce the number of distances to compute, and they are finally combined in a single demapper. The quantization of NUCs is also explored in this dissertation, since the LLR values and I/Q components are modified when using these constellations, compared to traditional QAM constellations. A new algorithm based on the optimization of the quantizer levels for a particular constellation is proposed. The use of NUCs in multi-antenna communications is also investigated. It covers the optimization over one or two antennas, the use of a power imbalance, the cross-polar discrimination (XPD) between receive antennas, and the use of different demappers. Assuming different values for the evaluated parameters, new Multi-Antenna Non-Uniform Constellations (MA-NUC) are obtained by means of a re-optimization process particularized for MIMO. At the receiver, an extended demapping-complexity analysis shows that the use of 2D-NUCs in MIMO extremely increases the demapping complexity. As an alternative, an efficient solution for 2D-NUCs and MIMO systems based on Soft-Fixed Sphere Decoding (SFSD) is proposed. The main drawback is that SFSD demappers do not work with 2D-NUCs as-is, since they perform a Successive Interference Cancellation (SIC) step that requires separate I and Q components. The proposed method quantifies the closest symbol using Voronoi regions, allowing SFSD demappers to work.

Fuentes Muela, M. (2017). Non-Uniform Constellations for Next-Generation Digital Terrestrial Broadcast Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/84743
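The demapping-complexity point above can be made concrete: for a general 2D constellation, a max-log demapper must compute a distance from each received sample to all M symbols, whereas square QAM allows independent per-component demapping. A minimal sketch follows (QPSK is used only for brevity, and the Gray labeling shown is a hypothetical mapping for illustration):

```python
import numpy as np

def maxlog_llrs(r, constellation, labels, noise_var):
    """Max-log LLR demapping of one received sample r: one squared
    distance per constellation point, then a min over each bit class.
    For a 2D (non-separable) constellation all M distances are needed."""
    d2 = np.abs(r - constellation) ** 2 / noise_var
    n_bits = labels.shape[1]
    llrs = np.empty(n_bits)
    for b in range(n_bits):
        # LLR sign convention: positive favors bit = 0
        llrs[b] = d2[labels[:, b] == 1].min() - d2[labels[:, b] == 0].min()
    return llrs

# QPSK example with a hypothetical Gray labeling
const = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
llr = maxlog_llrs(0.9 + 0.8j, const, labels, noise_var=0.1)
print((llr < 0).astype(int))  # [0 0]: the bits of the nearest symbol
```

The per-sample cost is O(M) complex distances; the thesis's complexity-reduction algorithms attack precisely this term for high-order 2D-NUCs.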
APA, Harvard, Vancouver, ISO, and other styles
48

Zaylaa, Amira. "Analyse et extraction de paramètres de complexité de signaux biomédicaux." Thesis, Tours, 2014. http://www.theses.fr/2014TOUR3315/document.

Full text
Abstract:
The analysis of biomedical time series derived from nonlinear dynamic systems is challenging because of the chaotic nature of these series. Clinicians can rely on only a few classical parameters to assess the state of patients and fetuses. Although valuable complexity invariants such as multi-fractal parameters, entropies and recurrence plots exist, they remain unsatisfactory in certain cases. To overcome this limitation, this dissertation proposes new entropy invariants, contributes to multi-fractal analysis and develops signal-based (unbiased) recurrence plots built on the dynamic transitions of time series. The main aim is to improve the discrimination between healthy and distressed biomedical systems, particularly fetuses, by processing the time series with these techniques. The techniques were validated on the Lorenz system, logistic maps and fractional Brownian motions, which model chaotic and random time series. They were then applied to real fetal heart rate signals recorded in the third trimester of pregnancy. Statistical measures comprising the relative error, standard deviation, sensitivity, specificity, precision and accuracy were employed to evaluate the detection performance. The high-order entropy invariants achieved high discrimination rates, multi-fractal analysis using a structure function enhanced the detection of fetal states, and the unbiased cross-determinism invariant improved the discrimination process. The significance of these techniques lies in their post-processing codes, which could be built into cutting-edge portable machines offering advanced discrimination and detection of Intrauterine Growth Restriction prior to fetal death. This work was devoted to fetal heart rates, but time series generated by other nonlinear dynamic systems should be considered in future work.
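One of the entropy-type complexity invariants mentioned above can be sketched as follows. This is a generic sample-entropy estimator, not the thesis's new invariants; the tolerance choice r = 0.2·σ and the embedding dimension m = 2 are common conventions assumed here:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -log of the ratio of (m+1)-length to m-length
    template matches within tolerance r (Chebyshev distance).
    Low values = regular signal, high values = irregular signal."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1   # exclude the self-match
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # highly predictable
irregular = rng.standard_normal(500)                # white noise
print(sample_entropy(regular) < sample_entropy(irregular))  # True
```

The same "regular vs. irregular" ordering is what such invariants exploit to separate healthy from distressed fetal heart rate recordings.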
APA, Harvard, Vancouver, ISO, and other styles
49

Kannappa, Sandeep Mavuduru. "Reduced Complexity Viterbi Decoders for SOQPSK Signals over Multipath Channels." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604300.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California

High-data-rate communication between airborne vehicles and ground stations over the bandwidth-constrained aeronautical telemetry channel is enabled by the development of bandwidth-efficient Advanced Range Telemetry (ARTM) waveforms. This communication takes place over a multipath channel consisting of two components, a line-of-sight path and one or more ground-reflected paths, which result in frequency-selective fading. We concentrate on the ARTM SOQPSK-TG transmit waveform suite and decode the information bits using the reduced-complexity Viterbi algorithm. Two different methodologies are proposed to implement reduced-complexity Viterbi decoders in multipath channels. The first method jointly equalizes the channel and decodes the information bits using the reduced-complexity Viterbi algorithm, while the second applies a minimum mean square error equalizer before the Viterbi decoder. An extensive numerical study compares the performance of these methodologies. We also demonstrate the performance gain offered by our reduced-complexity Viterbi decoders over the existing linear receiver. In the numerical study, both perfect and estimated channel state information are considered.
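The Viterbi decoding underlying this work can be sketched on a textbook rate-1/2, K=3 convolutional code. The (7, 5)-octal code below is an illustrative stand-in, not the SOQPSK-TG trellis, and no multipath equalization is modeled:

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder with generators (7, 5) octal."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]
        s1, s2 = b, s1
    return out

def viterbi_decode(rx):
    """Hard-decision Viterbi decoder for the (7, 5) code: 4 trellis states,
    Hamming branch metrics, one survivor path kept per state."""
    n = len(rx) // 2
    INF = float("inf")
    metric = [0.0, INF, INF, INF]          # encoder starts in state 0
    paths = [[], None, None, None]
    for t in range(n):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s2 = s >> 1, s & 1
            for b in (0, 1):
                o0, o1 = b ^ s1 ^ s2, b ^ s2   # expected code bits
                ns = (b << 1) | s1             # next trellis state
                m = metric[s] + (o0 != r0) + (o1 != r1)
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
rx = conv_encode(msg)
rx[3] ^= 1                                  # one channel bit error
print(viterbi_decode(rx) == msg)            # True: the early single error is corrected
```

Reduced-complexity variants of this algorithm prune or merge trellis states; the joint equalization approach in the paper additionally folds the channel memory into the trellis.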
APA, Harvard, Vancouver, ISO, and other styles
50

Vaiter, Samuel. "Régularisations de Faible Complexité pour les Problèmes Inverses." Phd thesis, Université Paris Dauphine - Paris IX, 2014. http://tel.archives-ouvertes.fr/tel-01026398.

Full text
Abstract:
This thesis is devoted to recovery guarantees and sensitivity analysis of variational regularization for noisy linear inverse problems. The problem considered is a convex optimization combining a data-fidelity term and a regularization term promoting solutions living in a so-called low-complexity space. Our approach, based on the notion of partly smooth functions, allows the study of a wide variety of regularizations, such as analysis and structured sparsity, anti-sparsity and low-rank structure. We first analyze robustness to noise, both in terms of the distance between the solutions and the original object and in terms of the stability of the promoted model space. We then study the stability of these optimization problems under perturbations of the observations. Finally, from random observations, we construct an unbiased estimator of the risk in order to obtain a parameter selection scheme.
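The variational problems studied here, a data-fidelity term plus a low-complexity-promoting regularizer, can be illustrated with their best-known instance: ℓ1 (sparsity) regularization solved by iterative soft-thresholding. This is a generic sketch with made-up problem sizes, not the thesis's analysis:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    the sparsity-promoting instance of low-complexity regularization."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

# Noisy underdetermined problem with a 3-sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [2.0, -1.5, 3.0]
y = A @ x_true + 0.01 * rng.standard_normal(64)

x_hat = ista(A, y, lam=0.05)
print(sorted(np.argsort(np.abs(x_hat))[-3:]))  # expected: the true support [5, 40, 90]
```

Recovering the correct support (the "model space" of the solution) despite noise is exactly the kind of stability guarantee the thesis establishes for a much broader family of partly smooth regularizers.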
APA, Harvard, Vancouver, ISO, and other styles