Dissertations / Theses on the topic 'Compressive spectrum sensing'

Consult the top 25 dissertations / theses for your research on the topic 'Compressive spectrum sensing.'


1

Zhang, Xingjian. "Compressive spectrum sensing in cognitive IoT." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/44700.

Abstract:
With the rise of new paradigms in wireless communications such as the Internet of Things (IoT), the current static frequency allocation policy faces a primary challenge of spectrum scarcity, which encourages IoT devices to gain cognitive capabilities for accessing underutilised spectrum in the temporal and spatial dimensions. Wideband spectrum sensing is one of the key functions enabling dynamic spectrum access, but it entails a major implementation challenge in terms of sampling rate and computation cost, since the sampling rate of analog-to-digital converters (ADCs) must be at least twice the spectrum bandwidth according to the Nyquist-Shannon sampling theorem. By exploiting the sparse nature of the wideband spectrum, sub-Nyquist sampling and sparse signal recovery have shown potential in handling these problems; both are directly related to compressive sensing (CS). To invoke sub-Nyquist wideband spectrum sensing in IoT, blind signal acquisition with low-complexity sparse recovery is desirable on compact IoT devices. Moreover, with cooperation among distributed IoT devices, the complexity of sampling and reconstruction can be further reduced with a performance guarantee. Specifically, an adaptively-regularized iterative reweighted least squares (AR-IRLS) reconstruction algorithm is proposed to speed up the convergence of reconstruction with fewer iterations. Furthermore, a low-complexity compressive spectrum sensing algorithm is proposed to reduce the computation cost of each iteration of the IRLS-based reconstruction from cubic time to linear time. In addition, to transfer the computation burden from the IoT devices to the core network, a joint iterative reweighted sparse recovery scheme is proposed that uses occupied-channel information from a geolocation database to reduce the complexity of signal reconstruction.
Since numerous IoT devices access or release the spectrum randomly, the sparsity levels of wideband spectrum signals are varying and unknown. A blind CS-based sensing algorithm is proposed to enable local secondary users (SUs) to adaptively adjust the sensing time or sampling rate without knowledge of the spectral sparsity. Apart from signal reconstruction at the back end, a distributed sub-Nyquist sensing scheme is proposed that utilizes surrounding IoT devices to jointly sample the spectrum based on multi-coset sampling theory, in which only the minimum number of low-rate ADCs on the IoT devices is required to form coset samplers, without prior knowledge of the number of occupied channels or the signal-to-noise ratios. The models of the proposed algorithms are derived and verified by numerical analyses and tested on both real-world and simulated TV white space signals.
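The IRLS family of algorithms referenced above solves the sparse recovery problem by alternating a weighted least-squares solve with a reweighting step. As a point of reference, here is a minimal generic IRLS sketch, not the thesis's AR-IRLS; the matrix, dimensions and annealing schedule are illustrative assumptions:

```python
import numpy as np

def irls_sparse_recovery(A, y, p=0.5, eps=1.0, n_iter=50):
    """Recover a sparse x from y = A x (m < n) by iteratively
    reweighted least squares: each iterate is the minimum
    weighted-norm solution consistent with the measurements."""
    m, n = A.shape
    x = A.T @ np.linalg.solve(A @ A.T, y)        # least-norm initial guess
    for _ in range(n_iter):
        w = (x**2 + eps) ** (1 - p / 2)          # weights approx |x|^(2-p)
        AW = A * w                               # = A @ diag(w)
        x = w * (A.T @ np.linalg.solve(AW @ A.T, y))
        eps = max(eps / 10, 1e-9)                # anneal the regularizer
    return x

rng = np.random.default_rng(0)
n, m, s = 256, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = irls_sparse_recovery(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

Each iteration of this plain version costs a cubic-time dense solve, which is exactly the per-iteration cost the low-complexity algorithm above is designed to bring down to linear time.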
2

Liu, Feng. "Compressive Measurement of Spread Spectrum Signals." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/347310.

Abstract:
Spread Spectrum (SS) techniques are methods used in communication systems where the spectrum of the signal is spread over a much wider bandwidth. The large bandwidth of the resulting signals makes SS signals difficult to intercept using conventional methods based on Nyquist sampling. Recently, a novel concept called compressive sensing has emerged. Compressive sensing theory suggests that a signal can be reconstructed from far fewer measurements than the Shannon-Nyquist theorem requires, provided that the signal can be sparsely represented in a dictionary. In this work, motivated by this concept, we study compressive approaches to detect and decode SS signals. We propose compressive detection and decoding systems based both on random measurements (which have been the main focus of the CS literature) and on designed measurement kernels that exploit prior knowledge of the SS signal. Compressive sensing methods for both Frequency-Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) systems are proposed.
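To make the detection idea concrete, the sketch below detects a known DSSS spreading code directly from random compressive measurements by correlating them against the compressed template. The dimensions, SNR and threshold are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1024, 64                       # Nyquist-rate length vs. compressive measurements
code = rng.choice([-1.0, 1.0], n)     # known PN spreading sequence (assumed)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
template = Phi @ code                 # compressed copy of the known signature

def detect(y, template, thresh):
    """Declare the SS signal present if the normalized correlation
    of the measurements with the compressed template is large."""
    return y @ template / np.linalg.norm(template) > thresh

y_present = Phi @ (code + 0.5 * rng.standard_normal(n))   # signal buried in noise
y_absent = Phi @ (0.5 * rng.standard_normal(n))           # noise only
print(detect(y_present, template, 16.0), detect(y_absent, template, 16.0))
```

A designed (rather than random) measurement kernel would concentrate more energy on the code subspace, which is the second direction the abstract mentions.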
3

Liu, Feng. "Spread Spectrum Signal Detection from Compressive Measurements." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579660.

Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV

Spread Spectrum (SS) techniques are methods used to deliberately spread the spectrum of transmitted signals in communication systems. The increased bandwidth makes detection of these signals challenging for non-cooperative receivers. In this paper, we investigate detection of Frequency Hopping Spread Spectrum (FHSS) signals from compressive measurements. The theoretical and simulated performances of the proposed methods are compared to those of the conventional methods.
4

Qin, Zhijin. "Compressive sensing over TV white space in wideband cognitive radio." Thesis, Queen Mary, University of London, 2016. http://qmro.qmul.ac.uk/xmlui/handle/123456789/24244.

Abstract:
Spectrum scarcity is an important challenge faced by high-speed wireless communications. Meanwhile, owing to the current spectrum assignment policy, a large portion of the spectrum is underutilized. Motivated by this, cognitive radio (CR) has emerged as one of the most promising candidate solutions to improve spectrum utilization, by allowing secondary users (SUs) to opportunistically access temporarily unused spectrum without introducing harmful interference to primary users. Moreover, the opening of TV white space (TVWS) gives us the confidence to enable CR over TVWS spectrum. A crucial requirement in CR networks (CRNs) is wideband spectrum sensing, in which SUs should detect spectral opportunities across a wide frequency range. However, wideband spectrum sensing can lead to unaffordably high sampling rates at energy-constrained SUs. Compressive sensing (CS) was developed to overcome this issue, enabling sub-Nyquist sampling by exploiting sparsity. As spectrum utilization is low, spectral signals exhibit a natural sparsity in the frequency domain, which motivates the application of CS in wideband CRNs. This thesis proposes several effective algorithms for invoking CS in wideband CRNs. Specifically, a robust compressive spectrum sensing algorithm is proposed to reduce the computational complexity of signal recovery. Additionally, a low-complexity algorithm is designed in which the original signals are recovered from fewer measurements, as a geolocation database is invoked to provide prior information. Moreover, the security enhancement of CRNs is addressed by proposing a malicious user detection algorithm, in which data corrupted by malicious users are removed during the process of matrix completion (MC). One key spotlight feature of this thesis is that both real-world and simulated signals over TVWS are invoked to evaluate network performance.
Besides invoking CS and MC to reduce energy consumption, each SU is supposed to harvest energy from radio frequency. The proposed algorithm is capable of offering higher throughput by performing signal recovery at a remote fusion center.
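The role of a geolocation database in the recovery step can be illustrated with a greedy sparse solver whose support is seeded by channels the database already reports as occupied. The sketch below is plain Orthogonal Matching Pursuit with a support prior, a simplified stand-in (with made-up dimensions and channel indices) for the database-aided scheme described above:

```python
import numpy as np

def omp_with_prior(A, y, prior_support, k):
    """Orthogonal Matching Pursuit seeded with a known partial support,
    so fewer greedy iterations are needed to find the rest."""
    support = list(prior_support)
    while len(support) < k:
        if support:
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ coef          # residual after current fit
        else:
            r = y
        j = int(np.argmax(np.abs(A.T @ r)))       # best-matching new channel
        if j in support:
            break
        support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m = 128, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 17, 60, 99]] = [1.0, -2.0, 1.5, 0.7]
# channels 3 and 17 assumed known-occupied from the database
x_hat = omp_with_prior(A, A @ x_true, prior_support=[3, 17], k=4)
print(np.allclose(x_hat, x_true, atol=1e-8))
```

Seeding half of the support halves the number of greedy iterations here, which is the general flavour of the complexity reduction the abstract claims.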
5

Lagunas, Targarona Eva. "Compressive sensing based candidate detector and its applications to spectrum sensing and through-the-wall radar imaging." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/144629.

Abstract:
Signal acquisition is a main topic in signal processing. The well-known Shannon-Nyquist theorem lies at the heart of conventional analog-to-digital converters, stating that to recover a signal perfectly it must be sampled at a constant rate of at least twice the highest frequency present in the signal. However, the Shannon-Nyquist theorem provides a worst-case rate bound for any bandlimited data. In this context, Compressive Sensing (CS) is a new framework in which data acquisition and data processing are merged. CS compresses the data as it is sampled by exploiting the sparsity present in many common signals, providing an efficient way to reduce the number of measurements needed for perfect recovery of the signal. CS has exploded in recent years, with thousands of technical publications and applications being developed in areas such as channel coding, medical imaging, computational biology and many more. Unlike the majority of the CS literature, this Ph.D. thesis surveys CS theory applied to signal detection, estimation and classification, which does not necessarily require perfect signal reconstruction or approximation. In particular, a novel CS-based detection technique that exploits prior information about some features of the signal is presented. The basic idea is to scan the domain where the signal is expected to lie with a candidate signal estimated from the known features. The proposed detector is called a candidate-based detector because its main goal is to react only when the candidate signal is present. The CS-based candidate detector is applied to two topical detection problems. First, CS theory is used to address the sampling bottleneck in wideband spectrum sensing for open spectrum scenarios. The radio spectrum is a natural resource that is becoming scarce due to the current spectrum assignment policy and the increasing number of licensed wireless systems.
To deal with the crowded spectrum problem, a new spectrum management philosophy is required. In this context, Cognitive Radio (CR) emerges as a solution. CR benefits from the poor usage of the spectrum by allowing temporarily unused licensed spectrum to be used by secondary users who hold no spectrum licenses. The identification of available spectrum is commonly known as spectrum sensing. However, one of the most important problems spectrum sensing techniques must face is the scanning of wide bands of frequencies, which implies high sampling rates. The proposed CS-based candidate detector exploits prior knowledge of the primary users, not only to relax the sampling bottleneck, but also to estimate the candidate signals' frequency, power and angle of arrival without reconstructing the whole spectrum. The second application is Through-the-Wall Radar Imaging (TWRI). Sensing through obstacles such as walls, doors, and other visually opaque materials using microwave signals is emerging as a powerful tool supporting a range of civilian and military applications. High-resolution imaging is achieved if large-bandwidth signals and long antenna arrays are used; however, this implies acquiring and processing large volumes of data. Decreasing the number of acquired samples can also be helpful in TWRI from a logistic point of view, as some of the data measurements in space and frequency can be difficult, or impossible, to obtain. In this thesis, we address the problem of imaging building interior structures using a reduced number of measurements. The proposed technique for determining the building layout is based on prior knowledge about common construction practices.
Real data collection experiments in a laboratory environment, using the Radar Imaging Lab facility at the Center for Advanced Communications, Villanova University, USA, are conducted to validate the proposed approach.
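A toy version of the candidate-based detector for spectrum sensing: the domain (here, a grid of frequencies) is scanned with compressed candidate tones, and the detector reacts at the bin where the correlation peaks. The dictionary of cosine candidates and all dimensions are illustrative assumptions, not the thesis's models:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 512, 96
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive sampler
t = np.arange(n)

# candidate dictionary: one unit-norm tone per scanned frequency bin
freqs = np.arange(1, 64)
D = np.stack([np.cos(2 * np.pi * f * t / n) for f in freqs]).T
D /= np.linalg.norm(D, axis=0)

x = 3.0 * D[:, 20] + 0.1 * rng.standard_normal(n)  # primary user at freqs[20]
y = Phi @ x                                        # compressive measurements

scores = np.abs(y @ (Phi @ D))   # correlate against each compressed candidate
print(freqs[int(np.argmax(scores))])               # bin where the detector reacts
```

The same scan generalizes to joint frequency/power/angle candidates, and at no point is the full spectrum reconstructed.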
6

Amrani, Naoufal. "Spectral decorrelation for coding remote sensing data." Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/402237.

Abstract:
Today, remote sensing is essential for many applications in Earth observation.
The potential of remote sensing to provide valuable information enables a better understanding of Earth characteristics and human activities. Recent advances in satellite sensors make it possible to cover large areas, producing images with unprecedented spatial, spectral and temporal resolution. This amount of data implies a need for efficient compression techniques to improve storage and transmission capabilities. Most of these techniques are dominated by transforms or prediction methods. This thesis deeply analyzes the state-of-the-art techniques and provides efficient solutions that improve the compression of remote sensing data. In order to understand the non-linear independence and data compaction of hyperspectral images, we investigate improving on Principal Component Analysis (PCA), which provides optimal independence for Gaussian sources. We analyse the lossless coding efficiency of Principal Polynomial Analysis (PPA), which generalizes PCA by removing non-linear relations among components using polynomial regression. We show that principal components are not able to predict each other through polynomial regression, resulting in no improvement over PCA at the cost of higher complexity and a larger amount of side information. This analysis allows us to better understand the concept of prediction in the transform domain for compression purposes. Therefore, rather than using expensive sophisticated transforms like PCA, we focus on theoretically suboptimal but simpler transforms like the Discrete Wavelet Transform (DWT). Meanwhile, we adopt predictive techniques to exploit any remaining statistical dependence. Thus, we introduce a novel scheme, called Regression Wavelet Analysis (RWA), to increase coefficient independence in remote sensing images. The algorithm employs multivariate regression to exploit the relationships among wavelet-transformed components.
The proposed RWA has many important advantages, such as low complexity and no dynamic range expansion. Nevertheless, its most important advantage is its performance for lossless coding. Extensive experimental results over a wide range of sensors, such as AVIRIS, IASI and Hyperion, indicate that RWA outperforms the most prominent transforms, like PCA and wavelets, and also the best recent coding standard, CCSDS-123. We extend the benefits of RWA to progressive lossy-to-lossless coding. We show that RWA can attain rate-distortion performance superior to that obtained with state-of-the-art techniques. To this end, we propose a Prediction Weighting Scheme that captures the prediction significance of each transformed component. The reason for using a weighting strategy is that coefficients with similar magnitude can have extremely different impact on the reconstruction quality. For a deeper analysis, we also investigate the bias in the least-squares parameters when coding at low bitrates. We show that the RWA parameters are unbiased for lossy coding, where the regression models are used not with the original transformed components but with the recovered ones, which lack some information due to the lossy reconstruction. We show that hyperspectral images with a large spectral dimension can be coded via RWA without side information and at a lower computational cost. Finally, we introduce a very low-complexity version of the RWA algorithm, in which the prediction is based on only a few components while the performance is maintained. When the complexity of RWA is taken to an extremely low level, careful model selection is necessary. In contrast to expensive selection procedures, we propose a simple and efficient strategy called 'neighbor selection' for using small regression models.
On a set of well-known and representative hyperspectral images, these small models maintain the excellent coding performance of RWA, while reducing the computational cost by about 90%.
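The core of RWA, stripped of the wavelet stage, is ordinary multivariate regression between spectral components: each band is predicted from its neighbours and only the (much smaller) residual needs to be encoded. A minimal sketch on synthetic correlated bands; the wavelet transform across the spectral axis is omitted for brevity and all data are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix = 1000
base = rng.standard_normal(n_pix)                 # shared scene content
bands = np.stack([base + 0.1 * rng.standard_normal(n_pix) for _ in range(3)])

# predict band 1 from bands 0 and 2 via least squares (the regression step)
X = np.column_stack([bands[0], bands[2], np.ones(n_pix)])
beta, *_ = np.linalg.lstsq(X, bands[1], rcond=None)
residual = bands[1] - X @ beta

# the residual has far lower variance than the raw band, so it codes cheaply
print(bands[1].std(), residual.std())
```

For lossless coding the decoder refits or receives the same regression and adds the residual back, recovering the band exactly; the 'neighbor selection' strategy above amounts to keeping only a few predictor columns in `X`.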
7

Merlet, Sylvain. "Acquisition compressée en IRM de diffusion." PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00916582.

Abstract:
This thesis is devoted to developing new acquisition and processing methods for diffusion MRI (dMRI) data, in order to characterize the diffusion of water molecules in white-matter fibers at the voxel scale. In particular, we work on accurately reconstructing the Ensemble Average Propagator (EAP), which represents the diffusion probability function of water molecules. Several diffusion models, such as the diffusion tensor or the orientation distribution function, are widely used in the dMRI community to quantify the diffusion of water molecules in the brain. These models are partial representations of the EAP and were developed because of the small number of measurements required for their estimation. However, it is important to be able to reconstruct the EAP accurately in order to gain a better understanding of brain mechanisms and to improve the diagnosis of neurological disorders. An accurate estimation of the EAP requires the acquisition of many diffusion images sensitized to different orientations in q-space, which makes its estimation too long to be used on most clinical scanners. In this thesis, we use sparse reconstruction techniques, in particular the technique known as Compressive Sensing (CS), to accelerate the computation of the EAP. The multiple aspects of CS theory and its application to dMRI are presented in this thesis.
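The sparse-recovery step that CS contributes here can be illustrated with the standard iterative soft-thresholding algorithm (ISTA) for the l1-regularized problem. This is a generic sketch of the CS machinery, not the thesis's EAP-specific bases or q-space sampling schemes; dimensions are made up:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1,
    the l1-regularized recovery problem at the heart of CS."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))        # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(5)
n, m, s = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, s, replace=False)
x_true = np.zeros(n)
x_true[support] = 1.0
x_hat = ista(A, A @ x_true)
print(sorted(np.argsort(np.abs(x_hat))[-s:]) == sorted(support))
```

In the dMRI setting, the columns of `A` would be a dictionary in which the EAP is sparse, and `y` the few diffusion measurements actually acquired.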
APA, Harvard, Vancouver, ISO, and other styles
8

Dunlop, Matthew, and Phillip Poon. "Adaptive Feature-Specific Spectral Imaging Classifier (AFSSI-C)." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579667.

Full text
Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV

The AFSSI-C is a spectral imager that generates spectral classifications directly, in fewer measurements than are required by traditional systems, which measure the spectral datacube and only later interpret it to make a material classification. By utilizing adaptive features to constantly update the conditional probabilities of the different hypotheses, the AFSSI-C avoids the overhead of directly measuring every element in the spectral datacube. The system architecture, feature design methodology, simulation results, and preliminary experimental results are given.
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Yang. "2D signal processing: efficient models for spectral compressive sensing & single image reflection suppression." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6667.

Full text
Abstract:
Two efficient models in two-dimensional signal processing are proposed in the thesis. The first model deals with large scale spectral compressive sensing in continuous domain, which aims to recover a 2D spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500*500, whereas traditional approaches only handle signals of size around 20*20. The second model deals with the problem of single image reflection suppression. Removing the undesired reflection from images taken through glass is of great importance in computer vision. It serves as a means to enhance the image quality for aesthetic purposes as well as to preprocess images in machine learning and pattern recognition applications. We propose a convex model to suppress the reflection from a single input image. Our model implies a partial differential equation with gradient thresholding, which is solved efficiently using Discrete Cosine Transform. Extensive experiments on synthetic and real-world images demonstrate that our approach achieves desirable reflection suppression results and dramatically reduces the execution time compared to the state of the art.
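The core step of the reflection-suppression model described above, a partial differential equation solved efficiently with the Discrete Cosine Transform, can be illustrated with a minimal sketch. This is not the thesis implementation: the function name `poisson_dct` and the simple reflective-boundary (Neumann) discretization are illustrative assumptions. The DCT-II basis diagonalizes the reflective-boundary Laplacian, so the Poisson solve reduces to a pointwise division in the transform domain.

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_dct(f):
    """Solve the 5-point Neumann Laplacian(u) = f via DCT-II diagonalization.

    The DCT-II basis diagonalizes the reflective-boundary Laplacian, so the
    solve costs one forward transform, a pointwise division, and one inverse.
    """
    m, n = f.shape
    fh = dctn(f, norm="ortho")
    lam_x = 2.0 * (np.cos(np.pi * np.arange(m) / m) - 1.0)[:, None]
    lam_y = 2.0 * (np.cos(np.pi * np.arange(n) / n) - 1.0)[None, :]
    denom = lam_x + lam_y
    denom[0, 0] = 1.0          # the constant mode is undetermined...
    uh = fh / denom
    uh[0, 0] = 0.0             # ...so pin the solution to zero mean
    return idctn(uh, norm="ortho")

# round-trip check: apply the reflective-boundary Laplacian, then invert it
rng = np.random.default_rng(0)
u = rng.standard_normal((16, 16))
u -= u.mean()                  # the solver returns the zero-mean solution
up = np.pad(u, 1, mode="edge")
f = up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u
u_rec = poisson_dct(f)
```

In a gradient-thresholding pipeline, `f` would be the divergence of the thresholded gradient field; the solver then reintegrates it into the reflection-suppressed image.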
APA, Harvard, Vancouver, ISO, and other styles
10

Álvarez, Cortés Sara. "Pyramidal regression-based coding for remote sensing data." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/667742.

Full text
Abstract:
Remote sensing hyperspectral data have hundreds or thousands of spectral components at very similar wavelengths. Storing and transmitting them entails excessive demands on bandwidth and on on-board memory resources, which are already strongly restricted. This leads to stopping data capture or discarding some of the already recorded information without further processing. To alleviate these limitations, data-compression techniques are applied. Besides, sensor technology is continuously evolving, acquiring higher-dimensional data. Consequently, in order not to jeopardize the performance of future space missions, more competitive compression methods are required. Regression Wavelet Analysis (RWA) is the state-of-the-art lossless compression method regarding the trade-off between computational complexity and coding performance. RWA is introduced as a lossless spectral transform followed by JPEG 2000. It applies one level of Haar Discrete Wavelet Transform (DWT) decomposition followed by a regression operation. Several regression models (Maximum, Restricted and Parsimonious) and variants (only for the Maximum model) have been proposed. With the motivation of outperforming the latest compression techniques for remote sensing data, we began by focusing on improving the coding performance and/or the computational complexity of RWA. First, we conducted an exhaustive study of the influence of replacing the underlying wavelet filter of RWA with Integer Wavelet Transforms that are more competitive in terms of energy compaction. 
To this end, we reformulated the Restricted model, reducing the execution time, increasing the compression ratio, and preserving some degree of component scalability. Besides, we showed that the regression variants can also be applied to the other models, decreasing their computational complexity while scarcely penalizing the coding performance. Compared to other lowest- and highest-complexity techniques, our new configurations provide, respectively, better or similar compression ratios. After gaining a comprehensive understanding of the behavior of each operation block, we described the impact of applying a Predictive Weighting Scheme (PWS) on the Progressive Lossy-to-Lossless (PLL) compression performance. PLL decoding is possible thanks to the rate-control system of JPEG 2000. Applying this PWS to all the regression models and variants of RWA coupled with JPEG 2000 (PWS-RWA + JPEG 2000) produces superior outcomes, even for multi-class digital classification. From experimentation, we concluded that improved coding performance does not necessarily entail better classification outcomes. Indeed, in comparison with other widespread techniques that obtain better rate-distortion results, PWS-RWA + JPEG 2000 yields better classification outcomes when the distortion in the recovered scene is high. Moreover, the weighted framework presents a far more stable classification-versus-bitrate trade-off. JPEG 2000 may be too computationally expensive for on-board computation. In order to obtain a cheaper implementation, we present results for RWA followed by another coder amenable to on-board operation. This framework includes a smart and simple criterion aiming at the lowest bitrates. This final pipeline outperforms the original RWA + JPEG 2000 and other state-of-the-art lossless techniques, obtaining average coding gains between 0.10 and 1.35 bits-per-pixel-per-component. 
Finally, we present the first lossless/near-lossless compression technique based on regression in a pyramidal multiresolution scheme. It expands RWA by introducing quantization and a feedback loop to control the quantization error in each decomposition level independently, while preserving the computational complexity. To this end, we provide a mathematical formulation that limits the maximum admissible absolute error in the reconstruction. Moreover, we avoid testing the huge number of possible quantization-step combinations by establishing a quantization step-allocation scheme. Our approach, named NLRWA, attains competitive coding performance and superior scene-quality retrieval. In addition, when coupled with a bitplane entropy encoder, NLRWA supports progressive lossy-to-lossless/near-lossless compression and some degree of embeddedness.
APA, Harvard, Vancouver, ISO, and other styles
11

Poon, Phillip K., Esteban Vera, and Michael E. Gehm. "Computational hyperspectral unmixing using the AFSSI-C." SPIE-INT SOC OPTICAL ENGINEERING, 2016. http://hdl.handle.net/10150/621544.

Full text
Abstract:
We have previously introduced a high throughput multiplexing computational spectral imaging device. The device measures scalar projections of pseudo-arbitrary spectral filters at each spatial pixel. This paper discusses simulation and initial experimental progress in performing computational spectral unmixing by taking advantage of the natural sparsity commonly found in the fractional abundances. The simulation results show a lower unmixing error compared to traditional spectral imaging devices. Initial experimental results demonstrate the ability to directly perform spectral unmixing with less error than multiplexing alone.
APA, Harvard, Vancouver, ISO, and other styles
12

Llave, Boris Chullo. "Aplicação do método do Gradiente Espectral Projetado ao problema de Compressive Sensing." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-02102012-162650/.

Full text
Abstract:
The theory of compressive sensing has provided a new acquisition and data-recovery strategy with good results in the image-processing area. This theory guarantees, with high probability, the recovery of a signal from a reduced sampling rate below the Nyquist-Shannon limit. The problem of recovering the original signal from the samples consists in solving an optimization problem. The Spectral Projected Gradient (SPG) is a method for minimizing continuous functions over convex sets which has often been applied to the problem of recovering the original signal from sampled signals. This work is dedicated to the study and application of the Spectral Projected Gradient method to Compressive Sensing problems.
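The SPG iteration named above can be sketched as follows. This is a minimal illustration on a least-squares objective over the nonnegative orthant, not the thesis's solver: the function name `spg` is an assumption, and the nonmonotone line search used by the full method is omitted for brevity, keeping only the projection and the Barzilai-Borwein "spectral" step.

```python
import numpy as np

def spg(A, b, project, x0, iters=300, tol=1e-12):
    """Minimal Spectral Projected Gradient for min 0.5*||Ax - b||^2 over a convex set."""
    x = project(np.asarray(x0, dtype=float))
    g = A.T @ (A @ x - b)                 # gradient of the quadratic objective
    alpha = 1.0
    for _ in range(iters):
        x_new = project(x - alpha * g)    # projected gradient step
        s = x_new - x
        if np.linalg.norm(s) < tol:
            break
        g_new = A.T @ (A @ x_new - b)
        y = g_new - g
        sy = s @ y
        # spectral (Barzilai-Borwein) step length, kept in a safe range
        alpha = float(np.clip((s @ s) / sy, 1e-8, 1e8)) if sy > 0 else 1.0
        x, g = x_new, g_new
    return x

# demo: consistent nonnegative least squares, projected onto the orthant
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.abs(rng.standard_normal(10))
b = A @ x_true
x_hat = spg(A, b, project=lambda v: np.maximum(v, 0.0), x0=np.zeros(10))
```

For compressive sensing, `project` would instead be the projection onto an l1-ball, which promotes the sparsity that underlies signal recovery below the Nyquist rate.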
APA, Harvard, Vancouver, ISO, and other styles
13

Hajji, Zahran. "Gestion des interférences dans les systèmes large-scale MIMO pour la 5G." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0109.

Full text
Abstract:
The thesis is set in the perspective of the explosion of data traffic generated by the increase in the number of users as well as the growth of bit rates, which must be taken into account in the definition of future generations of radio-cellular communications. One solution is large-scale MIMO technology (MIMO systems of large size), which poses several challenges. The design of new low-complexity detection algorithms is indispensable, since conventional algorithms are no longer suited to this configuration because of their poor detection performance or their excessive complexity as a function of the number of antennas. A first contribution of the thesis is an algorithm based on the technique of compressed sensing, exploiting the properties of finite-alphabet signals. Applied to large-scale, determined and under-determined MIMO systems, this algorithm achieves promising performance (detection quality, complexity) superior to state-of-the-art algorithms. A thorough theoretical study was conducted to determine the optimal operating conditions and the statistical distribution of the outputs. A second contribution is the integration of the original algorithm into an iterative receiver, differentiating the coded case (with an error-correcting code) from the uncoded one. Another challenge in keeping the promises of large-scale MIMO systems (high spectral efficiency) is channel estimation. A third contribution of the thesis is the proposal of semi-blind channel-estimation algorithms that work with a minimal training-sequence size (equal to the number of users) and reach performance very close to the theoretical bound.
APA, Harvard, Vancouver, ISO, and other styles
14

Zebadúa, Augusto. "Traitement du signal dans le domaine compressé et quantification sur un bit : deux outils pour les contextes sous contraintes de communication." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT085/document.

Full text
Abstract:
Monitoring physical phenomena with a network of (autonomous but communicating) sensors is highly constrained in energy consumption, mainly for data transmission. In this context, this thesis proposes signal-processing tools that reduce communications without compromising the accuracy of subsequent computations. The complexity of these methods is kept low, so that they consume only little additional energy. Our two building blocks are compression during signal acquisition (Compressive Sensing) and coarse quantization (1 bit). 
We first study the Compressed Correlator, an estimator that evaluates correlation functions, time delays, and spectral densities directly from compressed signals. Its performance is compared with that of the usual correlator: if the signal of interest has a narrow spectral support, the proposed estimator significantly outperforms the conventional one. Then, inspired by the coarse-quantization correlators of the 50s and 60s, two new correlators are studied: the 1-bit Compressed and the Hybrid Compressed correlators, which can also outperform their uncompressed counterparts. Finally, we show the applicability of these methods in the contexts of interest through the exploitation of real data.
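The principle behind correlating compressed signals can be shown with a toy sketch. This is an illustration under an i.i.d. Gaussian random projection, not the thesis's Compressed Correlator: for a projection matrix with entries of variance 1/m, the inner product of the two compressed vectors is an unbiased estimate of the inner product of the originals, with variance shrinking as 1/m.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4096, 512                                  # ambient / compressed dimensions
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # E[Phi.T @ Phi] = identity

t = np.arange(n)
x = np.cos(2 * np.pi * 0.01 * t)                  # narrowband test signals
y = np.cos(2 * np.pi * 0.01 * t + 0.3)

est = (Phi @ x) @ (Phi @ y)       # correlation estimated from compressed samples
ref = x @ y                       # uncompressed reference
rel_err = abs(est - ref) / abs(ref)
```

The 8x compression here costs only a few percent of relative error on the correlation, which is the kind of communication/accuracy trade-off the thesis quantifies.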
APA, Harvard, Vancouver, ISO, and other styles
15

Khalaf, Ziad. "Contributions à l'étude de détection des bandes libres dans le contexte de la radio intelligente." Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-00812666.

Full text
Abstract:
Wireless communication systems keep multiplying and have become ubiquitous. This growth causes an increase in the demand for spectral resources, which have become increasingly scarce. To solve this frequency-scarcity problem, Joseph Mitola III introduced, in 2000, the idea of dynamic spectrum allocation, coining the term "Cognitive Radio", which is widely expected to be the next Big Bang in future wireless communications [1]. In this work, we address the problem of spectrum sensing, i.e., detecting the presence of Primary Users in a licensed spectrum, in the context of cognitive radio. The objective of this work is to propose efficient detection methods with low complexity and/or short observation time, using a minimum of prior information about the signal to be detected. In the first part, we treat the problem of detecting a random signal in noise. Two main detection methods are used: energy detection (the radiometer) and cyclostationary detection. In our context, these methods are more complementary than competing. We propose a hybrid architecture for detecting free bands, which combines the simplicity of the radiometer with the robustness of cyclostationary detectors. Two detection methods based on this same architecture are proposed. Thanks to the adaptive character of the architecture, the detection evolves over time towards the complexity of the energy detector, with performance close to that of the cyclostationary detector or of the radiometer, depending on the method used and the operating environment. 
In a second part, we exploit the sparsity of the Cyclic Autocorrelation Function (CAF) to propose a new blind estimator based on compressed sensing for the Cyclic Autocorrelation Vector (CAV), a particular vector of the Cyclic Autocorrelation Function at a fixed lag. We show by simulation that this new estimator performs better than the classical, non-blind estimator under the same conditions and using the same number of samples. We use the proposed estimator to build two blind detectors that need fewer samples than the second-order temporal detector of [2], which is based on the classical CAF estimator. The first detector exploits only the sparsity of the CAV, while the second additionally exploits the symmetry of the CAV, allowing it to obtain better performance. Besides being blind, these two detectors outperform the non-blind detector of [2] when the number of samples is small.
APA, Harvard, Vancouver, ISO, and other styles
16

Wu, Yi-Ting, and 吳伊婷. "Fast Diffusion Spectrum MRI Technology using Dictionary-based Compressive Sensing." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/73249856285547040729.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Biomedical Engineering, academic year 103.

Diffusion Spectrum Imaging (DSI) is one of the diffusion MRI techniques and has the highest accuracy in resolving complex fiber orientations in the human brain. However, due to the large amount of data sampling and the resulting long scan time, its clinical feasibility has not yet been established for clinical MRI applications. To reduce the data sampling and accelerate the scan, a signal-processing approach that requires no additional hardware is highly desirable. The Compressive Sensing (CS) technique can handle large volumes of data, based on the principle of extracting the few large coefficients of a signal in a suitable basis. This technique has been widely employed in a variety of research fields, such as data mining, wireless network communication, and video and image processing. Although implementations of the CS technique for DSI have been proposed in previous studies, a systematic and quantitative analysis framework is still lacking. Therefore, this thesis aimed to establish a dictionary-based CS-DSI technique and a quantitative evaluation framework. We developed a multiple-slice dictionary-learning method and focused on investigating the improvement on white-matter structures. We also discussed the influence of DSI sequence parameters, such as the maximum b-value and the signal-to-noise ratio, on its performance. The multiple-slice learning framework is verified to have higher accuracy in reconstructing the probability distribution function and the orientation distribution function. We expect this thesis to provide useful information for facilitating the development of CS-DSI technology, as well as for utilizing this technique in neuroscience research and clinical applications.
APA, Harvard, Vancouver, ISO, and other styles
17

Yu, Zhuizhuan. "Digitally-Assisted Mixed-Signal Wideband Compressive Sensing." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9328.

Full text
Abstract:
Digitizing wideband signals requires very demanding analog-to-digital conversion (ADC) speed and resolution specifications. In this dissertation, a mixed-signal parallel compressive sensing system is proposed to realize the sensing of wideband sparse signals at a sub-Nyquist rate by exploiting the signal sparsity. The mixed-signal compressive sensing is realized with a parallel segmented compressive sensing (PSCS) front end, which not only can filter out the harmonic spurs that leak from the local random generator, but also provides a tradeoff between the sampling rate and the system complexity such that a practical hardware implementation is possible. Moreover, the signal randomization in the system is able to spread the spurious energy due to ADC nonlinearity across the signal bandwidth rather than concentrating it at a few frequencies, as is the case for a conventional ADC. This important new property relaxes the ADC SFDR requirement when sensing frequency-domain sparse signals. The mixed-signal compressive sensing system performance is greatly impacted by the accuracy of analog circuit components, especially with the scaling of CMOS technology. In this dissertation, the effects of circuit imperfections in the mixed-signal compressive sensing system based on the PSCS front end, such as finite settling time and timing uncertainty, are investigated in detail. An iterative background calibration algorithm based on LMS (Least Mean Square) is proposed, which is shown to effectively calibrate the errors due to these nonideal circuit factors. A low-speed prototype built with off-the-shelf components is presented. The prototype is able to sense sparse analog signals with up to 4 percent sparsity at 32 percent of the Nyquist rate. Many practical constraints that arose while building the prototype, such as circuit nonidealities, are addressed in detail, which provides good insights for a future high-frequency integrated circuit implementation. 
Based on that, a high-frequency sub-Nyquist-rate receiver exploiting parallel compressive sensing is designed and fabricated in IBM 90 nm CMOS technology, and measurement results are presented to show the capability of wideband compressive sensing at a sub-Nyquist rate. To the best of our knowledge, this prototype is the first reported integrated chip for wideband mixed-signal compressive sensing. The proposed prototype achieves 7-bit ENOB and a 3 GS/s equivalent sampling rate in simulation, assuming a state-of-the-art 0.5 ps jitter variance; its FOM beats that of state-of-the-art high-speed Nyquist ADCs by 2-3 times. The proposed mixed-signal compressive sensing system can be applied in various fields. In particular, its applications to wideband spectrum sensing for cognitive radios and to spectrum analysis in RF tests are discussed in this work.
APA, Harvard, Vancouver, ISO, and other styles
18

Yen, Chi-Feng, and 顏基峯. "A Study on Compressive Sensing With Modulated Wideband Converter and Its Application for Wideband Spectrum Sensing." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/m86fpx.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Communications Engineering, academic year 104.

The rapid development of wireless communications results in a great demand for radio spectrum. Unfortunately, the spectrum resource is limited; raising spectrum utilization is therefore one of the most important subjects for the next generation of cellular networks. To increase spectrum utilization, sensing the vacant bands of the spectrum for further use has been viewed as an effective solution in the literature. However, for wideband spectrum sensing, the hardware cost may be unaffordable for practical applications, since sampling at the full Nyquist rate is required for signal processing. The modulated wideband converter (MWC) is an analog front end that supports sub-Nyquist-rate sampling with the aid of compressive sensing (CS) in the design of the modulated waveform. In this thesis, we use the MWC to sample wideband signals at a sub-Nyquist rate and find the vacant bands with the orthogonal matching pursuit (OMP). Based on the optimal design of the sensing matrix for CS, we propose different modulated waveforms with amplitudes of ±1 and fixed-point real/complex values to achieve high accuracy in spectrum sensing. Besides, side-bands may occur when the sampling rate and the period of the modulated waveform in the MWC are not well matched to the wideband signals. Since the direct use of OMP with the MWC in the presence of side-bands reveals severe degradation of system performance, we also propose several modified OMP schemes for the MWC with side-bands. Simulation results indicate that the proposed modulated waveforms offer a good tradeoff between complexity and accuracy for spectrum sensing. The modified OMP schemes are also verified to mitigate the performance degradation due to side-bands.
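The OMP support-recovery step mentioned above can be sketched as follows. This is a generic textbook OMP, not the thesis's modified variants for the MWC with side-bands; the matrix sizes and function name `omp` are illustrative.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse x from y = Phi @ x."""
    residual = y.astype(float).copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected support, then update the residual
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x = np.zeros(Phi.shape[1])
    x[support] = coeffs
    return x

# demo: 3-sparse signal, 50 random measurements of a length-100 vector
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 100))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm sensing columns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 0.8]
y = Phi @ x_true
x_hat = omp(Phi, y, k=3)
```

In the MWC setting, the recovered support plays the role of the occupied spectral bands, so the complement of the support gives the vacant bands.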
APA, Harvard, Vancouver, ISO, and other styles
19

Sadiq, Sadiq Jafar. "Application of Compressive Sensing and Belief Propagation for Channel Occupancy Detection in Cognitive Radio Networks." Thesis, 2011. http://hdl.handle.net/1807/29609.

Full text
Abstract:
Wide-band spectrum sensing is an approach for finding spectrum holes within a wideband signal with less complexity/delay than the conventional approaches. In this thesis, we propose four different algorithms for detecting the holes in a wide-band spectrum and finding the sparsity level of compressive signals. The first algorithm estimates the spectrum in an efficient manner and uses this estimation to find the holes. The second algorithm detects the spectrum holes by reconstructing channel energies instead of reconstructing the spectrum itself. In this method, the signal is fed into a number of filters. The energies of the filter outputs are used as the compressed measurement to reconstruct the signal energy. The third algorithm employs two information theoretic algorithms to find the sparsity level of a compressive signal and the last algorithm employs belief propagation for detecting the sparsity level.
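The idea behind the second algorithm, detecting spectrum holes from channel energies rather than from the reconstructed spectrum itself, can be sketched without the compressive front end as follows. The channelization, thresholds, and function names here are illustrative, not the thesis's filter-bank design.

```python
import numpy as np

def channel_energies(x, n_channels):
    """Energy per equal-width channel over [0, fs/2), computed from the FFT of x."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    edges = np.linspace(0, len(psd), n_channels + 1, dtype=int)
    return np.array([psd[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])

def spectrum_holes(energies, thresh):
    """Indices of channels whose energy falls below the threshold."""
    return np.flatnonzero(energies < thresh)

# demo: two tones occupying channels 1 and 3 of 4 (fs = 1000 Hz, 1 s of samples)
fs, n = 1000, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 180 * t) + np.sin(2 * np.pi * 420 * t)
e = channel_energies(x, n_channels=4)
holes = spectrum_holes(e, thresh=e.max() / 10)
```

In the compressive version, the per-channel energies would be reconstructed from the compressed measurements instead of computed from Nyquist-rate samples, but the thresholding stage is the same.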
APA, Harvard, Vancouver, ISO, and other styles
20

Adhikari, Bijaya. "Architecture and Design of Wide Band Spectrum Sensing Receiver for Cognitive Radio Systems." Thesis, 2014. http://etd.iisc.ac.in/handle/2005/2953.

Full text
Abstract:
To explore spectral opportunities in the wideband regime for cognitive radio, a wideband spectrum-sensing receiver is needed. Current wideband receiver architectures require a wideband analog-to-digital converter (ADC) to sample the signal, but state-of-the-art ADCs are limited in power and sampling rate, so alternative solutions must be explored. Compressive sampling (CS) is one such data-acquisition method: a cognitive radio signal that is sparse in the frequency domain can be sampled at a sub-Nyquist rate using low-rate ADCs. To relax the receiver's performance requirements, the Modulated Wideband Converter (MWC) architecture, a sub-Nyquist sampling method, can be used. The circuit design in this thesis covers signals within a frequency range of 500 MHz to 2.1 GHz, a total bandwidth of 1600 MHz. Using 8 parallel branches with a channel trading factor of 11, an effective sampling rate of 550 MHz is achieved for successful support recovery of a multi-band input signal of size N = 12.
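One consistent reading of the reported numbers, assuming the standard MWC accounting in which m physical branches sampling at q·fp each behave like m·q channels at rate fp:

```python
m = 8            # physical sampling branches
q = 11           # channel trading (collapsing) factor
f_total = 550e6  # reported effective sampling rate, Hz

# m branches sampling at q*fp each behave like m*q channels at rate fp.
fp = f_total / (m * q)   # per-equivalent-channel rate: 6.25 MHz
nyquist = 2 * 2.1e9      # Nyquist rate for a 2.1 GHz upper band edge
print(fp / 1e6, round(nyquist / f_total, 1))
```

Under this reading, the design samples at roughly an eighth of the 4.2 GHz Nyquist rate implied by the 2.1 GHz upper band edge.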
APA, Harvard, Vancouver, ISO, and other styles
22

Yazicigil, Rabia Tugce. "Compressive Sampling as an Enabling Solution for Energy-Efficient and Rapid Wideband RF Spectrum Sensing in Emerging Cognitive Radio Systems." Thesis, 2016. https://doi.org/10.7916/D8571BXM.

Full text
Abstract:
Wireless systems have become an essential part of every sector of the national and global economy. In addition to existing commercial systems including GPS, mobile cellular, and WiFi communications, emerging systems like video over wireless, the Internet of Things, and machine-to-machine communications are expected to increase mobile wireless data traffic by several orders of magnitude over the coming decades, while natural resources like energy and radio spectrum remain scarce. The projected growth of the number of connected nodes into the trillions in the near term, and increasing user demand for instantaneous, over-the-air access to large volumes of content, will require a 1000-fold increase in network wireless data capacity by 2020. Spectrum is the lifeblood of these future wireless networks, and the 'data storm' driven by emerging technologies will lead to a pressing 'artificial' spectrum scarcity. Cognitive radio is a paradigm proposed to overcome the existing challenge of underutilized spectrum. Emerging cognitive radio systems employing multi-tiered, shared-spectrum access are expected to deliver superior spectrum efficiency over existing scheduled-access systems; they have several device categories (3 or more tiers) with different access privileges. We focus on lower-tiered 'smart' devices that evaluate the spectrum dynamically and opportunistically use the underutilized spectrum. These 'smart' devices require spectrum sensing for incumbent detection and interferer avoidance. Incumbent detection will rely on database lookup or narrowband high-sensitivity sensing. Integrated interferer detectors, on the other hand, need to be fast, wideband, and energy efficient, while requiring only moderate sensitivity. These future 'smart' devices operating in small-cell environments will need to rapidly (in tens of microseconds) detect a few (e.g., 3 to 6) strong interferers within roughly a 1 GHz span and accordingly reconfigure their hardware resources or request adjustments to their wireless connection consisting of primary and secondary links in licensed and unlicensed spectrum.
Compressive sampling (CS), an evolutionary sensing/sampling paradigm that changes the perception of sampling, has been used extensively for image reconstruction. It has been shown that a single-pixel camera exploiting CS can obtain an image with a single detection element while measuring the image fewer times than the number of pixels, under a prior assumption of sparsity. We exploited CS in the presented works to take a 'snapshot' of the spectrum with low energy consumption and high frequency resolution. Compressive sampling breaks the fixed trade-off between scan time, resolution bandwidth, hardware complexity, and energy consumption; traditional spectrum-scanning solutions, in contrast, have constant energy consumption across architectures to first order and a fixed trade-off between scan time and resolution bandwidth. Compressive sampling enables energy-efficient, rapid, and wideband spectrum sensing with high frequency resolution at the expense of degraded instantaneous dynamic range due to noise folding. We have developed a quadrature analog-to-information converter (QAIC), a novel CS rapid spectrum sensing technique for band-pass signals. Our first wideband, energy-efficient, and rapid interferer-detector end-to-end system with a QAIC senses a wideband 1 GHz span with a 20 MHz resolution bandwidth and successfully detects up to 3 interferers in 4.4 μs. The QAIC offers a 50x faster scan time than traditional sweeping spectrum scanners and a 6.3x reduction in aggregate sampling rate relative to traditional concurrent Nyquist-rate approaches.
The QAIC is estimated to be two orders of magnitude more energy efficient than traditional spectrum scanners/sensors and one order of magnitude more energy efficient than existing low-pass CS spectrum sensors. We implemented a CS time-segmented quadrature analog-to-information converter (TS-QAIC) that extends the physical hardware through time segmentation (e.g., 8 physical I/Q branches to 16 effective I/Q branches) and employs adaptive thresholding to react to signal conditions without additional silicon cost and complexity. The TS-QAIC rapidly detects up to 6 interferers in the PCAST spectrum between 2.7 and 3.7 GHz with a 10.4 μs sensing time for a 20 MHz RBW with only 8 physical I/Q branches, while consuming 81.2 mW from a 1.2 V supply. The presented rapid sensing approaches enable system scaling in multiple dimensions, such as ADC bits, the number of samples, and the number of branches, to meet user performance goals (e.g., the number of detectable interferers, energy consumption, sensitivity, and scan time). We envision that compressive sampling opens promising avenues towards energy-efficient and rapid sensing architectures for future cognitive radio systems utilizing multi-tiered, shared-spectrum access.
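The 50x scan-time figure above follows directly from the span and resolution bandwidth, assuming a sweeping scanner dwells one QAIC-snapshot time per 20 MHz bin:

```python
span = 1e9   # sensed span, Hz
rbw = 20e6   # resolution bandwidth, Hz
bins = int(span / rbw)         # 50 frequency bins across the span

qaic_scan = 4.4e-6             # reported QAIC sensing time, s
sweep_scan = bins * qaic_scan  # sweep visiting the bins one at a time
print(bins, sweep_scan / qaic_scan)  # 50 bins, hence the 50x speedup
```

The equal-dwell assumption is ours; real sweeping scanners trade dwell time against sensitivity, so the exact factor depends on the settling and integration times of the swept front end.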
APA, Harvard, Vancouver, ISO, and other styles
23

Wagadarikar, Ashwin Ashok. "Compressive Spectral and Coherence Imaging." Diss., 2010. http://hdl.handle.net/10161/2452.

Full text
Abstract:
This dissertation describes two computational sensors that were used to demonstrate applications of generalized sampling of the optical field. The first sensor was an incoherent imaging system designed for compressive measurement of the power spectral density in the scene (spectral imaging). The other sensor was an interferometer used to compressively measure the mutual intensity of the optical field (coherence imaging) for imaging through turbulence. Each sensor made anisomorphic measurements of the optical signal of interest, and digital post-processing of these measurements was required to recover the signal. The optical hardware and post-processing software were co-designed to permit acquisition of the signal of interest with sub-Nyquist rate sampling, given the prior information that the signal is sparse or compressible in some basis. Compressive spectral imaging was achieved by a coded aperture snapshot spectral imager (CASSI), which used a coded aperture and a dispersive element to modulate the optical field and capture a 2D projection of the 3D spectral image of the scene in a snapshot. Prior information about the scene, such as piecewise smoothness of objects, could be enforced by numerical estimation algorithms to recover an estimate of the spectral image from the snapshot measurement. Hypothesizing that turbulence between the scene and CASSI would introduce spectral diversity of the point spread function, CASSI's snapshot spectral imaging capability could be used to image objects in the scene through the turbulence. However, no turbulence-induced spectral diversity of the point spread function was observed experimentally. Thus, coherence functions, which are multi-dimensional functions that completely determine optical fields observed by intensity detectors, were considered. These functions have previously been used to image through turbulence after extensive and time-consuming sampling of such functions. Thus, compressive coherence imaging was attempted as an alternative means of imaging through turbulence. Compressive coherence imaging was demonstrated by using a rotational shear interferometer to measure just a 2D subset of the 4D mutual intensity, a coherence function that captures the optical field correlation between all the pairs of points in the aperture. By imposing a sparsity constraint on the possible distribution of objects in the scene, both the object distribution and the isoplanatic phase distortion induced by the turbulence could be estimated with the small number of measurements made by the interferometer.
Dissertation
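The CASSI snapshot described above (mask, disperse, integrate) can be sketched with a toy forward model. The one-pixel-per-band shear and the random binary mask are illustrative assumptions, not the dissertation's actual optical parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, L = 16, 16, 8                # spatial size and number of spectral bands
cube = rng.random((H, W, L))       # toy spectral scene f(x, y, lambda)
mask = rng.integers(0, 2, (H, W))  # binary coded aperture T(x, y)

# Snapshot: each band is masked by the coded aperture, sheared by the
# disperser (here: one pixel of horizontal shift per band), and summed
# on the 2D detector -- a single 2D projection of the 3D data cube.
detector = np.zeros((H, W + L - 1))
for k in range(L):
    detector[:, k:k + W] += mask * cube[:, :, k]

print(detector.shape)
```

The detector records far fewer values than the cube contains, which is why a sparsity or piecewise-smoothness prior is needed to invert this linear map.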
APA, Harvard, Vancouver, ISO, and other styles
24

Fernandez, Christy Ann. "Computational spectral microscopy and compressive millimeter-wave holography." Diss., 2010. http://hdl.handle.net/10161/2406.

Full text
Abstract:
This dissertation describes three computational sensors. The first sensor is a scanning multi-spectral aperture-coded microscope containing a coded aperture spectrometer that is vertically scanned through a microscope intermediate image plane. The spectrometer aperture code spatially encodes the object spectral data, and nonnegative least squares inversion combined with a series of reconfigured two-dimensional (2D spatial-spectral) scanned measurements enables three-dimensional (3D) (x, y, λ) object estimation. The second sensor is a coded aperture snapshot spectral imager that employs a compressive optical architecture to record a spectrally filtered projection of a 3D object data cube onto a 2D detector array. Two nonlinear and adapted TV-minimization schemes are presented for 3D (x, y, λ) object estimation from a 2D compressed snapshot. Both sensors are interfaced to laboratory-grade microscopes and applied to fluorescence microscopy. The third sensor is a millimeter-wave holographic imaging system that is used to study the impact of 2D compressive measurement on 3D (x, y, z) data estimation. Holography is a natural compressive encoder since a 3D parabolic slice of the object band volume is recorded onto a 2D planar surface. An adapted nonlinear TV-minimization algorithm is used for 3D tomographic estimation from a 2D and a sparse 2D hologram composite. This strategy aims to reduce the scan-time costs associated with millimeter-wave image acquisition using a single-pixel receiver.
Dissertation
APA, Harvard, Vancouver, ISO, and other styles
25

Holloway, Jason. "Increasing temporal, structural, and spectral resolution in images using exemplar-based priors." Thesis, 2013. http://hdl.handle.net/1911/71966.

Full text
Abstract:
In the past decade, camera manufacturers have offered smaller form factors, smaller pixel sizes (leading to higher resolution images), and faster processing chips to increase the performance of consumer cameras. However, these conventional approaches have failed to capitalize on the spatio-temporal redundancy inherent in images, nor have they adequately provided a solution for finding 3D point correspondences for cameras sampling different bands of the visible spectrum. In this thesis, we pose the following question: given the repetitious nature of image patches, and appropriate camera architectures, can statistical models be used to increase temporal, structural, or spectral resolution? While many techniques have been suggested to tackle individual aspects of this question, the proposed solutions either require prohibitively expensive hardware modifications and/or rest on overly simplistic assumptions about the geometry of the scene. We propose a two-stage solution to facilitate image reconstruction: (1) design a linear camera system that optically encodes scene information, and (2) recover full scene information using prior models learned from statistics of natural images. By leveraging the tendency of small regions to repeat throughout an image or video, we are able to learn prior models from patches pulled from exemplar images. The quality of this approach is demonstrated for two application domains: using low-speed video cameras for high-speed video acquisition, and multi-spectral fusion using an array of cameras. We also investigate a conventional approach for finding 3D correspondences that enables a generalized assorted array of cameras to operate in multiple modalities, including multi-spectral, high-dynamic-range, and polarization imaging of dynamic scenes.
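The exemplar-based priors described above rest on collecting many overlapping patches from example images. A minimal patch-extraction sketch follows; the patch size and stride are arbitrary choices here, not the thesis's settings:

```python
import numpy as np

def extract_patches(img, p=8, stride=4):
    """Collect overlapping p x p patches, the raw material for exemplar priors."""
    H, W = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, stride)
                     for j in range(0, W - p + 1, stride)])

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
patches = extract_patches(img)   # (49, 64): 49 patches of 64 pixels each
print(patches.shape)
```

A statistical model (e.g., a dictionary or mixture model) fit to such patch collections can then serve as the prior in the second stage of the reconstruction pipeline.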
APA, Harvard, Vancouver, ISO, and other styles