Dissertations / Theses on the topic 'Computer architectures; Digital signal processing'

Consult the top 50 dissertations / theses for your research on the topic 'Computer architectures; Digital signal processing.'

1. Dung, Lan-Rong. "VHDL-based conceptual prototyping of embedded DSP architectures." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/14780.

2. Pourbigharaz, Fariborz. "An investigation into efficient interfacing strategies for VLSI arithmetic processors based on residue number systems utilising diminished and augmented radix-2 moduli." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282927.

3. Song, William S. "A fault-tolerant multiprocessor architecture for digital signal processing applications." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14427.

Note: Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989. Includes bibliographical references. Partly funded by the US Air Force Office of Scientific Research (AFOSR-86-0164) and by Draper Laboratories.

4. Curtis, Bryce Allen. "A special instruction set multiple chip computer for DSP : architecture and compiler design." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15736.

5. Hodges, Christopher Joseph Munroe. "Skewed single instruction multiple data computation." Thesis, Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/15693.

6. Mauersberger, Gary S. "The design and hardware evaluation of an advanced 16-bit, low-power, high performance microcomputer system for digital signal processing." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/14006.

7. Dworaczyk Wiltshire, Austin Aaron. "CUDA Enhanced Filtering in a Pipelined Video Processing Framework." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1072.

Abstract:
The processing of digital video has long been a significant computational task for modern x86 processors. With every video frame composed of one to three planes, each consisting of a two-dimensional array of pixel data, and a video clip comprising thousands of such frames, the sheer volume of data is significant. With the introduction of new high definition video formats such as 4K or stereoscopic 3D, the volume of uncompressed frame data is growing ever larger. Modern CPUs offer performance enhancements for processing digital video through SIMD instructions such as SSE2 or AVX. However, even with these instruction sets, CPUs are limited by their inherently sequential design, and can only operate on a handful of bytes in parallel. Even processors with a multitude of cores achieve only an elementary level of parallelism. GPUs provide an alternative, massively parallel architecture. GPUs differ from CPUs by providing thousands of throughput-oriented cores, instead of a maximum of tens of generalized “good enough at everything” x86 cores. The GPU’s throughput-oriented cores are far more adept at handling large arrays of pixel data, as many video filtering operations can be performed independently. This computational independence allows pixel processing to scale across hundreds or even thousands of device cores. This thesis explores the utilization of GPUs for video processing, and evaluates the advantages and caveats of porting the modern video filtering framework, Vapoursynth, over to running entirely on the GPU. Compute-heavy GPU-enabled video processing results in up to a 108% speedup over an SSE2-optimized, multithreaded CPU implementation.
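As an illustration of the per-pixel independence this abstract relies on, here is a minimal Python sketch (not from the thesis) of a pointwise brightness filter: every output pixel depends only on its own input pixel, so each loop iteration could map to one GPU thread.

```python
def brighten(frame, offset):
    """Clamp-add a constant to every pixel of a greyscale frame.

    Each output pixel depends only on the corresponding input pixel,
    so the loops are trivially parallel: on a GPU, each (row, col)
    position could be handled by an independent thread.
    """
    return [[min(255, max(0, p + offset)) for p in row] for row in frame]

frame = [[0, 100, 200], [50, 250, 10]]
print(brighten(frame, 20))  # [[20, 120, 220], [70, 255, 30]]
```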

8. Smith, Paul Devon. "An Analog Architecture for Auditory Feature Extraction and Recognition." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4839.

Abstract:
Speech recognition systems have been implemented using a wide range of signal processing techniques, including neuromorphic/biologically inspired and digital signal processing (DSP) techniques. Neuromorphic/biologically inspired techniques, such as silicon cochlea models, are based on fairly simple yet highly parallel computation and/or computational units, while DSP is based on block transforms and statistical or error-minimization methods. Essential to each of these techniques is the first stage of extracting meaningful information from the speech signal, known as feature extraction. This can be done using biologically inspired techniques such as silicon cochlea models, or with techniques that begin from a model of speech production and then try to separate the vocal tract response from an excitation signal. Even within each of these approaches there are multiple techniques, including cepstrum filtering, which belongs to the class of homomorphic signal processing, and FFT-based predictive approaches. The underlying reality is that multiple techniques have attacked the speech recognition problem, but the problem is still far from solved. The techniques shown to have the best recognition rates use cepstrum coefficients for feature extraction and hidden Markov models for pattern recognition. The presented research develops an analog system, based on programmable analog array technology, that can perform the initial stages of auditory feature extraction and recognition before passing information to a digital signal processor. The goal is a low-power system that can be fully contained on one or more integrated circuit chips. Results show that it is possible to realize advanced filtering techniques such as cepstrum filtering and vector quantization in analog circuitry.
Prior to this work, previous applications of analog signal processing have focused on vision, cochlea models, anti-aliasing filters and other single component uses. Furthermore, classic designs have looked heavily at utilizing op-amps as a basic core building block for these designs. This research also shows a novel design for a Hidden Markov Model (HMM) decoder utilizing circuits that take advantage of the inherent properties of subthreshold transistors and floating-gate technology to create low-power computational blocks.
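The cepstral feature extraction named above can be stated in a few lines. This is a generic real-cepstrum computation (naive DFT, purely illustrative; it is not the analog circuitry of the thesis): the inverse transform of the log magnitude spectrum.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def real_cepstrum(x, eps=1e-12):
    """Real cepstrum: inverse DFT of the log magnitude spectrum."""
    N = len(x)
    log_mag = [math.log(abs(X) + eps) for X in dft(x)]
    # inverse DFT of a real, even sequence; keep the real part
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * k * n / N).real
                for k in range(N)) / N
            for n in range(N)]

# An impulse has a flat magnitude spectrum, so its cepstrum is (near) zero.
print([round(c, 6) for c in real_cepstrum([1.0, 0.0, 0.0, 0.0])])  # [0.0, 0.0, 0.0, 0.0]
```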

9. Wells, Ian. "Digital signal processing architectures for speech recognition." Thesis, University of the West of England, Bristol, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294705.

10. Wang, Dalan. "Parallel architectures for signal processing." Thesis, University of Aberdeen, 1991. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU034219.

Abstract:
This thesis presents the development of parallel architectures and algorithms for signal processing techniques, particularly for application to ultrasonic surface texture measurement. The background and context of this project is the real need to perform high speed signal processing on ultrasonic echoes used to extract information on texture properties of surfaces. Earlier investigation provided a solution by the nonlinear Maximum Entropy Method (MEM) which needs to be implemented at high speed and high performance. A review of parallel architectures for signal processing and digital signal processors is given. The aim is to introduce ways in which signal processing algorithms can be implemented at high speed. Both hardware and software have been developed in the project, and the signal processing system and parallel implementations of the algorithms are presented in detail. The signal processing system employs a parallel architecture using transputers. A feature of the design is that a floating-point digital signal processor is incorporated into a transputer array so that the performance of the system can be significantly enhanced. The design, testing and construction of the hardware system are discussed in detail. An investigation of some parallel DSP algorithms, including matrix multiplication, the Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT), and their implementations based on the transputer array are discussed in order to choose an appropriate FFT implementation for our application. Several implementations of the deconvolution algorithms, including the Wiener-Hopf filter, the Maximum Entropy Method (MEM) and Projection Onto Convex Sets (POCS) are developed, which can benefit from the use of concurrency. 
A development of the MEM implementation based on the transputer array is to use the DSP as a subsystem for FFT calculations; this dual-system environment provides a significant resource that is used to process ultrasonic echoes to determine surface roughness. Finally, the performance of the Projection Onto Convex Sets (POCS) algorithm in the field of ultrasonic surface determination, and its comparison with the Wiener-Hopf filter and the MEM, are presented using simulated and real data. It is concluded that the parallel architecture provides a valuable contribution to high speed implementations of signal processing techniques.
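The Wiener-Hopf deconvolution the thesis compares against can be illustrated generically (a textbook frequency-domain Wiener filter, not the transputer implementation): the blurred signal's spectrum is divided by the system response, regularised by a noise-to-signal term.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N).real
            for n in range(N)]

def wiener_deconvolve(y, h, nsr=0.0):
    """Frequency-domain Wiener deconvolution of y = x * h (circular).

    nsr is the noise-to-signal power ratio; nsr = 0 reduces to inverse filtering.
    """
    Y, H = dft(y), dft(h)
    X = [Yk * Hk.conjugate() / (abs(Hk) ** 2 + nsr) for Yk, Hk in zip(Y, H)]
    return idft(X)

# Noise-free check: blur a sparse signal with h, then recover it exactly.
x = [1.0, 0.0, 0.0, 2.0]
h = [1.0, 0.5, 0.0, 0.0]                                  # zero-padded to signal length
y = idft([a * b for a, b in zip(dft(x), dft(h))])         # circular convolution
print([round(v, 6) + 0.0 for v in wiener_deconvolve(y, h)])  # [1.0, 0.0, 0.0, 2.0]
```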

11. Yung, H. C. "Recursive and concurrent VLSI architectures for digital signal processing." Thesis, University of Newcastle Upon Tyne, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.481423.

12. Wacey, Graham. "Algorithms and architectures for primitive operator digital signal processing." Thesis, University of Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388043.

13. Tell, Eric. "Design of Programmable Baseband Processors." Doctoral thesis, Linköping: Univ, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4377.

14. Acharyya, Amit. "Resource constrained signal processing algorithms and architectures." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/179167/.

15. Armstrong, Richard Paul. "High-performance signal processing architectures for digital aperture array telescopes." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.560917.

Abstract:
An instrument with the ability to image neutral atomic hydrogen (HI) to cosmic redshift will allow the fundamental properties of the Universe to be more precisely determined; in particular the distribution, composition, and evolutionary history of its matter and energy. The Square Kilometre Array (SKA) is a radio survey telescope conceived with this aim. It will have the observational potential for much further fundamental science, including strong field tests of gravity and general relativity, revealing the origin and history of cosmological re-ionisation and magnetism, direct measures of gravitational radiation, and surveys of the unmapped Universe. And it is the advance of instrumentation that will enable it. This thesis makes three central contributions to radio instrumentation. Digital aperture arrays are a collector technology proposed for the key low- and mid- frequency ranges targeted by the SKA that have the potential to provide both the collecting area and field of view required for deep, efficient all-sky surveys of HI. The 2-Polarisations, All Digital (2-PAD) aperture array is an instrumental pathfinder for the SKA, novel in being a densely-spaced, wide-band aperture array that performs discrete signal filtering entirely digitally. The digital design of the 2-PAD radio receiver and the deployment of the aperture array and signal processing system at Jodrell Bank Radio Observatory is detailed in this thesis. The problem of element anisotropy in small arrays, the atomic unit of the SKA station array, ultimately affects beam quality. Addressing this issue, a metaheuristic digital beam-shape optimisation technique is applied to a small beamformed array, and is shown to outperform traditional analytic solutions. Digital processing for aperture arrays is challenging. A qualitative framework shows that energy, computational and communication requirements demand optimised processing architectures. 
A quantitative model reveals the physical limitations on architecture choice. An energy-optimised architecture, the IBM BIT integer array processor, is investigated in detail; a cycle-accurate architectural simulator and programming language are developed and used to build signal processing algorithms on the array architecture.
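The digital beamforming such aperture arrays perform can be sketched generically (integer-sample steering delays only; the 2-PAD hardware and its optimisation are far more involved). Each element's signal is re-aligned by its steering delay and the array is summed, so a wavefront from the steered direction adds coherently.

```python
import math

def delay_and_sum(element_signals, steer_delays):
    """Align each element signal by its steering delay (whole samples) and sum.

    element_signals[m][i] is sample i on element m; steer_delays[m] is the
    advance, in samples, that re-aligns element m with element 0.
    """
    n = len(element_signals[0])
    out = [0.0] * n
    for sig, d in zip(element_signals, steer_delays):
        for i in range(n):
            if 0 <= i + d < n:
                out[i] += sig[i + d]
    return out

# A plane wave hitting a 4-element line array with a 1-sample delay per element:
n, m_elems = 64, 4
s = [math.sin(2 * math.pi * 0.05 * i) for i in range(n)]
elements = [[s[i - m] if i - m >= 0 else 0.0 for i in range(n)]
            for m in range(m_elems)]
beam = delay_and_sum(elements, [0, 1, 2, 3])
# Away from the edges, the 4 aligned copies add coherently: beam[i] is approx. 4 * s[i]
print(round(beam[30], 6), round(4 * s[30], 6))
```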

16. Woods, Roger. "High performance VLSI architectures for recursive filtering." Thesis, Queen's University Belfast, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335619.

17. Magrath, Anthony J. "Algorithms and architectures for high resolution Sigma-Delta converters." Thesis, King's College London (University of London), 1996. https://kclpure.kcl.ac.uk/portal/en/theses/algorithms-and-architectures-for-high-resolution-sigmadelta-converters(11c20a7d-f272-4cc9-8a77-eab246009f6f).html.

18. Benaissa, Mohammed. "VLSI algorithms, architectures and design for the Fermat Number Transform." Thesis, University of Newcastle Upon Tyne, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.254020.

19. Linton, Ken N. "Digital mixing consoles : parallel architectures and taskforce scheduling strategies." Thesis, Durham University, 1995. http://etheses.dur.ac.uk/5371/.

Abstract:
This thesis is concerned specifically with the implementation of large-scale professional digital mixing consoles (DMCs). The design of such multi-DSP audio products is extremely challenging: one cannot simply lash together n DSPs and obtain n-times the performance of a single device. Multiprocessor (M-P) models developed here show that topology and IPC mechanisms have critical design implications. Alternative processor technologies are investigated with respect to the requirements of DMC architectures. An extensive analysis of M-P topologies is undertaken using the metrics provided by the TPG tool. Novel methods supporting DSP message-passing connectivity lead to the development of a hybrid audio M-P (HYMIPS) employing these techniques. A DMC model demonstrates the impact of task allocation on ASP M-P architectures. Five application-specific heuristics and four static-labelling schemes are developed for scheduling console taskforces on M-Ps. An integrated research framework and DCS engine enable scheduling strategies to be analysed with regard to the DMC problem domain. Three scheduling algorithms (CPM, DYN and AST) and three IPC mechanisms (FWE, NSL and NML) are investigated. Dynamic-labelling strategies and mix-bus granularity issues are further studied in detail. To summarise, this thesis elucidates those topologies, construction techniques and scheduling algorithms appropriate to professional DMC systems.

20. Harris, Fred. "A Fresh View of Digital Signal Processing for Software Defined Radios: Part I." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/606343.

Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
Digital signal processing has inexorably been woven into the fabric of every function performed in a modern radio communication system. In the rush to the marketplace, we have fielded many DSP designs based on analog prototype solutions containing legacy compromises appropriate for the technology of a time past. As we design the next generation radio we pause to examine and review past solutions to past radio problems. In this review we discover a number of DSP design methods and perspectives that lead to cost and performance advantages for use in the next generation radio.

21. Elnaggar, Ayman Ibrahim. "Scalable parallel VLSI architectures and algorithms for digital signal and video processing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0032/NQ27134.pdf.

22. Saleh, R. A. "Algorithms and architectures using the number theoretic transform for digital signal processing." Thesis, University of Kent, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333014.

23. Nibouche, O. "High performance computer arithmetic architectures for image and signal processing applications." Thesis, Queen's University Belfast, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395217.

24. Ekstam Ljusegren, Hannes, and Hannes Jonsson. "Parallelizing Digital Signal Processing for GPU." Thesis, Linköpings universitet, Programvara och system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167189.

Abstract:
Because of the increasing importance of signal processing in today's society, there is a need to easily experiment with new ways to process signals. Usually, fast digital signal processing is done with special-purpose hardware that is difficult to develop for. GPUs pose an alternative for fast digital signal processing. The work in this thesis is an analysis and implementation of a GPU version of a digital signal processing chain provided by SAAB. Through an iterative process of development and testing, a final implementation was achieved. Two benchmarks, both comprised of 4.2 M test samples, were made to compare the CPU implementation with the GPU implementation. The benchmarks were run on three different platforms: a desktop computer, an NVIDIA Jetson AGX Xavier and an NVIDIA Jetson TX2. The results show that the parallelized version can reach several orders of magnitude higher throughput than the CPU implementation.

25. Luo, Chenchi. "Non-uniform sampling: algorithms and architectures." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45873.

Abstract:
Modern signal processing applications emerging in the telecommunication and instrumentation industries have placed an increasing demand on ADCs for higher speed and resolution. The most fundamental challenge of such progress lies at the heart of classic signal processing: the Shannon-Nyquist sampling theorem, which states that, when sampling uniformly, there is no way to increase the upper frequency in the signal spectrum and still unambiguously represent the signal except by raising the sampling rate. This thesis is dedicated to exploring ways to break through the Shannon-Nyquist sampling rate by applying non-uniform sampling techniques. Time interleaving is probably the most intuitive way to parallelize the uniform sampling process in order to achieve a higher sampling rate. Unfortunately, channel mismatches make the time-interleaved ADC (TIADC) system an instance of a recurrent non-uniform sampling system whose non-uniformities are detrimental to the performance of the system and need to be calibrated. Accordingly, this thesis proposes a flexible and efficient architecture to compensate for the channel mismatches in the TIADC system. As a key building block in the calibration architecture, the design of the Farrow-structured adjustable fractional delay (FD) filter has been investigated in detail. A new modified Farrow structure is proposed to design adjustable FD filters that are optimized for a given range of bandwidths and fractional delays. The application of the Farrow structure is not limited to the design of adjustable fractional delay filters: it can also be used to implement adjustable lowpass, highpass and bandpass filters as well as adjustable multirate filters. This thesis further extends the Farrow structure to the design of filters with adjustable polynomial phase responses. Inspired by the theory of compressive sensing, another contribution of this thesis is to use randomization as a means to overcome the limit of the Nyquist rate.
This thesis investigates the impact of random sampling intervals or jitters on the power spectrum of the sampled signal. It shows that the aliases of the original signal can be well shaped by choosing an appropriate probability distribution of the sampling intervals or jitters such that aliases can be viewed as a source of noise in the signal power spectrum. A new theoretical framework has been established to associate the probability mass function of the random sampling intervals or jitters with the aliasing shaping effect. Based on the theoretical framework, this thesis proposes three random sampling architectures, i.e., SAR ADC, ramp ADC and level crossing ADC, that can be easily implemented based on the corresponding standard ADC architectures. Detailed models and simulations are established to verify the effectiveness of the proposed architectures. A new reconstruction algorithm called the successive sine matching pursuit has also been proposed to recover a class of spectrally sparse signals from a sparse set of non-uniform samples onto a denser uniform time grid so that classic signal processing techniques can be applied afterwards.
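The Farrow structure named in the abstract can be sketched for the classic cubic-Lagrange case (a standard textbook variant, not the thesis's modified design): four fixed FIR branch filters are combined by a Horner evaluation in the fractional delay d, so only d changes when the delay is adjusted.

```python
# Branch-filter coefficients B[m][k]: tap k of the d^m branch, for the
# 4-tap cubic Lagrange interpolator over samples x[i-1] .. x[i+2].
B = [
    [0.0,   1.0,  0.0,  0.0],    # d^0
    [-1/3, -1/2,  1.0, -1/6],    # d^1
    [1/2,  -1.0,  1/2,  0.0],    # d^2
    [-1/6,  1/2, -1/2,  1/6],    # d^3
]

def farrow_interp(x, t):
    """Evaluate x at fractional position t = i + d, with 0 <= d < 1."""
    i, d = int(t), t - int(t)
    taps = x[i - 1:i + 3]                                 # x[i-1] .. x[i+2]
    v = [sum(b * s for b, s in zip(row, taps)) for row in B]
    y = v[3]                                              # Horner's rule in d
    for m in (2, 1, 0):
        y = y * d + v[m]
    return y

# Cubic Lagrange interpolation is exact on a ramp:
print(round(farrow_interp([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], 2.5), 9))  # 2.5
```

The design choice the Farrow form captures: the branch filters are fixed hardware, and retuning the delay is just re-evaluating a polynomial in d, which is what makes the filter "adjustable" at run time.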

26. Papenfuss, Frank. "Digital signal processing of nonuniform sampled signals: contributions to algorithms & hardware architectures." Aachen: Shaker, 2007. http://d-nb.info/987695959/04.

27. Assef, Amauri Amorin. "Arquitetura de hardware multicanal reconfigurável com excitação multinível para desenvolvimento e testes de novos métodos de geração de imagens por ultrassom" [Reconfigurable multichannel hardware architecture with multilevel excitation for the development and testing of new ultrasound imaging methods]. Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/889.

Abstract:
UTFPR; CNPq; CAPES; Fundação Araucária; Ministério da Saúde
Medical ultrasound (US) scanners are amongst the most sophisticated signal processing machines in use today. Even with the recent advances in electronic technology, their typical architecture is often "closed" and does not fit the requirements of flexibility and RF data access for the development and testing of new modalities and US techniques. This work presents the development of a novel modular hardware architecture (front-end), based on FPGAs (Field-Programmable Gate Arrays), and software (back-end), PC-based or DSP-based, fully programmable, open and flexible, for research and investigation of new techniques for medical US imaging. The proposed platform, ULTRA-ORS (Ultrasound Open Research System), allows connection to linear, convex and phased array transducers with center frequency between 500 kHz and 20 MHz, and expansion capability for operation with transducers of up to 1024 multiplexed elements. The transmitter beamformer can excite 128 channels simultaneously, using PWM signals, with arbitrary waveforms, programmable aperture, and 200 Vpp excitation voltage, allowing individual enable control, amplitude apodization with up to 256 levels, phase angle and proper time delay for focusing on transmission. The receiver beamformer can handle simultaneous 128-channel acquisition with programmable sampling rate up to 50 MHz and 12-bit resolution. As an essential item of this work, the platform enables access to the raw RF signals, which can be transferred to a computer through serial ports or to DSP kits for image processing. As a result of the research project, we present a new digital US system that can be used for evaluation of images generated by the beamforming technique, using the Field II simulation tool as a reference and comparisons with commercial equipment on a tissue-mimicking ultrasound phantom.

28. Singer, Amy M. (Amy Michelle). "Top-down design of digital signal processing systems." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40000.

Note: Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (leaves 45-46).

29. Sheet, Lenny. "Noise measurement to 40PPM using digital signal processing." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/26832.

30. Song, Zhiguo. "Systèmes de numérisation hautes performances - Architectures robustes adaptées à la radio cognitive" [High-performance digitisation systems: robust architectures suited to cognitive radio]. PhD thesis, Supélec, 2010. http://tel.archives-ouvertes.fr/tel-00589826.

Abstract:
Future cognitive radio applications require digitisation systems able to convert, alternately or simultaneously, either a very wide band with low resolution or a narrower band with higher resolution, and to do so in a versatile way (i.e. under software control). Digitisation systems based on Hybrid Filter Banks (HFBs) are an attractive solution for this. They consist of a bank of analogue filters, a bank of analogue-to-digital converters and a bank of digital filters. However, they are very sensitive to analogue imperfections. The objective of this thesis was to propose and study a calibration method that corrects the analogue errors in the digital part, and that can be implemented in an embedded system. This work resulted in a new HFB calibration method using a multi-channel adaptive equalisation technique that adjusts the coefficients of the digital filters to the actual analogue filters. The method requires injecting a known test signal at the HFB input and adapting the digital part so as to reconstruct the corresponding reference signal. Depending on the type of reconstruction desired (wideband, sub-band, or a particular narrow band), several test and reference signals are proposed. These signals were validated by computing the optimal digital filters with the Wiener-Hopf method and evaluating their performance in the frequency domain. To approach the optimal digital filters with minimum computational complexity, a stochastic gradient algorithm was implemented. The robustness of the method was evaluated in the presence of noise in the analogue part and taking quantisation in the digital part into account, and a test signal more robust to analogue noise was proposed. The numbers of bits needed to encode the different data in the digital part were dimensioned to reach the targeted performance (namely 14-bit resolution). This thesis work is a step towards the realisation of future digitisation systems based on HFBs.
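The stochastic-gradient adaptation described in this abstract can be illustrated with a generic scalar LMS example (identifying a hypothetical 2-tap system; the thesis applies the same principle to multi-channel HFB equalisation, which this sketch does not attempt):

```python
import random

def lms_identify(x, d, n_taps, mu):
    """LMS: adapt FIR weights w so that w filtered over x tracks d."""
    w = [0.0] * n_taps
    for n in range(len(x)):
        taps = [x[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, taps))
        e = d[n] - y                                        # instantaneous error
        w = [wk + mu * e * xk for wk, xk in zip(w, taps)]   # gradient step
    return w

random.seed(0)
h = [1.0, 0.5]                                     # "unknown" system to identify
x = [random.gauss(0.0, 1.0) for _ in range(5000)]  # white training signal
d = [h[0] * x[n] + (h[1] * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w = lms_identify(x, d, n_taps=2, mu=0.02)
print([round(wk, 2) for wk in w])  # [1.0, 0.5]
```

With a known test signal (here white noise) and a reference output, the weights converge to the true response, which is the same adapt-against-a-reference loop the calibration method relies on.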

31. Runyon, Ginger R. "Parallel processor architecture for a digital beacon receiver." Thesis, Virginia Tech, 1990. http://hdl.handle.net/10919/41422.

32. Hussain, A. "Novel artificial neural network architectures and algorithms for non-linear dynamical system modelling and digital communications applications." Thesis, University of Strathclyde, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263481.

33. Gruhl, Daniel F. "LibDsp, an object oriented C++ digital signal processing library." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37539.

Note: Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (leaves 194-195).

34. Harris, Fred. "A Fresh View of Digital Signal Processing for Software Defined Radios: Part II." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/606322.

Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
A DSP modem is often designed as a set of processing blocks that replace the corresponding blocks of an analog prototype. Such a design is sub-optimal, inheriting legacy compromises made in the analog design while discarding important design options unique to the DSP domain. In Part I of this two-part paper, we used multirate processing to transform a digital down converter from an emulation of the standard analog architecture to a DSP-based solution that reversed the order of frequency selection, filtering, and resampling. We continue this tack of embedding traditional processing tasks into multirate DSP solutions that perform multiple simultaneous processing tasks.
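The reordering Harris describes rests on the noble identities of multirate DSP: pulling the rate change ahead of the filtering lets every multiply-accumulate run at the low output rate. A minimal polyphase sketch of that equivalence (the filter length and decimation factor below are illustrative, not from the paper):

```python
def fir(x, h):
    """Direct-form FIR filter run at the full input rate."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def polyphase_decimate(x, h, M):
    """Filter-and-decimate by M using M polyphase branches; every
    multiply-accumulate runs at the low (output) rate."""
    branches = [h[p::M] for p in range(M)]     # polyphase split of prototype h
    y = []
    for m in range(len(x) // M):
        n = m * M
        acc = 0.0
        for p, hp in enumerate(branches):
            for k, c in enumerate(hp):
                if n - p - k * M >= 0:
                    acc += c * x[n - p - k * M]
        y.append(acc)
    return y

import random
random.seed(1)
x = [random.uniform(-1, 1) for _ in range(256)]
h = [random.uniform(-0.2, 0.2) for _ in range(16)]
ref = fir(x, h)[::4]                 # filter at full rate, then keep every 4th
fast = polyphase_decimate(x, h, 4)   # identical output, ~1/4 the arithmetic
```

The two outputs agree to rounding error; the polyphase form simply never computes the samples that decimation would discard.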
APA, Harvard, Vancouver, ISO, and other styles
35

Jarrah, Amin. "Development of Parallel Architectures for Radar/Video Signal Processing Applications." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1415806786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Phillips, Desmond Keith. "Algorithms and architectures for the multirate additive synthesis of musical tones." Thesis, Durham University, 1996. http://etheses.dur.ac.uk/5350/.

Full text
Abstract:
In classical Additive Synthesis (AS), the output signal is the sum of a large number of independently controllable sinusoidal partials. The advantages of AS for music synthesis are well known as is the high computational cost. This thesis is concerned with the computational optimisation of AS by multirate DSP techniques. In note-based music synthesis, the expected bounds of the frequency trajectory of each partial in a finite lifecycle tone determine critical time-invariant partial-specific sample rates which are lower than the conventional rate (in excess of 40kHz) resulting in computational savings. Scheduling and interpolation (to suppress quantisation noise) for many sample rates is required, leading to the concept of Multirate Additive Synthesis (MAS) where these overheads are minimised by synthesis filterbanks which quantise the set of available sample rates. Alternative AS optimisations are also appraised. It is shown that a hierarchical interpretation of the QMF filterbank preserves AS generality and permits efficient context-specific adaptation of computation to required note dynamics. Practical QMF implementation and the modifications necessary for MAS are discussed. QMF transition widths can be logically excluded from the MAS paradigm, at a cost. Therefore a novel filterbank is evaluated where transition widths are physically excluded. Benchmarking of a hypothetical orchestral synthesis application provides a tentative quantitative analysis of the performance improvement of MAS over AS. The mapping of MAS into VLSI is opened by a review of sine computation techniques. Then the functional specification and high-level design of a conceptual MAS Coprocessor (MASC) is developed which functions with high autonomy in a loosely-coupled master-slave configuration with a Host CPU which executes filterbanks in software.
Standard hardware optimisation techniques are used, such as pipelining, based upon the principle of an application-specific memory hierarchy which maximises MASC throughput.
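The core saving Phillips exploits, running each partial at a rate matched to its own highest expected frequency rather than at the full output rate, can be quantified with a back-of-the-envelope sketch. The 48 kHz full rate, 4x oversampling guard and octave-spaced rate quantisation below are illustrative assumptions, not the thesis's filterbank design:

```python
def partial_rate(f_max, full_rate=48000.0, oversample=4.0):
    """Smallest octave-divided rate (full_rate / 2**k) that still keeps the
    partial's highest expected frequency oversampled by the guard factor."""
    rate = full_rate
    while rate / 2 >= oversample * f_max:
        rate /= 2
    return rate

partials = [110.0 * k for k in range(1, 65)]         # harmonics of a 110 Hz tone
cost_full = len(partials) * 48000.0                  # oscillator updates/s, naive AS
cost_multi = sum(partial_rate(f) for f in partials)  # multirate AS cost
saving = 1 - cost_multi / cost_full                  # fraction of work avoided
```

Even for this bright 64-harmonic tone, where the top partials still demand the full rate, most low harmonics run orders of magnitude slower, which is where the multirate saving comes from.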
APA, Harvard, Vancouver, ISO, and other styles
37

Balraj, Navaneethakrishnan. "AUTOMATED ACCIDENT DETECTION IN INTERSECTIONS VIA DIGITAL AUDIO SIGNAL PROCESSING." MSSTATE, 2003. http://sun.library.msstate.edu/ETD-db/theses/available/etd-10212003-102715/.

Full text
Abstract:
The aim of this thesis is to design a system for automated accident detection in intersections. The input to the system is a three-second audio signal. The system can be operated in two modes: two-class and multi-class. The output of the two-class system is a label of "crash" or "non-crash". In the multi-class system, the output is the label of "crash" or various non-crash incidents including "pile drive", "brake", and "normal-traffic" sounds. The system designed has three main steps in processing the input audio signal: feature extraction, feature optimization and classification. Five different methods of feature extraction are investigated and compared; they are based on the discrete wavelet transform, fast Fourier transform, discrete cosine transform, real cepstrum transform and Mel frequency cepstral transform. Linear discriminant analysis (LDA) is used to optimize the features obtained in the feature extraction stage by linearly combining the features using different weights. Three types of statistical classifiers are investigated and compared: the nearest neighbor, nearest mean, and maximum likelihood methods. Data collected from Jackson, MS and Starkville, MS and the crash signals obtained from the Texas Transportation Institute crash test facility are used to train and test the designed system. The results showed that the wavelet-based feature extraction method with LDA and the maximum likelihood classifier is the optimum design. This wavelet-based system is computationally inexpensive compared to other methods. The system produced classification accuracies of 95% to 100% when the input signal has a signal-to-noise ratio of at least 0 decibels. These results show that the system is capable of effectively classifying "crash" or "non-crash" on a given input audio signal.
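The nearest-mean branch of the classifier stage can be sketched in a few lines. The log-energy and zero-crossing features below are simplified stand-ins for the thesis's transform-domain features, and the synthetic "crash"/"normal" clips are fabricated for illustration:

```python
import math
import random

def features(frame):
    """Two toy features per audio frame: log energy and zero-crossing rate."""
    energy = sum(s * s for s in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return [math.log(energy + 1e-12), zcr]

def nearest_mean(train, sample):
    """Label the sample with the class whose mean feature vector is closest."""
    def mean(vecs):
        return [sum(col) / len(vecs) for col in zip(*vecs)]
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    means = {lab: mean(vecs) for lab, vecs in train.items()}
    return min(means, key=lambda lab: dist2(means[lab], sample))

random.seed(2)
tone = lambda amp: [amp * math.sin(0.3 * i) for i in range(1000)]       # quiet hum
burst = lambda amp: [amp * random.uniform(-1, 1) for _ in range(1000)]  # loud impact
train = {"crash": [features(burst(0.8)) for _ in range(5)],
         "normal": [features(tone(0.05)) for _ in range(5)]}
label = nearest_mean(train, features(burst(0.9)))   # -> "crash"
```

LDA would sit between `features` and `nearest_mean`, reweighting the feature axes before the distance is measured.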
APA, Harvard, Vancouver, ISO, and other styles
38

Papenfuß, Frank [Verfasser]. "Digital Signal Processing of Nonuniform Sampled Signals : Contributions to Algorithms & Hardware Architectures / Frank Papenfuß." Aachen : Shaker, 2008. http://d-nb.info/116434207X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tran, Merry Thi. "Applications of Digital Signal Processing with Cardiac Pacemakers." PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4582.

Full text
Abstract:
Because the voltage amplitude of a heart beat is small compared to the amplitude of exponential noise, pacemakers have difficulty registering the responding heart beat immediately after a pacing pulse. This thesis investigates use of digital filters, an inverse filter and a lowpass filter, to eliminate the effects of exponential noise following a pace pulse. The goal was to create a filter which makes recognition of a haversine wave less dependent on natural subsidence of exponential noise. Research included the design of heart system, pacemaker, pulse generation, and sensor system simulations. The simulation model includes the following components:
• Signal source: a MATLAB-generated combination of a haversine signal, exponential noise, and myopotential noise. The haversine signal is a test signal used to simulate the QRS complex, which is normally recorded on an ECG trace as a representation of heart function; its amplitude is approximately 10 mV. Simulated myopotential noise represents uniformly distributed random noise generated by skeletal muscle tissue, with a frequency spectrum extending from 70 to 1000 Hz and an amplitude varying from 2 to 5 mV. Simulated exponential noise represents the depolarization effects of a pacing pulse as seen at the active cardiac lead; its amplitude is about -1 volt, large in comparison with the haversine signal.
• A/D converter: a combination of sample & hold and quantizer functions translates the analog signal into a digital signal. Additionally, random noise is created during quantization.
• Digital filters: an inverse filter removes the exponential noise, and a lowpass filter removes myopotential noise.
• Threshold level detector: a function which detects the strength and amplitude of the output signal, created for robustness and as a data-sampling device.
The simulation program is written for operation in a DOS environment.
The program generates a haversine signal, myopotential noise (random noise), and exponential noise. The signals are amplified and sent to an A/D converter stage. The resultant digital signal is sent to a series of digital filters, where exponential noise is removed by an inverse digital filter, and myopotential noise is removed by a Chebyshev type I lowpass digital filter. The output signal is "detected" if its waveform exceeds the noise threshold level. To determine what kind of digital filter would remove exponential noise, the spectrum of exponential noise relative to a haversine signal was examined. The spectrum of the exponential noise is continuous because the pace pulse is considered a non-periodic signal (assuming the haversine signal occurs immediately after a pace pulse). The spectrum of the haversine is also continuous, existing at every value of frequency ω. The spectrum of the haversine is overlapped by the spectrum of the exponential, whose amplitude is several orders of magnitude larger. The exponential cannot be removed by conventional filters; therefore, an inverse filter approach is used to remove exponential noise. The transfer function of the inverse filter in the model has only zeros; this type of filter is called FIR, all-zero, non-recursive, or moving average. Tests were run using the model to investigate the behavior of the inverse filter. It was found that the haversine signal could be clearly detected within a 5% change in the time constant of the exponential noise. Between 5% and 15% of change in the time constant, the filtered exponential amplitude swamps the haversine signal. The sensitivity of the inverse filter was also studied: when using a fixed exponential time constant but changing the location of the zeros of the transfer function, the effect of the exponential noise on the haversine is minimal when the zeros are located between 0.75 and 0.85 of the unit circle.
After the source signal passes through the inverse filter, the signal consists only of the haversine signal, myopotential noise, and some random noise introduced during quantization. To remove these noises, a Chebyshev type I lowpass filter is used.
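The inverse-filter idea is compact enough to demonstrate directly: a sampled exponential A·aⁿ is annihilated by the two-tap all-zero FIR (1 − a·z⁻¹), so the small haversine survives while the pacing artifact is removed. The amplitudes and the 40-sample time constant below are illustrative, not the thesis's values:

```python
import math

a = math.exp(-1 / 40.0)        # pole of the sampled exponential (tau = 40 samples)
N = 200
exp_noise = [-1.0 * a ** n for n in range(N)]              # pacing artifact, ~ -1 V
haversine = [0.01 * (0.5 - 0.5 * math.cos(2 * math.pi * (n - 80) / 30))
             if 80 <= n < 110 else 0.0 for n in range(N)]  # ~10 mV "QRS"
x = [e + s for e, s in zip(exp_noise, haversine)]

# two-tap inverse filter 1 - a*z^-1: cancels the exponential term exactly
y = [x[0]] + [x[n] - a * x[n - 1] for n in range(1, N)]
```

Before filtering, the exponential at the haversine's peak is still an order of magnitude above 10 mV; after filtering its contribution is zero, which is also why the filter is sensitive to a mismatch between the assumed and actual time constant.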
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Jen Mei. "Multistage adaptive filtering in a multirate digital signal processing system." Thesis, Massachusetts Institute of Technology, 1993. https://hdl.handle.net/1721.1/127935.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1993.
Includes bibliographical references (leaves 101-104).
by Jen Mei Chen.
APA, Harvard, Vancouver, ISO, and other styles
41

Katzir, Yoel. "PC software for the teaching of digital signal processing." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23346.

Full text
Abstract:
Approved for public release; distribution is unlimited
The Electrical and Computer Engineering Department at the Naval Postgraduate School has a need for additional software to be used in instructing students studying digital signal processing. This software will be used in a PC lab or at home. This thesis provides a set of disks written in APL (A Programming Language) which allows the user to input arbitrary signals from a disk, to perform various signal processing operations, to plot the results, and to save them without the need for complicated programming. The software is in the form of a digital signal processing toolkit. The user can select functions which can operate on the signals and interactively apply them in any order. The user can also easily develop new functions and include them in the toolkit. The thesis includes brief discussions about the library workspaces, a user manual, function listings with examples of their use, and an application paper. The software is modular and can be expanded by adding additional sets of functions.
http://archive.org/details/pcsoftwarefortea00katz
Major, Israeli Air Force
APA, Harvard, Vancouver, ISO, and other styles
42

Galindo, Guarch Francisco Javier. "Digital hardware architectures for beam synchronous processing and of synchronization of particle accelerators." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672314.

Full text
Abstract:
In Particle Accelerators, the Low-Level RF (LLRF) is the control system of the RF, and in the end, of the purpose of the machine, that is the energy transfer and acceleration of particles. It implements algorithms synchronizing the RF conveying the energy to the beam and tailoring its longitudinal parameters. For this, the LLRF uses beam-related signals whose spectral content changes during the acceleration. The increase in energy results in an increase of the beam velocity, and for circular accelerators (Synchrotrons) a decrease in revolution period. This is especially relevant for Hadron machines whose injection energy is low resulting in a significant increase of their velocity before reaching relativistic speeds. Hence, the LLRF needs to continuously tune its processing to the beam; we call this technique Beam Synchronous Processing. One important task of the LLRF is the compensation of the beam-induced voltage in the accelerating cavities (Beam Loading). In the CERN SPS the regulation bandwidth must cover 5 MHz on each side of the 200 MHz RF. With a beam revolution period around 23 µs more than a hundred revolution frequency harmonics, present in the beam signal, fall in the RF sidebands. The variation in beam velocity changes the position and spacing of the harmonics in the spectrum. The large number of harmonics and their varying positions make the algorithm reconfiguration an undesirable option. To cope with this, the early digital implementations used a system clock derived from the sweeping RF. This locks the sampling and the processing to the beam, by design. This historical solution, that is still in use in several machines, is now a limiting factor for the use of modern technologies. The Thesis presents a novel Beam Synchronous Processing Architecture, using a fixed frequency clocking, and capable of treating periodic signals with known and varying fundamental frequency. 
The Architecture is an alternative to the burden of reconfiguration in processing algorithms; it tunes the spectrum to the processing by resampling the input data. Two Resamplers are combined in the so-called resampling sandwich. The application algorithm requiring synchronism with the input signal is placed in the middle. The key element is a novel All-Digital Farrow-based Resampler, which accepts arbitrary resampling ratios that can be modified in real time. The hardware uses a single fixed-frequency system clock, making its implementation feasible in state-of-the-art FPGAs, ASICs and systems such as the new uTCA platform currently being deployed in the CERN SPS LLRF system. The input and output ports of the Resampler, and all the processing within the Architecture, are synchronous to this fixed-frequency clock and accept data streams whose sampling rate can be variable and modified in real time. The Architecture has been commissioned in an LLRF uTCA crate hosting the One Turn FeedBack algorithm to control a real SPS cavity. The algorithm compensates the Beam Loading. The Architecture has demonstrated its capability to track in real time an energy ramp with an RF frequency following a linear sawtooth pattern ramped at 2.4 MHz per second. The complete uTCA implementation has successfully passed all the functional validation and qualitative tests. The Architecture seamlessly suits the two technological paradigm changes adopted for the new CERN SPS LLRF system: first, the instantaneous value of the RF frequency is transmitted as a numerical word (used to set the resampling ratio) via a deterministic network, the White Rabbit. And second, the reference signal is now the fixed-frequency clock recovered from this network. Both paradigms benefit from the all-digital Resampler and the Beam Synchronous Architecture, which meet the technical and technological needs for their implementation, enabling novel LLRF algorithms and solutions.
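The heart of the Architecture, a Farrow-structure resampler that accepts arbitrary ratios without filter redesign, can be sketched with a cubic-Lagrange interpolator. This is a fixed-ratio toy in floating point; the actual CERN implementation handles real-time ratio updates, fixed-point streaming and hardware pipelining:

```python
import math

def farrow_resample(x, ratio):
    """Cubic-Lagrange Farrow interpolator: each output evaluates a
    polynomial in the fractional delay mu (Horner form), so an arbitrary,
    even irrational, resampling ratio needs no coefficient redesign."""
    y, t = [], 1.0
    while t < len(x) - 2:
        i = int(t)                     # interpolate between x[i] and x[i+1]
        mu = t - i
        s0, s1, s2, s3 = x[i - 1], x[i], x[i + 1], x[i + 2]
        c0 = s1                        # Farrow branch outputs (Lagrange cubic)
        c1 = s2 - s0 / 3 - s1 / 2 - s3 / 6
        c2 = (s0 + s2) / 2 - s1
        c3 = (s3 - s0) / 6 + (s1 - s2) / 2
        y.append(((c3 * mu + c2) * mu + c1) * mu + c0)
        t += 1.0 / ratio               # the ratio could be updated per output
    return y

omega = 0.2
x = [math.sin(omega * n) for n in range(64)]
y = farrow_resample(x, 1.25)           # output m sits at input time 1 + 0.8*m
```

Because only `mu` and the step `1/ratio` depend on the rate, updating the ratio word delivered over a network (as the thesis does via White Rabbit) retunes the resampler instantly.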
Materials science and engineering
APA, Harvard, Vancouver, ISO, and other styles
43

Peng, Dongming. "Exploiting parallelism within multidimensional multirate digital signal processing systems." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969/141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Frangakis, G. P. "Digital and microprocessor-based techniques in signal processing and system simulation." Thesis, University of Southampton, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Losh, Jonathan L. "Digital signal processing hardware for a fast fourier transform radio telescope." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77447.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references.
21-cm tomography is a developing technique for measuring the Epoch of Reionization in the universe's history. The nature of the signal measured in 21-cm tomography is such that a new kind of radio telescope is needed: one that scales well into very large numbers of antennas. The Omniscope, a Fast Fourier Transform telescope, is exactly such a telescope. I detail the implementation of the digital signal processing backend of a 32-channel interferometer designed to help characterize the non-digital parts of the system, starting at the point the analog signal enters the FPGA and ending when it is written to a file on a computer. I also describe the accompanying subsystems and my implementation of a scaled-up, 64-channel design, and lay out a framework for expanding to 256 channels.
by Jonathan L. Losh.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
46

Moeller, Tyler J. (Tyler John) 1975. "Field programmable gate arrays for radar front-end digital signal processing." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80555.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 113-116).
by Tyler J. Moeller.
S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
47

Randeny, Tharindu D. "Multi-Dimensional Digital Signal Processing in Radar Signature Extraction." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1451944778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Al-Sharari, Hamed. "A high performance hardware implementation of the imbedded reference signal algorithm using a digital signal processing board." Ohio : Ohio University, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1177615609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hudik, Frank Edward. "A computer program package for introductory one-dimensional digital signal processing applications." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/22975.

Full text
Abstract:
A need existed for a set of computer programs which could be used by students to solve elementary digital signal processing problems using a personal computer. This project involved the design and implementation of ten algorithms that solve such problems and an additional algorithm that creates plots of the various input and output sequences. The two primary goals of the programs were: 1) user friendliness and 2) portability. With these goals in mind, the source code was written in FORTRAN-77 and compiled by a commercially available FORTRAN compiler specifically designed for personal computers. The plotting program uses a FORTRAN-compatible graphics package that is also commercially available. The programs, once compiled, can be distributed to users without the requirement to purchase either a FORTRAN compiler or a graphics package; however, access to a FORTRAN compiler enhances the utility of the programs. (Author)
APA, Harvard, Vancouver, ISO, and other styles
50

De, Subrato Kumar. "Design of a retargetable compiler for digital signal processors." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/15740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography