
Dissertations / Theses on the topic 'Matrix array'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Matrix array.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Rashedin, Razib. "Novel miniature matrix array transducer system for loudspeakers." Thesis, Cardiff University, 2007. http://orca.cf.ac.uk/56177/.

Full text
Abstract:
Conventional pistonic loudspeakers, by employing whole-body vibration of the diaphragm, can reproduce good-quality sound at the low end of the audio spectrum. Flat-panel speakers, on the other hand, are better at high-frequency operation, as the sound they reproduce at high frequency is not omni-directional as in the case of a conventional loudspeaker. Although flat-panel speakers are compact and have a better high-frequency response, poor reproduction of bass sound limits their performance severely. In addition, flat-panel speakers have a poor impulse response. The reason for such poor bass and impulse response is that, unlike the whole-body movement of a conventional loudspeaker diaphragm, different parts of the panel in a flat-panel loudspeaker vibrate independently. A novel loudspeaker has been successfully designed, developed and operated using miniature electromagnetic transducers in a matrix array configuration. In this device, whole-body vibration of the panel reduces the poor bass and impulse response associated with present flat-panel speakers. The multi-actuator approach combines the advantages of conventional whole-body motion with those of modern flat-panel speakers. An innovative miniature electromagnetic transducer for the proposed loudspeaker has been designed, modelled and built for analysis. Frequency responses show that this novel transducer is suitable for loudspeaker application because of its steady and consistent output over the whole audible frequency range and for various excitation currents. Measurements on various device configurations of this novel miniature electromagnetic transducer show that a moving-coil transducer configuration having a magnetic diaphragm is best suited for loudspeaker applications. Finite element modelling has been used to examine single-transducer operation and the magnetic interaction between neighbouring transducers in a matrix array format.
Experimental results show that correct positioning of the transducers in a matrix configuration reduces the effects of interference on the magnetic transducers. In addition, experimental results from the pressure response measurement show an improvement in bass response for the longer array speaker.
APA, Harvard, Vancouver, ISO, and other styles
2

Le, Hai Van Dinh. "A new general purpose systolic array for matrix computations." PDXScholar, 1988. https://pdxscholar.library.pdx.edu/open_access_etds/3796.

Full text
Abstract:
In this thesis, we propose a new systolic architecture based on Faddeev's algorithm. Because Faddeev's algorithm is inherently general purpose, our architecture is able to perform a wide class of matrix computations, and since the architecture is systolic, it brings massive parallelism to all of its computations. As a result, many matrix operations, including addition, multiplication, inversion, LU decomposition, transposition, and the solution of linear systems of equations, can now be performed extremely fast. In addition, our design introduces several concepts which are new to systolic architectures: it can be re-configured at run time to perform different functions through various control signals propagating throughout the arrays, and it allows for maximum overlap of processing between consecutive computations, thereby increasing system throughput.
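Faddeev's scheme rests on a single identity: running Gaussian elimination on the compound matrix [[A, B], [-C, D]] until the lower-left block is annihilated leaves C A^-1 B + D in the lower-right block, so one elimination pass covers multiplication (A = I), inversion (B = C = I, D = 0), and linear-system solving (B = b, C = I, D = 0). A minimal NumPy sketch of that identity, sequential and without the pivoting or orthogonal factorization a robust systolic implementation would use:

```python
import numpy as np

def faddeev(A, B, C, D):
    """Compute C @ inv(A) @ B + D by eliminating the lower-left
    block of the compound matrix [[A, B], [-C, D]]."""
    n = A.shape[0]
    M = np.block([[A, B], [-C, D]]).astype(float)
    for k in range(n):                       # annihilate column k below the pivot
        M[k + 1:, k:] -= np.outer(M[k + 1:, k] / M[k, k], M[k, k:])
    return M[n:, n:]                         # lower-right block = C A^-1 B + D
```

With A = I this degenerates to C @ B + D; with B = C = I and D = 0 it yields inv(A), which is why one fixed array can serve many different matrix operations.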
3

Hanson, Timothy B. "Cascade adaptive array structures." Ohio: Ohio University, 1990. http://www.ohiolink.edu/etd/view.cgi?ohiou1173207031.

Full text
4

Liu, Huazhou. "Digital Direction Finding System Design and Analysis." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1060976413.

Full text
5

Quintero, Badillo Jorge R. "Non-destructive Evaluation of Ceramic Matrix Composites at High Temperature using Laser Ultrasonics." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1511800640467908.

Full text
6

Lepkowski, Stefan. "An ultra-compact and low loss passive beamforming network integrated on chip with off chip linear array." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53599.

Full text
Abstract:
This work presents a review of beamforming architectures. As an example, the author presents an 8x8 Butler matrix passive beamforming network, including the schematic, design/modeling, operation, and simulated results. The limiting factor in traditional beamformers has been the large size dictated by transmission-line-based couplers. By replacing these couplers with transformer-based couplers, the matrix size is reduced substantially, allowing for compact on-chip integration. In the example presented, the core area, including the antenna crossover, measures 0.82 mm × 0.39 mm (0.48% the size of a branch-line coupler at the same frequency). The simulated beamforming achieves a peak PNR of 17.1 dB, and 15 dB from 57 to 63 GHz. At the 60 GHz center frequency the average insertion loss is simulated to be 3.26 dB. The 8x8 Butler matrix feeds into an 8-element antenna array to show the array patterns with single-beam and adjacent-beam isolation.
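An ideal, lossless N×N Butler matrix is mathematically a unitary DFT with a half-index offset, which is what makes its N beams mutually orthogonal and symmetric about broadside. A small NumPy sketch of that idealization (the chip in the abstract realizes it with transformer-based couplers; the element spacing and sign conventions here are illustrative assumptions):

```python
import numpy as np

def butler_matrix(N):
    """Ideal N x N Butler matrix: a unitary DFT with a half-index
    offset, so each input port excites one orthogonal beam."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n + 0.5, n) / N) / np.sqrt(N)

def array_factor(weights, theta, d_over_lambda=0.5):
    """Far-field array factor of a uniform linear array driven
    with the given per-element complex weights."""
    n = np.arange(len(weights))
    return weights @ np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))
```

Under this convention, port m steers to sin(theta) = (2m + 1)/N for half-wavelength spacing, so an 8x8 matrix gives eight beams at sin(theta) = ±1/8, ±3/8, ±5/8, ±7/8.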
7

Park, Edward S. "Microfluidic chamber arrays for testing cellular responses to soluble-matrix and gradient signals." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39471.

Full text
Abstract:
This work develops microfluidic technologies to advance the state-of-the-art in living-cell-based assays. Current cell-based assay platforms are limited in their capabilities, particularly with respect to spatial and temporal control of external signaling factors, sample usage, and throughput. The emergence of highly quantitative, data-driven systems approaches to studying biology has added further challenges to develop assay technologies with greater throughput, content, and physiological relevance. The primary objectives of this research are to (i) develop a method to reliably fabricate 3-D flow networks and (ii) apply 3-D flow networks to the development and testing of microfluidic chamber arrays to query cellular response to soluble-matrix signal combinations and gradient signaling fields. An equally important objective is for the chamber arrays to be scaled efficiently for higher-throughput applications, which is another reason for 3-D flow networks. Two prototype chamber arrays are designed, modeled, fabricated, and characterized. Furthermore, tests are performed wherein cells are introduced into the chambers and microenvironments are presented to elicit complex responses. Specifically, soluble-matrix signaling combinations and soluble signal gradients are presented. The study of complex biological processes necessitates improved assay techniques to control the microenvironment and increase throughput. Quantitative morphological, migrational, and fluorescence readouts, along with qualitative observations, suggest that the chamber arrays elicit responses; however, further experiments are required to confirm specific phenotypes. The experiments provide initial proof of concept that the developed arrays can one day serve as effective and versatile screening platforms.
Understanding the integration of extracellular signals on complex cellular behaviors has significance in the study of embryonic development, tissue repair and regeneration, and pathological conditions such as cancer. The microfluidic chamber arrays developed in this work could form the basis for enhanced assay platforms to perform massively parallel interrogation of complex signaling events upon living cells. This could lead to the rapid identification of synergistic and antagonistic signaling mechanisms that regulate complex behaviors. In addition, the same technology could be used to rapidly screen potential therapeutic compounds and identify suitable candidates to regulate pathological processes, such as cancer and fibrosis.
8

Liu, Yuanli. "Development of Cross-reactive Sensors Array: Practical Approach for Ion Detection in Aqueous Media." Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1345428697.

Full text
9

Diarra, Bakary. "Study and optimization of 2D matrix arrays for 3D ultrasound imaging." Thesis, Lyon 1, 2013. http://www.theses.fr/2013LYO10165/document.

Full text
Abstract:
3D ultrasound imaging is a fast-growing medical imaging modality. In addition to its numerous advantages (low cost, non-ionizing beam, portability), it represents anatomical structures in their natural form, which is always three-dimensional. The relatively slow mechanical scanning probes tend to be replaced by two-dimensional matrix arrays, which extend the conventional 1D probe in both the lateral and elevation directions. This 2D positioning of the elements allows the ultrasonic beam to be steered throughout the whole space. Usually, the piezoelectric elements of a 2D array probe are aligned on a regular grid and spaced at a distance (the pitch) subject to the spatial sampling law (the inter-element distance must be shorter than half a wavelength) to limit the impact of grating lobes. This physical constraint leads to a multitude of small elements. The equivalent in 2D of a 1D probe of 128 elements contains 128x128 = 16,384 elements. Connecting such a high number of elements is a real technical challenge, as the number of channels in current ultrasound scanners rarely exceeds 256. The solutions proposed to control this type of probe implement multiplexing or element-number reduction techniques, generally using random selection approaches (« sparse array »). These methods suffer from a low signal-to-noise ratio due to the energy loss linked to the small number of active elements. In order to limit the loss of performance, optimization remains the best solution. The first contribution of this thesis is an extension of the « sparse array » technique combined with an optimization method based on the simulated annealing algorithm.
The proposed optimization reduces the required number of active elements according to the expected characteristics of the ultrasound beam and limits the energy loss compared to the initial dense array probe. The second contribution is a completely new approach adopting a non-grid positioning of the elements to remove the grating lobes and overstep the spatial sampling constraint. This new strategy allows the use of larger elements, leading to a much smaller number of necessary elements for the same probe surface. The active surface of the array is maximized, which results in a greater output energy and thus a higher sensitivity. It also allows a greater scan sector, as the grating lobes are very small relative to the main lobe. The random choice of the position of the elements and their apodization (or weighting coefficients) is optimized by the simulated annealing. The proposed methods are systematically compared to the dense array by performing simulations under realistic conditions. These simulations show a real potential of the developed techniques for 3D imaging. A 2D probe of 8x24 = 192 elements was manufactured by Vermon (Vermon SA, Tours, France) to test the proposed methods in an experimental setting. The comparison between simulation and experimental results validates the proposed methods and proves their feasibility.
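The element-selection idea can be illustrated in one dimension: pick K of N grid positions, score the thinned aperture by its peak sidelobe level, and let simulated annealing swap active and inactive elements under a Metropolis acceptance rule. A toy sketch under those assumptions (the thesis optimizes 2D apertures with element apodization and beam-specific cost functions; the grid size, thinning ratio, cooling schedule, and main-lobe exclusion here are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_sidelobe_db(active, d_over_lambda=0.5):
    """Peak sidelobe level (dB re. main lobe) of a thinned,
    uniformly weighted linear array; u = sin(theta)."""
    u = np.linspace(-1.0, 1.0, 2001)
    af = np.abs(np.exp(2j * np.pi * d_over_lambda * np.outer(u, active)).sum(axis=1))
    side = af[np.abs(u) > 0.1].max()          # crude main-lobe exclusion
    return 20 * np.log10(side / len(active))  # main-lobe peak is len(active) at u = 0

def anneal(n_grid=64, n_active=24, iters=1500, t0=1.0, cool=0.995):
    """Simulated annealing over which n_active of n_grid element
    positions are wired up, minimizing peak sidelobe level."""
    active = rng.choice(n_grid, n_active, replace=False)
    cost = peak_sidelobe_db(active)
    best, best_cost = active.copy(), cost
    for i in range(iters):
        t = t0 * cool ** i
        cand = active.copy()
        idle = np.setdiff1d(np.arange(n_grid), active)
        cand[rng.integers(n_active)] = rng.choice(idle)   # swap one element
        c = peak_sidelobe_db(cand)
        if c < cost or rng.random() < np.exp((cost - c) / t):
            active, cost = cand, c
            if c < best_cost:
                best, best_cost = cand.copy(), c
    return best, best_cost
```

The Metropolis rule accepts occasional uphill swaps early (high temperature) so the search does not get stuck in the first local minimum, which is the point of simulated annealing over greedy selection.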
10

Ozer, Erhan. "Application of the T-matrix method to the numerical modeling of a linear active sonar array." Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/34718.

Full text
Abstract:
Approved for public release; distribution is unlimited
Classically, the T-matrix method is a procedure to exactly compute the multiple scattering of an incident wave from a “cloud” of objects, given knowledge of the free-field scattering properties of a single object for an arbitrary incident wave. For acoustic waves, Profs. Baker and Scandrett have extended the T-matrix method to the case in which the radiation sources are also the scatterers, that is, to an array of active transducers. This thesis is the first successful practical demonstration of the T-matrix method applied to an active sonar array for which a finite-element model was employed to compute the scattering properties of a single transducer. For validation, a T-matrix model was built of a linear array of piezoelectric spherical thin-shell transducers, for which approximate analytical values of the T-matrix elements are known. Subsequently, a T-matrix model was built of a linear array of piezoelectric class V flextensional “ring-shell” transducers. Beam patterns of the linear array models computed with the T-matrix method are compared with those of an array of point sources, demonstrating that the T-matrix method produces more realistic beam patterns, especially for end-fire arrays.
11

Baiotto, Ricardo. "Imaging methodologies applied on phased array ultrasonic data from austenitic welds and claddings." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/186162.

Full text
Abstract:
The increasing use of welded and cladded austenitic materials in critical components in some industrial sectors, such as the oil & gas and nuclear industries, leads to an increasing demand for their assessment by reliable non-destructive methods. Among the methods used to assess the integrity of austenitic welds and claddings are ultrasonic phased array methods, which are usually used to detect the presence and determine the position of defects. However, austenitic welds and claddings are challenging to inspect with phased array methods due to the anisotropy and inhomogeneity caused by their coarse-grain microstructure, which can increase noise levels, misplace indications, and create false indications. Therefore, the selection of an appropriate phased array method needs to take into account the method's ability to overcome the impairment caused by anisotropy and inhomogeneity. This thesis presents two non-conventional methods based on ultrasonic phased array imaging techniques designed to assist the structural integrity assessment of components where austenitic welds and clads are present. Both proposed methods are based on the Total Focusing Method (TFM); the first approach is an expansion of the adaptive delay laws concept, named the Adaptive Delay Total Focusing Method (ADTFM), while the second uses coherence weights combined with the TFM images. With the imaging methods applied, it was possible to significantly increase the quality of the ultrasonic images in comparison with standard TFM, particularly when both approaches could be combined.
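The TFM baseline that such methods build on is plain delay-and-sum over the full matrix capture: for every image pixel, each transmit-receive pair contributes the recorded sample at its transmit-to-pixel-to-receive travel time. A minimal sketch with nearest-sample lookup (no apodization, no adaptive delays, no coherence weighting; the geometry and sampling values in the usage test are invented):

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Total Focusing Method: delay-and-sum the full matrix capture
    fmc[tx, rx, t] at the tx -> pixel -> rx travel time, per pixel."""
    n_el = len(elem_x)
    tx = np.arange(n_el)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elem_x - x, z)            # element-to-pixel distances
            t = (d[:, None] + d[None, :]) / c      # two-way travel times, all pairs
            idx = np.clip((t * fs).astype(int), 0, fmc.shape[2] - 1)
            img[iz, ix] = abs(fmc[tx[:, None], tx[None, :], idx].sum())
    return img
```

The coherence-factor variant mentioned in the abstract would additionally weight each pixel by how well the per-pair contributions agree, suppressing the incoherent grain noise that plagues austenitic welds.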
12

Herrero, Zaragoza Jose Ramón. "A framework for efficient execution of matrix computations." Doctoral thesis, Universitat Politècnica de Catalunya, 2006. http://hdl.handle.net/10803/5991.

Full text
Abstract:
Matrix computations lie at the heart of most scientific computational tasks. The solution of linear systems of equations is a very frequent operation in many fields in science, engineering, surveying, physics and others. Other matrix operations occur frequently in many other fields such as pattern recognition and classification, or multimedia applications. Therefore, it is important to perform matrix operations efficiently. The work in this thesis focuses on the efficient execution on commodity processors of matrix operations which arise frequently in different fields.

We study some important operations which appear in the solution of real world problems: some sparse and dense linear algebra codes and a classification algorithm. In particular, we focus our attention on the efficient execution of the following operations: sparse Cholesky factorization; dense matrix multiplication; dense Cholesky factorization; and Nearest Neighbor Classification.

A lot of research has been conducted on the efficient parallelization of numerical algorithms. However, the efficiency of a parallel algorithm depends ultimately on the performance obtained from the computations performed on each node. The work presented in this thesis focuses on the sequential execution on a single processor.


There exist a number of data structures for sparse computations which can be used to avoid the storage of, and computation on, zero elements. We work with a hierarchical data structure known as a hypermatrix. The matrix is subdivided recursively an arbitrary number of times, and several pointer matrices store the locations of the submatrices at each level. The last level consists of data submatrices which are dealt with as dense submatrices. When the block size of these dense submatrices is small, the number of stored zeros can be greatly reduced; however, the performance obtained from BLAS3 routines drops heavily. Consequently, there is a trade-off in the size of the data submatrices used for a sparse Cholesky factorization with the hypermatrix scheme. Our goal is to reduce the overhead introduced by unnecessary operations on zeros when a hypermatrix data structure is used to compute a sparse Cholesky factorization. In this work we study several techniques for reducing such overhead in order to obtain high performance.
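The storage idea can be sketched with a single pointer level: partition the matrix into b×b dense blocks, keep a pointer grid in which all-zero blocks are stored as None, and let block operations skip the missing blocks. That skipping is where the savings come from, and shrinking b trades more skipped zeros against worse BLAS3-style block performance. A toy sketch (the thesis uses several recursive pointer levels and tuned kernels; this is one level, with NumPy standing in for the dense block kernel):

```python
import numpy as np

def to_blocks(A, b):
    """One pointer level of a hypermatrix: a grid of dense b x b
    submatrices in which all-zero blocks are stored as None."""
    m = A.shape[0] // b
    return [[A[i*b:(i+1)*b, j*b:(j+1)*b].copy()
             if A[i*b:(i+1)*b, j*b:(j+1)*b].any() else None
             for j in range(m)] for i in range(m)]

def block_matmul(G, H, b):
    """Multiply two block grids, skipping every product in which
    either operand block is a stored zero."""
    m = len(G)
    C = np.zeros((m * b, m * b))
    for i in range(m):
        for k in range(m):
            if G[i][k] is None:
                continue                      # whole row of products skipped
            for j in range(m):
                if H[k][j] is not None:
                    C[i*b:(i+1)*b, j*b:(j+1)*b] += G[i][k] @ H[k][j]
    return C
```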

One of our goals is the creation of codes which work efficiently on different platforms when operating on dense matrices. To obtain high performance, the resources offered by the CPU must be properly utilized. At the same time, the memory hierarchy must be exploited to tolerate increasing memory latencies. To achieve the former, we produce inner kernels which use the CPU very efficiently. To achieve the latter, we investigate nonlinear data layouts. Such data formats can contribute to the effective use of the memory system.

The use of highly optimized inner kernels is of paramount importance for obtaining efficient numerical algorithms. Often, such kernels are created by hand. However, we want to create efficient inner kernels for a variety of processors using a general approach and avoiding hand-coding in assembly language. In this work, we present an alternative way to produce efficient kernels automatically, based on a set of simple codes written in a high-level language which can be parameterized at compilation time. The advantage of our method lies in the ability to generate very efficient inner kernels by means of a good compiler. Working on regular codes for small matrices, most of the compilers we used on different platforms created very efficient inner kernels for matrix multiplication. Using the resulting kernels, we have been able to produce high-performance sparse and dense linear algebra codes on a variety of platforms.
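The kernel-generation idea, writing a tiny high-level template whose loop bounds are fixed at generation time so the compiler sees fully regular code, can be mimicked in a few lines. A hypothetical sketch (the thesis emits parameterized high-level code compiled per platform; here Python source is generated and exec'd simply to show the fully specialized, unrolled shape such a generator produces):

```python
def make_kernel(m, n, k):
    """Generate a fully unrolled m x n x k matrix-multiply kernel
    C += A @ B from a source template, fixing sizes at build time."""
    lines = ["def kernel(A, B, C):"]
    for i in range(m):
        for j in range(n):
            rhs = " + ".join(f"A[{i}][{p}] * B[{p}][{j}]" for p in range(k))
            lines.append(f"    C[{i}][{j}] += {rhs}")
    namespace = {}
    exec("\n".join(lines), namespace)      # "compile" the specialized kernel
    return namespace["kernel"]
```

Because every loop bound is a constant at generation time, the emitted kernel has no loops or index arithmetic left for the compiler (or interpreter) to reason about, which is what lets an ordinary optimizing compiler produce near-hand-tuned code from such templates.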

In this work we also show that techniques used in linear algebra codes can be useful in other fields. We present the work we have done in the optimization of the Nearest Neighbor classification focusing on the speed of the classification process.

Tuning several codes for different problems and machines can become a heavy and unbearable task. For this reason we have developed an environment for development and automatic benchmarking of codes which is presented in this thesis.

As a practical result of this work, we have been able to create efficient codes for several matrix operations on a variety of platforms. Our codes are highly competitive with other state-of-the-art codes for some problems.
13

Kapkar, Rohan Viren. "Modeling and Simulation of Altera Logic Array Block using Quantum-Dot Cellular Automata." University of Toledo / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1304616947.

Full text
14

Finnell, Jordan Grant. "Anthrax, Matrix Biology, and Angiogenesis: Capillary Morphogenesis Gene 2 Mediates Activity and Uptake of Type IV Collagen-Derived Anti-Angiogenic Peptides." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6849.

Full text
Abstract:
Capillary Morphogenesis Gene 2 (CMG2) is a type I transmembrane, integrin-like receptor. It was originally identified as one of several genes upregulated during capillary formation. It was subsequently identified as one of two physiological anthrax toxin receptors: CMG2 serves as a cell-surface receptor for anthrax toxin and mediates entry of the toxin into cells via clathrin-dependent endocytosis. Additionally, loss-of-function mutations in CMG2 cause the genetic disorder hyaline fibromatosis syndrome (HFS), whose core symptom is dysregulation of extracellular matrix (ECM) homeostasis, including excessive accumulation of proteinaceous hyaline material; HFS clearly indicates that CMG2 plays an essential role in ECM homeostasis and repair. Most often, these situational roles have been evaluated as separate intellectual and experimental entities; consequently, whereas details have emerged for each respective role, there has been little attempt to synthesize knowledge from each in order to model a holistic map of CMG2 function and mechanism of action in normal physiology. The work presented in this thesis is an example of such a synthesis. Interactions between CMG2 and type IV collagen (Col IV) were evaluated to better understand this putative interaction and its effect on CMG2 function in angiogenesis. Using an overlapping-library peptide array of the Col IV α1 and α2 chains, it was found that CMG2-binding peptides were enriched within the NC1 domains. This finding was corroborated via another epitope-mapping peptide array, where we found a major epitope for CMG2 binding within the α2 NC1 domain (canstatin). Identification of CMG2 interactions with Col IV NC1 domains (including canstatin) was both surprising and intriguing, as these domains are potent endogenous inhibitors of angiogenesis.
To further evaluate the physiological relevance of interactions with Col IV NC1 domains, a canstatin-derived peptide from the original array was synthesized and used for further studies. This peptide (here known as S16) binds with high affinity (KD = 440 ± 160 nM) to the extracellular, ligand-binding CMG2 vWA domain; specificity was confirmed through competition studies with anthrax toxin PA and through demonstration of divalent-cation-dependent binding. CMG2 was found to be the relevant endothelial receptor for S16. CMG2 in fact mediates endocytic uptake of peptide S16, as demonstrated by flow cytometry and colocalization studies. S16 further inhibits migration of endothelial cells. These findings demonstrate that CMG2 is a functional receptor for Col IV NC1 domain fragments. CMG2 may exert a pro-angiogenic effect through endocytosis and clearance of anti-angiogenic NC1 domain fragments. Additionally, this is the first demonstration of CMG2-mediated uptake of an endogenous matrix fragment, and it suggests a mechanism by which CMG2 regulates ECM and basement membrane homeostasis, thereby establishing a functional connection between the receptor's roles in matrix biology and angiogenesis.
APA, Harvard, Vancouver, ISO, and other styles
15

Faye, Clément. "Le réseau d'interactions de l'endostatine, une matricryptine du collagène XVIII." Thesis, Lyon 1, 2009. http://www.theses.fr/2009LYO10176.

Full text
Abstract:
Endostatin is the C-terminal fragment of collagen XVIII, released into the extracellular matrix by enzymatic cleavage. It is an endogenous inhibitor of angiogenesis and tumor growth. Endostatin inhibits the proliferation and migration of endothelial cells induced by Fibroblast Growth Factor-2 or Vascular Endothelial Growth Factor, and it inhibits the growth of 65 tumor cell types. Endostatin is currently in clinical trials for the treatment of various cancers, but its mechanism of action is still poorly understood. Using surface plasmon resonance (SPR), we characterized the interactions of endostatin with the αvβ3 and α5β1 integrins, which are overexpressed at the surface of activated endothelial cells. We identified the binding site of endostatin on the integrins, proposed a structural model of the complex formed by endostatin and integrin αvβ3, and showed that endostatin cannot bind simultaneously to integrins and to the heparan sulfate chains present at the cell surface. To identify additional endostatin partners, we developed SPR-based protein and glycosaminoglycan arrays able to monitor up to 400 interactions simultaneously. We identified nine endostatin partners (dermatan sulfate, transglutaminase-2, collagens I, IV and VI, the amyloid peptide β1-42, and matricellular proteins including SPARC and thrombospondin-1). We showed that endostatin binds with high affinity (KD ~ 6 nM) to transglutaminase-2, that this interaction requires calcium, and that endostatin is not an acyl-donor substrate of the enzyme. We also showed that the endostatin interaction network is enriched in proteins containing EGF (Epidermal Growth Factor) modules.
This opens new perspectives for identifying further partners, and hence new functions, of endostatin. Proteins containing EGF modules, such as fibrillin-1, a component of elastic fibres, and proteins of innate immunity, for example, are potential partners of endostatin.
Endostatin is the carboxyl-terminal fragment of collagen XVIII, released into the extracellular matrix by proteolytic cleavage. It inhibits angiogenesis and tumor growth. Endostatin inhibits the proliferation and migration of endothelial cells induced by Fibroblast Growth Factor-2 and Vascular Endothelial Growth Factor, and it inhibits the growth of 65 different tumor types. Endostatin is currently under clinical trials for several tumors. We used surface plasmon resonance (SPR) binding assays to characterize interactions between endostatin and the α5β1 and αvβ3 integrins, which are over-expressed at the cell surface of activated endothelial cells. We identified the binding site of endostatin on these integrins, and we built a molecular model of the endostatin/integrin αvβ3 complex. We showed that endostatin cannot bind simultaneously to integrins and to heparan sulfate. In order to identify new partners of endostatin, we developed glycosaminoglycan and protein arrays based on SPR detection. We found nine new partners of endostatin, including glycosaminoglycans (chondroitin and dermatan sulfate), matricellular proteins (thrombospondin-1 and SPARC), collagens (I, IV and VI), the amyloid peptide Aβ(1-42), and transglutaminase-2 (TG-2). We showed that endostatin binds to transglutaminase-2 with high affinity (KD ~ 6 nM) in a calcium-dependent manner. Enzymatic assays indicated that, in contrast to other extracellular matrix proteins, endostatin is not a glutaminyl substrate of TG-2, but would rather be an acyl acceptor. The endostatin network comprises a number of extracellular proteins that contain EGF (Epidermal Growth Factor) domains and are able to bind calcium. Depending on the trigger event, and on the availability of its members in a given tissue at a given time, the endostatin network might be involved either in the control of angiogenesis and tumor growth, or in neurogenesis and neurodegenerative diseases.
APA, Harvard, Vancouver, ISO, and other styles
16

Wan, Chunru. "Systolic algorithms and applications." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/10479.

Full text
Abstract:
Computer performance has improved tremendously since the development of the first all-purpose, all-electronic digital computer in 1946. However, engineers, scientists and researchers keep making more efforts to further improve computer performance to meet the demanding requirements of many applications. There are basically two ways to improve computer performance in terms of computational speed. One way is to use faster devices (VLSI chips). Although faster and faster VLSI components have contributed a great deal to the improvement of computation speed, breakthroughs in increasing the switching speed and circuit densities of VLSI devices will be difficult and costly in future. The other way is to use parallel processing architectures, which employ multiple processors to perform a computation task. When multiple processors work together, an appropriate architecture is very important for achieving the maximum performance in a cost-effective manner. Systolic arrays are ideally qualified for computationally intensive applications with inherent massive parallelism because they capitalize on regular, modular, rhythmic, synchronous, concurrent processes that require intensive, repetitive computation. This thesis can be divided into three parts. The first is an introductory part containing Chap. 1 and Chap. 2. The second part, composed of Chap. 3 and Chap. 4, concerns the systolic design methodology. The third part deals with several systolic array designs for different applications.
APA, Harvard, Vancouver, ISO, and other styles
17

Leclerc, Céline. "Étude et conception de matrices d'alimentation multifaisceaux pour réseaux à rayonnement direct ou dans le plan focal d'un réflecteur." Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/10907/1/leclerc.pdf.

Full text
Abstract:
In this thesis, we are first interested in known passive feed matrices that produce orthogonal beams, and in particular in the Butler matrix. It turns out that only an iterative method exists for determining the S-parameters of a symmetric Butler matrix. We therefore seek an analytical determination of the [S] matrix of a symmetric Butler matrix with N = 2^n inputs and N outputs. Recurrence formulas are established from the study of this type of matrix at several sizes, and analytical formulas valid for any value of N are deduced from them. The study of antenna array feed networks continues with the focal source of a multibeam reflector antenna based on an original geometry made of interleaved three-dimensional directional couplers. This structure is simulated, and its many parameters are optimized to reach a solution meeting the specifications. A prototype is built and tested, and the results obtained are promising. In the end, this source has the advantage of a high level of reuse of its radiating elements, which limits the footprint of the overall system, often a critical issue, notably on satellites. Finally, we consider a structure whose goal is a ground station capable of tracking a target without pointing loss. Tracking in azimuth is provided by a mechanical part, a rotary joint; tracking in elevation by an electronic module. The system must be compact and reliable, and limit losses as well as costs, so trade-offs have to be made. The radiating part is realized in waveguide, the feed part in planar technology.
A solution is proposed to make the transition between these two technologies: the distribution circuit is directly connected to the excitation of the radiating elements by placing these two parts on the same substrate sheet. Thus, fewer cables and connectors are needed, which reduces footprint and costs.
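The orthogonal-beam behaviour of a Butler matrix can be illustrated numerically: an ideal, lossless N = 2^n Butler matrix acts as a spatial discrete Fourier transform, so its transmission matrix is unitary. A minimal sketch under that idealization (the phasing convention below is one common textbook choice, not the analytic S-parameter formulas derived in the thesis):

```python
import numpy as np

def butler_transmission(n):
    """Idealized transmission matrix of a 2^n-port Butler matrix.

    Feeding beam port m produces uniform amplitude and a linear phase
    progression of (2m+1-N)*pi/N per element across the N array ports.
    Exact port phases depend on the hybrid/crossover layout, so this
    DFT-like form is an idealization, not a specific hardware design.
    """
    N = 2 ** n
    k = np.arange(N)
    phases = np.outer(k, (2 * k + 1 - N) * np.pi / N)
    return np.exp(-1j * phases) / np.sqrt(N)

T = butler_transmission(3)  # 8x8 Butler matrix
# lossless and matched: the matrix must be unitary (beams are orthogonal)
assert np.allclose(T.conj().T @ T, np.eye(len(T)))
```

Unitarity here is exactly the orthogonal-beam property: exciting two different beam ports yields aperture distributions whose inner product is zero.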
APA, Harvard, Vancouver, ISO, and other styles
18

Ahmed, Mamun. "Adaptive Sub band GSC Beam forming using Linear Microphone-Array for Noise Reduction/Speech Enhancement." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6174.

Full text
Abstract:
This project presents the description, design and implementation of a 4-channel microphone array: an adaptive sub-band generalized sidelobe canceller (GSC) beamformer used for video conferencing, hands-free telephony, etc., in a noisy environment, for speech enhancement as well as noise suppression. The sidelobe canceller was evaluated with both Least Mean Square (LMS) and Normalized Least Mean Square (NLMS) adaptation. A testing structure is presented, which involves a linear 4-microphone array connected to collect the data. Tests were done using one target signal source and one noise source. From each microphone, data were collected via fractional time-delay filtering, then divided into sub-bands, and the GSC was applied to each of the subsequent sub-bands. The overall Signal-to-Noise Ratio (SNR) improvement is determined from the main signal and noise input and output powers, with signal-only and noise-only as the input to the GSC. The NLMS algorithm significantly improves the speech quality, with noise suppression levels up to 13 dB, while the LMS algorithm gives up to 10 dB. All of the processing for this thesis is implemented on a computer using MATLAB and validated by considering different SNR measures under various types of blocking matrix, different step sizes, different noise locations and variable SNR with noise.
Mamun Ahmed E-mail: mamuncse99cuet@yahoo.com
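The NLMS adaptation at the heart of the GSC's lower branch can be sketched with a single noise reference: the adaptive filter estimates the noise leaking into the primary channel and subtracts it. This is an illustrative toy (one reference channel, hypothetical filter length and step size; the thesis's array geometry, sub-band split and blocking matrix are omitted):

```python
import numpy as np

def nlms_cancel(d, x, order=16, mu=0.1, eps=1e-8):
    """NLMS adaptive interference canceller (GSC-style lower branch).

    d : primary signal (target + noise leakage)
    x : noise reference (in a GSC, a blocking-matrix output)
    The filter estimates the noise component of d from x; the error
    e = d - estimate is the enhanced output.
    """
    w = np.zeros(order)
    buf = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(len(d)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        y = w @ buf                                  # noise estimate
        e[n] = d[n] - y                              # enhanced output
        w = w + mu * e[n] * buf / (eps + buf @ buf)  # normalized update
    return e

rng = np.random.default_rng(0)
N = 8000
noise = rng.standard_normal(N)
target = np.sin(0.1 * np.arange(N))
leak = np.convolve(noise, [0.6, -0.3, 0.1])[:N]  # noise path to primary channel
d = target + leak
e = nlms_cancel(d, noise)
# after convergence, e should be close to the target signal
```

The normalization by the reference power (`buf @ buf`) is what distinguishes NLMS from plain LMS and makes the step size insensitive to the input level.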
APA, Harvard, Vancouver, ISO, and other styles
19

de, Sousa Emma Louise. "The use of novel xenografting methods to reveal differential gene expression between breast cancer at primary and metastatic sites." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:20c957a8-68c7-43f1-b0f6-722ae71dfb5a.

Full text
Abstract:
In developed countries, breast cancer is the commonest malignancy among women. Understanding the mechanisms involved in breast cancer progression, and the influence of the microenvironment on cancer cell proliferation, leads to better treatments. This study aimed to optimise breast cancer xenograft rates using a novel chamber developed for tissue engineering purposes. The established tumours were subjected to enzyme digestion, creating a single cell suspension, which was then injected into immunocompromised mice at primary, metastatic and intra-cardiac sites. The resulting tumours in the mammary fat pad (MFP) and bone were compared using species-specific reverse-transcription polymerase chain reaction (RT-PCR) and cDNA microarray, to examine the influence of the microenvironment on gene expression. The achieved xenograft rates of 25% were similar to those previously reported. The matrix metalloproteinase (MMP) family of enzymes degrades extracellular matrix, influencing invasion and migration of malignant cells. RT-PCR results showed that the majority of the MMPs expressed in the cancers were stromal rather than tumour in origin. MT1-MMP, MMP-2 and MMP-11 had significantly higher expression levels in the MFP than in the bone, but MMP-9 was expressed more in the bone than the MFP. There was also an up-regulation of stromal production of MT1-MMP and MMP-13 in the MFP in the presence of tumour. This may have significance when considering which MMPs are the most appropriate targets for inhibition during cancer treatment. The most significantly differentially expressed genes on microarray analysis were trefoil factor 1 (TFF1) and insulin-like growth factor binding protein 3 (IGFBP-3), both expressed significantly more in tumours from the MFP than the bone.
The thesis presented demonstrates some of the complexities of tumour-stromal interactions and supports Paget’s seed-soil theory, confirming in several ways the variation in gene expression in breast cancer between primary and metastatic sites.
APA, Harvard, Vancouver, ISO, and other styles
20

McPhillips, Kenneth J. "Far field shallow water horizontal wave number estimation given a linear towed array using fast maximum likelihood, matrix pencil, and subspace fitting techniques /." View online ; access limited to URI, 2007. http://0-digitalcommons.uri.edu.helin.uri.edu/dissertations/AAI3276997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

PALMER, LUKE A. "DIVINUS TEMPUS: II. CHRISTMAS." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1091043457.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Miller, William H. "Analog Implementation of DVM and Farrow Filter Based Beamforming Algorithms for Audio Frequencies." University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1531951902410037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Spalt, Taylor Brooke. "Constrained Spectral Conditioning for the Spatial Mapping of Sound." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/70868.

Full text
Abstract:
In aeroacoustic experiments of aircraft models and/or components, arrays of microphones are utilized to spatially isolate distinct sources and mitigate interfering noise which contaminates single-microphone measurements. Array measurements are still biased by interfering noise which is coherent over the spatial array aperture. When interfering noise is accounted for, existing algorithms which aim to both spatially isolate distinct sources and determine their individual levels as measured by the array are complex and require assumptions about the nature of the sound field. This work develops a processing scheme which uses spatially-defined phase constraints to remove correlated, interfering noise at the single-channel level. This is achieved through a merger of Conditioned Spectral Analysis (CSA) and the Generalized Sidelobe Canceller (GSC). A cross-spectral, frequency-domain filter is created using the GSC methodology to edit the CSA formulation. The only constraint needed is the user-defined, relative phase difference between the channel being filtered and the reference channel used for filtering. This process, titled Constrained Spectral Conditioning (CSC), produces single-channel Fourier Transform estimates of signals which satisfy the user-defined phase differences. In a spatial sound field mapping context, CSC produces sub-datasets derived from the original which estimate the signal characteristics from distinct locations in space. Because single-channel Fourier Transforms are produced, CSC's outputs could theoretically be used as inputs to many existing algorithms. As an example, data-independent, frequency-domain beamforming (FDBF) using CSC's outputs is shown to exhibit finer spatial resolution and lower sidelobe levels than FDBF using the original, unmodified dataset. 
However, these improvements decrease with Signal-to-Noise Ratio (SNR), and CSC's quantitative accuracy is dependent upon accurate modeling of the sound propagation and inter-source coherence if multiple and/or distributed sources are measured. In order to demonstrate systematic spatial sound mapping using CSC, it is embedded into the CLEAN algorithm which is then titled CLEAN-CSC. Simulated data analysis indicates that CLEAN-CSC is biased towards the mapping and energy allocation of relatively stronger sources in the field, which limits its ability to identify and estimate the level of relatively weaker sources. It is also shown that CLEAN-CSC underestimates the true integrated levels of sources in the field and exhibits higher-than-true peak source levels, and these effects increase and decrease respectively with increasing frequency. Five independent scaling methods are proposed for correcting the CLEAN-CSC total integrated output levels, each with their own assumptions about the sound field being measured. As the entire output map is scaled, these do not account for relative source level errors that may exist. Results from two airfoil tests conducted in NASA Langley's Quiet Flow Facility show that CLEAN-CSC exhibits less map noise than CLEAN yet more segmented spatial sound distributions and lower integrated source levels. However, using the same source propagation model that CLEAN assumes, the scaled CLEAN-CSC integrated source levels are brought into closer agreement with those obtained with CLEAN.
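For context, the data-independent frequency-domain beamforming (FDBF) that CSC's outputs feed into scans candidate directions by phase-aligning the channel spectra and summing. A minimal far-field, single-frequency sketch (hypothetical line-array geometry, not the Quiet Flow Facility setup, and not the CSC processing itself):

```python
import numpy as np

def fdbf_map(X, freqs, positions, scan_angles, c=343.0):
    """Conventional frequency-domain beamforming for a line array.

    X : (num_mics, num_freqs) channel spectra
    positions : microphone x-coordinates in metres
    scan_angles : candidate arrival angles in radians
    Returns broadband output power per scan angle, assuming far-field
    (plane-wave) propagation at speed c.
    """
    power = np.zeros(len(scan_angles))
    for i, th in enumerate(scan_angles):
        delays = positions * np.sin(th) / c
        # steering phases undo the plane-wave delays at each frequency
        V = np.exp(2j * np.pi * np.outer(delays, freqs))
        y = (V * X).sum(axis=0) / len(positions)
        power[i] = np.sum(np.abs(y) ** 2)
    return power

# simulate a 1 kHz plane wave arriving from 20 degrees on an 8-mic array
mics = np.arange(8) * 0.05
f = np.array([1000.0])
true_angle = np.deg2rad(20.0)
X = np.exp(-2j * np.pi * np.outer(mics * np.sin(true_angle) / 343.0, f))
angles = np.deg2rad(np.linspace(-90.0, 90.0, 181))
p = fdbf_map(X, f, mics, angles)
# the peak of the power map falls at the true arrival angle
```

CSC's contribution, per the abstract, is to pre-filter each channel so that such a map shows finer resolution and lower sidelobes than beamforming the raw spectra.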
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Prokš, Jiří. "Zákaznicky upravitelný modul zadní skupinové svítilny s HD rozlišením." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-318409.

Full text
Abstract:
This thesis deals with the design of an LED matrix array containing 150 LEDs. In the first part, the thesis describes light sources such as OLEDs and LEDs and provides an overview of their lifetime, reliability and the basic principles of designing systems with LEDs. The thesis then describes the design of the LED matrix array, and deals with the power supply of this LED array and with the cooling of the LEDs. Finally, the thesis describes software for control of the LED matrix array.
APA, Harvard, Vancouver, ISO, and other styles
25

Krejčíř, Dominik. "Anténa s řiditelným svazkem." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442413.

Full text
Abstract:
The master's thesis deals with the design of a beam-steering antenna. Methods of beam steering and the final antenna design are described. The antenna operates in the ISM band with a centre frequency of 5.8 GHz and is designed in CST Studio Suite 2020. A Butler matrix implemented as a substrate integrated waveguide was designed as the feed network, and an array of patch antennas was used for radiation.
APA, Harvard, Vancouver, ISO, and other styles
26

Lundström, Tomas. "Matched Field Beamforming applied to Sonar Data." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16338.

Full text
Abstract:

Two methods for evaluating and improving plane wave beamforming have been developed. The methods estimate the shape of the wavefront and use the information in the beamforming. One of the methods uses estimates of the time delays between the sensors to approximate the shape of the wavefront, and the other estimates the wavefront by matching the received wavefront to spherical wavefronts of different radii. The methods are compared to a third, more common method of beamforming, which assumes that the impinging wave is planar. The methods' passive ranging abilities are also evaluated and compared to a reference method based on triangulation. Both methods were evaluated with real and simulated data. The simulated data was obtained using Raylab, a simulation program based on ray tracing. The real data was obtained through a field test performed in the Baltic Sea using a towed array sonar and a stationary source emitting tones. The performance of the matched beamformers depends on the distance to the target. At a distance of 600 m near broadside, the power received by the beamformer increases by 0.5-1 dB compared to the plane wave beamformer. At a distance of 300 m near broadside, the improvement is approximately 2 dB. In general, obtaining an accurate distance estimation proved to be difficult, and highly dependent on the noise present in the environment. A moving target at a distance of 600 m at broadside can be estimated with a maximum error of 150 m when recursive updating of the covariance matrix with an updating constant of 0.25 is used. When recursive updating is not used, the margin of error increases to 400 m.
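The gain from matching the wavefront curvature, as the second method does, can be shown with a toy example: steering with the exact spherical wavefront of a near-field source recovers the full coherent array gain, while a plane-wave assumption loses some of it. All parameters below (array, range, frequency) are illustrative, not the field-test configuration:

```python
import numpy as np

def steering_power(X, f, mics, theta, r=None, c=1500.0):
    """Normalized delay-and-sum output power at a single frequency.

    r=None steers with a plane-wave (far-field) wavefront; a finite r
    steers with a spherical wavefront of that radius, as in the matched
    beamformer.  X holds the per-sensor phasors of the received tone.
    """
    if r is None:
        tau = mics * np.sin(theta) / c
    else:
        # exact travel time from a point source at range r, bearing theta
        src = np.array([r * np.sin(theta), r * np.cos(theta)])
        tau = np.hypot(src[0] - mics, src[1]) / c
        tau -= tau.min()
    w = np.exp(2j * np.pi * f * tau)
    return np.abs((w * X).sum()) ** 2 / len(mics) ** 2

# 32-element line array, 200 Hz tone, source 300 m away at broadside (in water)
mics = (np.arange(32) - 15.5) * 2.0
f, r_true = 200.0, 300.0
d = np.hypot(mics, r_true)
X = np.exp(-2j * np.pi * f * (d - d.min()) / 1500.0)
p_plane = steering_power(X, f, mics, 0.0)
p_sph = steering_power(X, f, mics, 0.0, r=r_true)
# matching the curvature recovers the full (normalized) array gain of 1
assert p_sph > p_plane
```

Sweeping `r` over a grid of candidate radii and picking the maximum of `p_sph` is, in essence, the passive ranging idea evaluated in the thesis.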

APA, Harvard, Vancouver, ISO, and other styles
27

Cardell, Sara D. "Constructions of MDS codes over extension alphabets." Doctoral thesis, Universidad de Alicante, 2012. http://hdl.handle.net/10045/27320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Weiser, Armin. "Amino acid substitutions in protein binding." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2009. http://dx.doi.org/10.18452/15962.

Full text
Abstract:
The modification of protein sequences, among other things through the substitution of amino acids, is a central aspect of evolutionary processes. Such processes do not only take place over large time scales, resulting in the diversity of life that surrounds us, but can also be observed daily. These micro-evolutionary processes form a basis of the immune defence of higher vertebrates and are organized by the humoral immune system. In the course of an immune response, antibodies are repeatedly subjected to diversification by somatic hypermutation. The goals of this work were to gain new insights into the microevolution of antibodies during the immune response and to understand the relationship between amino acid substitutions and changes in affinity. To this end, it was first shown that SPOT synthesis is a precise method for assigning signal intensities to three different binding-affinity classes. Antibody-peptide binding data generated from SPOT synthesis experiments formed the basis for the construction of the substitution matrix AFFI, the first substitution matrix based exclusively on binding-affinity data. This in turn formed the basis for deriving a reduced amino acid set. A theoretical approach showed that the reduced amino acid set constitutes an optimal basis for epitope searching. For the process of somatic hypermutation and selection, a new approach was presented to identify mutations relevant to affinity maturation. The analysis showed that the spectrum of selected mutations is much more extensive than previously assumed. The fact that some silent mutations are also strongly favoured indicates either that intrinsic mutability has been greatly underestimated, or that selection is based not only on the affinity maturation of antibodies but also on their expression rate.
A central task of the evolutionary process is the alteration of amino acid sequences, such as the substitution of one amino acid by another. Not only do these amino acid changes occur gradually over large time scales and result in the variety of life surrounding us, but they also happen daily within an organism. Such alterations take place rapidly for the purposes of defense, which in higher vertebrates, is managed by the humoral immune system. For an effective immune response, antibodies are subjected to a micro-evolutionary process that includes multiple rounds of diversification by somatic hypermutation resulting in increased binding affinity to a particular pathogen. The goal of this work was to provide insights into the microevolution of antibodies during the immune response, including the relationship between amino acid substitutions and binding affinity changes. A preliminary step in this work was to determine the accuracy of the SPOT synthesis technique, which could be shown to be an accurate method for assigning measured signal intensities to three different binding affinity classes. A substitution matrix based on data produced with these binding experiments was constructed and named AFFI. AFFI is the first substitution matrix that is based solely on binding affinity. A theoretical approach has additionally revealed that an AFFI-derived reduced set of amino acids constitutes an optimal basis for epitope searching. For the process of somatic hypermutation and selection, a novel approach to identify mutations relevant to affinity maturation was presented. The analysis revealed that the spectrum of mutations favored by the selection process is much broader than previously thought. The fact that particular silent mutations are strongly favored indicates either that intrinsic mutability has been grossly underestimated, or that selection acts not only on antibody affinity but also on their expression rates.
APA, Harvard, Vancouver, ISO, and other styles
29

Haider, Shahid Abbas. "Systolic arrays for the matrix iterative methods." Thesis, Loughborough University, 1993. https://dspace.lboro.ac.uk/2134/28173.

Full text
Abstract:
The systolic array research was pioneered by H. T. Kung and C. E. Leiserson. Systolic arrays are special-purpose synchronous architectures consisting of simple, regular and modular processors which are regularly interconnected to form an array. Systolic arrays are well suited for computationally bound problems in Linear Algebra. In this thesis, numerical problems, especially iterative algorithms, are chosen and implemented on the linear systolic array.
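As an example of the kind of matrix iterative method that maps naturally onto a linear systolic array, in a Jacobi sweep every component is updated independently from the previous iterate, so one processing element per component can work in lockstep as the matrix rows stream past. A sketch in ordinary sequential code (standing in for the array implementation; the example system is hypothetical):

```python
import numpy as np

def jacobi(A, b, iters=50):
    """Jacobi iteration x <- D^{-1} (b - (A - D) x).

    Each component update uses only the previous iterate, which is what
    makes the method embarrassingly parallel and systolic-friendly.
    Converges when A is strictly diagonally dominant.
    """
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])   # diagonally dominant -> guaranteed convergence
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
# x converges to the solution of A x = b
```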
APA, Harvard, Vancouver, ISO, and other styles
30

Wirfält, Petter. "Exploiting Prior Information in Parametric Estimation Problems for Multi-Channel Signal Processing Applications." Doctoral thesis, KTH, Signalbehandling, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134034.

Full text
Abstract:
This thesis addresses a number of problems, all related to parameter estimation in sensor array processing. The unifying theme is that some of these parameters are known before the measurements are acquired. We thus study how to improve the estimation of the unknown parameters by incorporating the knowledge of the known parameters; exploiting this knowledge successfully has the potential to dramatically improve the accuracy of the estimates. For covariance matrix estimation, we exploit the fact that the true covariance matrix is Kronecker and Toeplitz structured, and we devise a method to ascertain that the estimates possess this structure. Additionally, we can show that our proposed estimator has better performance than the state of the art when the number of samples is low, and that it is also efficient in the sense that the estimates have variance attaining the Cramér-Rao lower bound (CRB). In the direction of arrival (DOA) scenario, there are different types of prior information; first, we study the case when the location of some of the emitters in the scene is known. We then turn to cases with additional prior information, i.e. when it is known that some (or all) of the source signals are uncorrelated. As it turns out, knowledge of some DOAs combined with this latter form of prior knowledge is especially beneficial, giving estimators that are dramatically more accurate than the state of the art. We also derive the corresponding CRBs, and show that under quite mild assumptions the estimators are efficient. Finally, we also investigate the frequency estimation scenario, where the data is a one-dimensional temporal sequence which we model as a spatial multi-sensor response. The line-frequency estimation problem is studied when some of the frequencies are known; through experimental data we show that our approach can be beneficial.
The second frequency estimation paper explores the analysis of pulse spin-locking data sequences, which are encountered in nuclear resonance experiments. By introducing a novel modeling technique for such data, we develop a method for estimating the interesting parameters of the model. The technique is significantly faster than previously available methods, and provides accurate estimation results.
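The benefit of exploiting covariance structure can be illustrated with the simplest structured estimator imaginable: projecting the sample covariance onto the Toeplitz set by averaging its diagonals (the Frobenius-norm projection). This is a deliberately simple stand-in, not the Kronecker-Toeplitz estimator developed in the thesis:

```python
import numpy as np

def toeplitz_project(R):
    """Frobenius-norm projection of a symmetric covariance estimate onto
    the set of symmetric Toeplitz matrices: average each diagonal."""
    n = R.shape[0]
    c = np.array([np.mean(np.diagonal(R, k)) for k in range(n)])
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            T[i, j] = c[abs(i - j)]
    return T

# true Toeplitz covariance of an AR(1)-like process (illustrative)
n, rho = 8, 0.7
R_true = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
rng = np.random.default_rng(1)
L = np.linalg.cholesky(R_true)
X = L @ rng.standard_normal((n, 50))   # 50 snapshots
R_hat = X @ X.T / 50                   # unstructured sample covariance
R_str = toeplitz_project(R_hat)
err_hat = np.linalg.norm(R_hat - R_true)
err_str = np.linalg.norm(R_str - R_true)
# projecting onto a subspace containing the truth never increases the error
```

Because the Toeplitz matrices form a linear subspace containing `R_true`, the projection is guaranteed not to increase the Frobenius error; with few snapshots the reduction is typically substantial, which is the regime where the thesis's estimator shines.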
This doctoral thesis treats parameter estimation problems in multi-channel signal processing. The common premise of these problems is that information about the sought parameters is available before the data are analysed; the idea is to use this knowledge as cleverly as possible to improve the estimates of the unknown parameters. One paper studies covariance matrix estimation when the true covariance matrix is known to have Kronecker and Toeplitz structure. Based on this knowledge, we develop a method that ensures that the estimates also have this structure, and we can show that the proposed estimator performs better than existing methods. We can also show that the estimator's variance attains the Cramér-Rao bound (CRB). We further study different kinds of prior knowledge in the direction-finding scenario: first, the case where the directions to some of the emitters are known, and then the case where something is also known about the covariance of the received signals, namely that some (or all) signals are uncorrelated. It turns out that precisely the combination of prior knowledge of both correlation and direction is particularly valuable, and by exploiting this knowledge properly we can design estimators that are far more accurate than previously possible. We also derive the CRB for cases with this prior knowledge, and we can show that the proposed estimators are efficient. Finally, we also treat frequency estimation. Here the data are a one-dimensional temporal sequence that we model as a spatial multi-channel signal; the advantage of this modelling strategy is that the estimators can use methods similar to those of the sensor signal processing problems. We again exploit prior knowledge of the source signals: in one contribution some frequencies are assumed known, and we modify an existing method to take this knowledge into account.
By applying the proposed method to experimental data we demonstrate its usefulness. The second contribution in this area studies data obtained from, for example, nuclear magnetic resonance experiments. We introduce a new modelling technique for such data and develop an algorithm to estimate the desired parameters of this model. Our algorithm is considerably faster than existing methods, and the estimates are sufficiently accurate for typical applications.

QC 20131115

APA, Harvard, Vancouver, ISO, and other styles
31

Mahmud, Rashad Hassan. "Synthesis of waveguide antenna arrays using the coupling matrix approach." Thesis, University of Birmingham, 2016. http://etheses.bham.ac.uk//id/eprint/6564/.

Full text
Abstract:
With the rapid development of communication systems, improvements in components such as antennas and bandpass filters are continuously required to provide better performance. High gain, wide bandwidth and small size are antenna properties demanded in many modern applications, and achieving them simultaneously is a challenge. This thesis presents a new design approach to address this challenge. The coupling matrix is an approach used to represent circuits made of coupled resonators, such as filters and multiplexers. The approach has been utilised here to integrate a single-resonator-based antenna with an nth-order filter. The integrated component is capable of providing a controllable bandwidth and introduces a filtering functionality. The approach is further developed in order to integrate bandpass filters with N×N resonator-based antenna arrays, to increase the gain of the array as well. Six novel components have been fabricated for the purpose of validation. This thesis also looks at a 300 GHz communication system proposed at the University of Birmingham with the objective of building a 10 metre indoor communication link. A 300 GHz (8×8) waveguide antenna array has been designed and fabricated for the system.
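The coupling-matrix description can be evaluated directly: the S-parameters of an n-coupled-resonator network follow from the standard normalized lowpass-prototype formula. A sketch of that generic textbook formulation (illustrative only; the thesis's filter-antenna synthesis builds on, but is not, this two-port filter case):

```python
import numpy as np

def filter_response(M, R1, Rn, lam):
    """S-parameters of an n-resonator filter from its n x n coupling
    matrix M (normalized lowpass prototype).  Standard formulation:
        A   = lam*I - j*R + M,  R holding the port loadings R1, Rn
        S11 = 1 + 2j*R1*[A^-1]_11
        S21 = -2j*sqrt(R1*Rn)*[A^-1]_n1
    """
    n = M.shape[0]
    R = np.zeros((n, n))
    R[0, 0], R[-1, -1] = R1, Rn
    s11 = np.empty(len(lam), dtype=complex)
    s21 = np.empty(len(lam), dtype=complex)
    for i, lm in enumerate(lam):
        Ainv = np.linalg.inv(lm * np.eye(n) - 1j * R + M)
        s11[i] = 1 + 2j * R1 * Ainv[0, 0]
        s21[i] = -2j * np.sqrt(R1 * Rn) * Ainv[-1, 0]
    return s11, s21

# illustrative 2-pole Butterworth prototype: M12 = R1 = Rn = 1/sqrt(2)
M = np.array([[0.0, 2 ** -0.5], [2 ** -0.5, 0.0]])
lam = np.linspace(-3.0, 3.0, 301)
s11, s21 = filter_response(M, 2 ** -0.5, 2 ** -0.5, lam)
# lossless network: |S11|^2 + |S21|^2 = 1 at every normalized frequency
assert np.allclose(np.abs(s11) ** 2 + np.abs(s21) ** 2, 1.0)
```

Replacing the last resonator's port loading with a resonator-antenna's radiation loading is, conceptually, how the coupling-matrix approach extends from filters to the integrated filter-antennas of the thesis.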
APA, Harvard, Vancouver, ISO, and other styles
32

Osman, Ahmad. "Automated evaluation of three dimensional ultrasonic datasets." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00995119.

Full text
Abstract:
Non-destructive testing has become necessary to ensure the quality of materials and components either in service or at the production stage. This requires the use of a rapid, robust and reliable testing technique. As a main testing technique, ultrasound technology has a unique ability to assess discontinuity location, size and shape. Such information plays a vital role in the acceptance criteria, which are based on the safety and quality requirements of manufactured components. Consequently, the ultrasound technique is used extensively, especially in the inspection of large-scale composites manufactured in the aerospace industry. Significant technical advances have contributed to optimizing ultrasound acquisition techniques such as the sampling phased array technique. However, acquisition systems need to be complemented with an automated data analysis procedure to avoid the time-consuming manual interpretation of all produced data. Such a complement would accelerate the inspection process and improve its reliability. The objective of this thesis is to propose an analysis chain dedicated to automatically processing the 3D ultrasound volumes obtained using the sampling phased array technique. First, a detailed study of the speckle noise affecting the ultrasound data was conducted, as speckle reduces the quality of ultrasound data. Afterwards, an analysis chain was developed, composed of a segmentation procedure followed by a classification procedure. The proposed segmentation methodology is adapted to 3D ultrasound data and aims to detect all potential defects inside the input volume. While the detection of defects is vital, one main difficulty is the high number of false alarms detected by the segmentation procedure. Correctly distinguishing false alarms is necessary to reduce the rejection ratio of safe parts. This has to be done without risking missing true defects.
Therefore, there is a need for a powerful classifier which can efficiently distinguish true defects from false alarms. This is achieved using a specific classification approach based on data fusion theory. The chain was tested on several volumetric ultrasound measurements of Carbon Fiber Reinforced Polymer components. The experimental results revealed high accuracy and reliability in detecting, characterizing and classifying defects.
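Speckle suppression of the kind this abstract describes is often approximated with simple local filters; the sketch below uses a plain 3×3 median filter, a standard baseline technique and not the specific method developed in the thesis:

```python
import numpy as np

def median_filter3(image: np.ndarray) -> np.ndarray:
    # 3x3 median filter: a common baseline for reducing the
    # speckle-like impulsive noise found in ultrasound images.
    padded = np.pad(image, 1, mode='edge')
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

img = np.full((8, 8), 100.0)
img[4, 4] = 255.0          # isolated speckle-like outlier
den = median_filter3(img)
assert den[4, 4] == 100.0  # outlier replaced by the local median
```

In a 3D ultrasound volume such as the one the thesis processes, the same idea would be applied slice-wise or with a cubic neighbourhood.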
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Peng. "Joint Estimation and Calibration for Motion Sensor." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286839.

Full text
Abstract:
In this thesis, a method for calibrating the position of each accelerometer in an inertial measurement unit (IMU) sensor array is designed and implemented. In order to model the motion of the sensor array in the real world, we build a state-space model; the problem is then to estimate the parameters of this model. The problem is solved within the Maximum Likelihood (ML) framework, and two methods are implemented and analyzed: one based on Expectation Maximization (EM), the other optimizing the cost function directly using Gradient Descent (GD). In the EM algorithm, an ill-conditioned problem arises in the M step, which degrades the performance of the algorithm especially when the initial error is small; in this case the final Mean Square Error (MSE) curve diverges. With enough data samples, the EM algorithm works well when the initial error is large. In the Gradient Descent method, a reformulation of the problem avoids the ill-conditioning. After the parameter estimation, we analyze the MSE curves of the parameters through Monte Carlo simulation. The final MSE curves show that the Gradient Descent based method is more robust in handling the numerical issues of parameter estimation, and the simulation results show it is also robust to the noise level.
In this report, a calibration method is developed and implemented to estimate the positions of a group of accelerometers placed in a so-called IMU sensor array. To describe the motion of the whole sensor array, a dynamic state-space model is derived. The problem is then to estimate the parameters of the state-space model. This is solved using the Maximum Likelihood (ML) method, for which two algorithms are implemented and analyzed. One is based on Expectation Maximization (EM), and in the other the cost function is optimized directly by gradient descent. In the EM algorithm, an ill-conditioned subproblem arises in the M step, which degrades the algorithm's performance, especially when the initial error is small; the resulting MSE curve diverges in this case. In contrast, the EM algorithm works well when the number of data samples is sufficient and the initial error is larger. In the gradient descent method, the conditioning problems are avoided by means of a reformulation. Finally, the mean square error (MSE) of the parameter estimates is analyzed using Monte Carlo simulation. The resulting MSE curves show that the gradient descent method is more robust against numerical problems, especially when the initial error is small. The simulations also show that gradient descent is robust to noise.
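The second estimation strategy compared in this abstract, direct gradient descent on the ML cost, can be illustrated with a toy example; the scalar Gaussian likelihood below is a hypothetical stand-in, not the thesis's state-space model:

```python
import numpy as np

def gradient(theta, data, sigma=1.0):
    # Gradient of the negative log-likelihood of i.i.d. Gaussian data
    # with unknown mean theta: d/dtheta [ sum((x - theta)^2) / (2 sigma^2) ].
    return -np.sum(data - theta) / sigma**2

def gradient_descent(data, theta0=0.0, lr=0.001, iters=500):
    # Minimize the ML cost by plain gradient descent.
    theta = theta0
    for _ in range(iters):
        theta -= lr * gradient(theta, data)
    return theta

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=200)
est = gradient_descent(data)
# For this model the ML estimate is the sample mean.
assert abs(est - data.mean()) < 1e-6
```

For this toy model the iteration converges to the sample mean; in the thesis the cost is the likelihood of a full state-space model, where the reformulation mentioned above is what keeps the problem well conditioned.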
APA, Harvard, Vancouver, ISO, and other styles
34

Bárta, Jakub. "Implementace tvarování anténních příjmových svazků radaru v FPGA." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400718.

Full text
Abstract:
At the beginning of this thesis, radar theory and the classification of radar systems are explained. The next part introduces antenna arrays, their parameters and capabilities. The main part contains the design of a digital beamformer on a Cyclone V FPGA and its validation.
APA, Harvard, Vancouver, ISO, and other styles
35

Tekkouk, Karim. "Développement d'antennes multi-faisceaux multicouches de la bande Ku à la bande V." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S165.

Full text
Abstract:
This thesis deals with the design of multi-beam antennas. These allow several beams to share the same radiating aperture and offer the possibility of simultaneously achieving high gain and wide angular coverage. Such antennas rely on beam-forming networks, which can be grouped into two categories: quasi-optical beam formers and circuit-type beam formers. Several antenna structures based on these types of beam-forming networks are proposed in this thesis: for the quasi-optical case, simple pillbox structures integrating the two variants of the monopulse technique to increase the angular resolution of the antenna, and two-layer and multilayer Rotman lenses; for the circuit case, phased arrays for SATCOM applications (ANR project) and a Butler matrix with a side-lobe level control circuit. The various concepts were studied in different frequency bands: Ku, K and V. Mainly for cost reasons, two technologies were adopted. The first is Substrate Integrated Waveguide (SIW) technology, which combines the advantages of printed circuit technology with those of waveguide technology; particular effort was devoted to the implementation of multilayer structures, since at this stage we reach the limit of national industrial know-how in this field. The second is the diffusion bonding technique developed at the Ando and Hirokawa laboratory of the Tokyo Institute of Technology (TIT), which consists of bonding thin metal layers under high temperature and high pressure. This technique enables the development of hollow-waveguide antennas with efficiencies above 80% in the millimeter-wave band.
This PhD thesis deals with the design of multi-beam antennas. A single radiating aperture is used to generate several beams with high gain and a large field of view. The multi-beam operation is achieved by using two topologies of Beam Forming Networks (BFN): quasi-optical BFNs and circuit-based BFNs. For each category, several solutions have been proposed and validated experimentally. In particular, for the quasi-optical configurations, pillbox structures, monopulse antennas in pillbox technology, and multilayer Rotman lenses have been considered. On the other hand, for circuit-based multi-beam antennas, two solutions have been analyzed: a phased array for SATCOM applications in the framework of a national ANR project, and a Butler matrix with controlled side-lobe levels for the radiated beams within a collaboration with the Tokyo Institute of Technology, Japan. The proposed concepts and antenna solutions have been considered in different frequency bands: Ku, K and V. Two technologies have been mainly adopted for the fabrication of the various prototypes: Substrate Integrated Waveguide (SIW) technology, which combines the cost advantages of the printed circuit board (PCB) fabrication process with the efficiency of classical waveguide technology; considerable efforts have been devoted to the implementation of multilayer SIW structures to go beyond the current state of the art of the PCB fabrication process at national level. The second is the diffusion bonding technique, developed at the Ando and Hirokawa lab at the Tokyo Institute of Technology, which consists of bonding laminated thin metal plates under high temperature and high pressure. This technique allows the fabrication of planar hollow waveguide structures with efficiencies up to 80% in the millimeter-wave band.
APA, Harvard, Vancouver, ISO, and other styles
36

Pesik, Lisa Josephine. "Practical investigation of Butler matrix application for beamforming with circular antenna arrays." Thesis, University of Bristol, 2007. http://hdl.handle.net/1983/650cf9d2-e075-4efb-b03d-6b5fa0b5dd67.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Kinder, Erich W. "Fabrication of All-Inorganic Optoelectronic Devices Using Matrix Encapsulation of Nanocrystal Arrays." Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1339719904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Statzer, Eric L. "Matrix Pencil Method for Direction of Arrival Estimation with Uniform Circular Arrays." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1313427307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Davall, Rosemarie Anne Regina. "The application of algorithm-based fault tolerance to VLSI processor arrays." Thesis, University of Newcastle Upon Tyne, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Combernoux, Alice. "Détection et filtrage rang faible pour le traitement d'antenne utilisant la théorie des matrices aléatoires en grandes dimensions." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC016/document.

Full text
Abstract:
Starting from the observation that in more and more applications the size of the data to be processed is growing, it seems appropriate to use suitable tools such as random matrix theory in the large-dimensional regime. More specifically, in the STAP and MIMO-STAP array processing and radar applications, we are interested in processing a signal of interest corrupted by additive noise composed of a so-called low-rank part and white Gaussian noise. The purpose of this thesis is therefore to study, in the large-dimensional regime, low-rank detection and filtering (functions of projectors) for array processing using random matrix theory. The thesis makes three main contributions in the context of the asymptotic analysis of projector functionals. First, the large-dimensional regime makes it possible to determine an approximation/prediction of the non-asymptotic theoretical performance that is more accurate than what currently exists in the classical asymptotic regime (where the number of estimation samples tends to infinity at fixed data size). Second, two new low-rank adaptive filters and two new low-rank adaptive detectors are proposed, and it is shown that they perform better as a function of the system parameters in terms of SNR loss, false-alarm probability and detection probability. Finally, the results are validated on a jamming application and then applied to STAP and sparse MIMO-STAP radar processing. The study highlights a notable difference from the jamming application, related to the covariance matrix models treated in this thesis.
Nowadays, more and more applications deal with increasing dimensions. Thus, it seems relevant to exploit appropriate tools such as random matrix theory in the large-dimensional regime. More particularly, in specific array processing applications such as STAP and MIMO-STAP radar, we are interested in the treatment of a signal of interest corrupted by additive noise composed of a low-rank noise and white Gaussian noise. Therefore, the aim of this thesis is to study low-rank filtering and detection (functions of projectors) in the large-dimensional regime for array processing with random matrix theory tools. This thesis has three main contributions in the context of the asymptotic analysis of projector functionals. First, the large-dimensional regime allows an approximation/prediction of theoretical non-asymptotic performance to be determined, much more precise than the literature in the classical asymptotic regime (when the number of estimation samples tends to infinity at fixed dimension). Second, two new low-rank adaptive filters and detectors have been proposed, and it has been shown that they have better performance as a function of the system parameters in terms of SINR loss, false-alarm probability and detection probability. Finally, the results have been validated on a jamming application and then applied to STAP and sparse MIMO-STAP processing. The study highlighted a noticeable difference with the jamming application, related to the covariance matrix models considered in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
41

Chiniard, Renaud. "Contribution à la modélisation de la surface équivalente radar des grandes antennes réseaux par une approche multi domaine/Floquet." Toulouse 3, 2007. http://www.theses.fr/2007TOU30295.

Full text
Abstract:
This study proposes to treat large array antennas with an original approach that makes it possible to reach problems whose large dimensions put them beyond classical, rigorous methods. The first part is devoted to describing the methods employed. The first is the multi-domain method, which splits the initial problem into subdomains. Each domain is then computed with the most appropriate numerical method (integral equations, finite elements, ...) in order to obtain a condensed operator (S-matrix) for each subdomain independently. The great strength of this method lies in its modular approach, which makes it very efficient for parametric studies through the reuse of the S-matrices of unchanged volumes. We then present the Floquet-mode expansion, applicable to planar, infinite and periodic structures, which reduces the problem at hand to the size of the elementary cell. In the second part, we present the hybridization of the two methods introduced above. This coupling of methods is accompanied by a canonical validation case (an array of rectangular waveguides) which, in addition to validating our approach, allowed physical indicators to be established. In the next chapter we add two further degrees of modularity to treat more realistic problems of arrays on a supporting structure. The last chapter confronts our code with measurements made on a real array-antenna mock-up, allowing the potential of the presented method to be gauged. At the end of this study, a tool has been developed that computes the radar cross section of large array antennas inserted in their supports and under various impedance conditions at the ports.
This study proposes to treat large array antennas with an original approach, making it possible to address problems whose large dimensions defeat traditional rigorous methods. The first part is devoted to the description of the methods employed. The first is the multi-domain method, which splits the initial problem into subdomains. Each domain is then computed using the most suitable numerical method (integral equations, finite elements, ...), yielding a condensed operator (S-matrix) for each subdomain in an uncoupled way. Once the subdomains are assembled together with the feeding, the problem can be solved. The main advantage lies in the modular approach, which makes the method very efficient for parametric studies through the re-use of the S-matrices of unchanged volumes. We further present the Floquet-mode expansion, under the planar, infinite and periodic hypothesis, which makes it possible to reduce the problem to the size of the elementary cell. In the second part, the hybridization of the two methods previously introduced is shown. It is discussed on a rectangular-waveguide array case which, in addition to validating our approach, allows physical indicators to be established. To enhance our tool, in the next chapter we couple the array to its mechanical support and propose an efficient model of this realistic global antenna. The final chapter confronts our code with measurements of a real array antenna, allowing all the capabilities of the method developed during this study to be assessed. At the end of this study, a software tool has been developed that computes the radar cross section of large antenna arrays inserted in their supports and for various feeding impedance conditions.
APA, Harvard, Vancouver, ISO, and other styles
42

Keung, Chi Wing. "Matrix-addressable III-nitride light emitting diode arrays on silicon substrates by flip-chip technology /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECE%202007%20KEUNG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Nemchinov, Alexander. "Using Colloidal Nanocrystal Matrix Encapsulation Technique for the Development of Novel Infrared Light Emitting Arrays." Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1339806993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Moroz, Pavel. "A Novel Approach for the Fabrication of All-Inorganic Nanocrystal Solids: Semiconductor Matrix Encapsulated Nanocrystal Arrays." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1435324105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lobato, Daniel Corrêa. "Proposta de um ambiente de simulação e aprendizado inteligente para RAID." Universidade de São Paulo, 2000. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-26072001-181852/.

Full text
Abstract:
The overall performance of a computing system is generally limited by its lowest-performing component. Processors and main memory have seen far greater performance gains than secondary memory such as magnetic disks. In 1984, Johnson introduced the concept of striping, in which data are written across an array of disks so that their fragments can be retrieved in parallel and therefore faster. The main problem with striping is the reduced reliability of the array: the failure of a single disk makes the data inaccessible. In 1988, Patterson, Gibson and Katz proposed five ways of storing redundant information in the disk array and thereby increasing its reliability. These schemes were named RAID (Redundant Arrays of Independent Disks). Over time, other redundancy schemes were created, making the taxonomy of the area complex. Moreover, changes to array parameters imply performance variations that are not always easy to perceive at first sight. With the aim of making the taxonomy easier to understand and of allowing experiments on the array in search of better performance, this dissertation proposes a simulation and learning environment for RAID, in which the user can interact with several RAID models, or even create his or her own, to evaluate their performance in various situations; the environment also gives the user access to the knowledge of the area, acting as a tutor. The dissertation further presents a prototype of a magnetic-disk simulator that can be used as the basis for developing a RAID simulator to be used by the environment.
The component with the worst performance usually limits the overall performance of a computing system. The performance of processors and main memory has improved faster than that of secondary memory, such as magnetic disks. Johnson, in 1984, introduced the concept of striping, in which a data file is written across a disk array so that its stripes can be recovered in parallel and therefore faster. The main problem with striping is the reduction in reliability: if one disk fails, the entire data file becomes inaccessible. Patterson, Gibson and Katz proposed, in 1988, five ways to store redundant information in the array, increasing reliability and comprising the main RAID (Redundant Array of Independent Disks) configurations. Other ways to store the redundant information have been proposed over the years, making the RAID taxonomy more complex. Furthermore, changes in the array parameters lead to performance variations that are not always easy to understand. With the purpose of facilitating the comprehension of the taxonomy and allowing the execution of experiments aimed at improving performance, this MSc Dissertation proposes an Intelligent Simulation and Learning Environment for RAID, where the user can interact with several RAID models, or even create his/her own models, in order to evaluate their performance under different situations. The environment also allows the user to interact with the knowledge of the area, acting as a tutor. This Dissertation also presents a prototype of a magnetic disk simulator that can be used as the kernel for the development of a RAID simulator to be used by the environment.
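The striping-with-redundancy idea from Patterson, Gibson and Katz that this abstract summarises can be sketched in a few lines; the toy RAID-5-style example below (the layout and names are illustrative, not the environment described in the dissertation) stripes bytes across disks and rebuilds a lost disk from XOR parity:

```python
from functools import reduce

def stripe_with_parity(data: bytes, n_disks: int):
    # Distribute data bytes round-robin over n_disks-1 data disks,
    # storing the XOR parity of each stripe on the last disk.
    n_data = n_disks - 1
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), n_data):
        stripe = data[i:i + n_data].ljust(n_data, b'\x00')
        for d in range(n_data):
            disks[d].append(stripe[d])
        disks[-1].append(reduce(lambda a, b: a ^ b, stripe))
    return disks

def recover(disks, failed: int):
    # Rebuild the failed disk by XOR-ing the surviving disks byte-wise.
    survivors = [d for i, d in enumerate(disks) if i != failed]
    return bytearray(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

disks = stripe_with_parity(b"RAID example payload", 4)
lost = bytes(disks[1])
rebuilt = recover(disks, failed=1)
assert bytes(rebuilt) == lost
```

Real RAID-5 additionally rotates the parity stripe across the disks to avoid a parity-disk bottleneck; the recovery principle is the same.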
APA, Harvard, Vancouver, ISO, and other styles
46

Acquaro, Marcela Conde. "Regra-matriz de incidência tributária do imposto territorial rural." Pontifícia Universidade Católica de São Paulo, 2010. https://tede2.pucsp.br/handle/handle/9107.

Full text
Abstract:
This work undertakes a study of the legal rules in force in our legal system governing the rural land tax (ITR): its institution, regulation and collection, among other aspects. At the outset, we establish fundamental premises and concepts before turning to the topic itself. We chose to study the ITR through the rule-matrix of tax incidence, analysing all the elements it contains: in the antecedent of the norm, the material, temporal and spatial criteria; in the consequent, the personal and quantitative criteria. In treating the material criterion, we seek to delimit the scope of property taxation. Under the temporal criterion, we study the moments at which the legal fact occurs, and, concluding the antecedent criteria, we examine the spatial criterion and its various conflicts: what is to be understood by urban and rural zones, and the parameters for levying the urban property tax (IPTU) or the ITR, among other questions. As for the consequent of the norm, under the personal criterion we identify the subjects of the relation, determining who may appear as the passive and active subjects of the tax obligation. Under the quantitative criterion, we conduct an extensive study of current debates on the exclusions from the tax base permitted by legislation, including the filing of the ADA (Environmental Declaratory Act) and the areas flooded by hydroelectric plants, among others. Finally, we present our conclusions on the doctrinal and jurisprudential debates.
The present work carries out a study of the legal rules in force in our legal system that deal with the rural land tax: its institution, regulation and collection, among other aspects. At the beginning of the research we establish fundamental premises and concepts, and afterwards dedicate ourselves to the topic itself. We decided to study the ITR by means of the rule-matrix of tax incidence, analysing all the elements contained in it: in the antecedent of the norm, the material, temporal and spatial criteria; in the consequent, the personal and quantitative criteria. In dealing with the material criterion, we aim to delimit the scope of property taxation. Under the temporal criterion we study the moments of occurrence of the legal fact and, finally with respect to the antecedent criteria, we examine the spatial criterion and its several conflicts: what should be understood by urban zone and rural zone, and the parameters for taxation under the IPTU or the ITR, among other questions. Regarding the consequent of the norm, under the personal criterion we establish the subjects of the relation, verifying who may figure as the passive and active subjects of the tax obligation. Under the quantitative criterion, we carried out an extensive study in order to enter current discussions on the exclusions from the tax base permitted by legislation, discussions that cover the filing of the ADA (Environmental Declaratory Act) and the areas flooded by hydroelectric plants, among others. Finally, we present our conclusions on the doctrinal and jurisprudential discussions.
APA, Harvard, Vancouver, ISO, and other styles
47

Tas, Idir. "Traitement d'antenne passif : détection et identification de sources." Grenoble INPG, 1987. http://www.theses.fr/1987INPG0076.

Full text
Abstract:
Array antenna processing is used to detect and localize radiating sources. Presented here are the various array processing methods based on the information contained in the spectral matrix of the signals received at the different sensors. These methods fall into two categories: global solutions and decoupled solutions. A long-range experiment in underwater acoustics is reported.
APA, Harvard, Vancouver, ISO, and other styles
48

Aktas, Metin. "Online Calibration Of Sensor Arrays Using Higher Order Statistics." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614054/index.pdf.

Full text
Abstract:
Higher Order Statistics (HOS) and Second Order Statistics (SOS) approaches have certain advantages and disadvantages in signal processing applications. The HOS approach provides more statistical information for non-Gaussian signals. On the other hand, the SOS approach is more robust to estimation errors, especially when the number of observations is small. In this thesis, the HOS and SOS approaches are used jointly in order to take advantage of both methods. In this respect, their joint use is introduced for the online calibration of sensor arrays with arbitrary geometries. Three different problems in online array calibration are considered and new algorithms for each of them are proposed. In the first problem, the positions of the randomly deployed sensors are completely unknown except for two reference sensors, and the HOS and SOS approaches are used iteratively for joint Direction of Arrival (DOA) and sensor position estimation. The iterative HOS-SOS algorithm (IHOSS) solves the ambiguity problem in sensor position estimation by observing the source signals at at least two different frequencies, and hence it is applicable to wideband signals. The conditions on these frequencies are presented. IHOSS is the first algorithm in the literature which finds the DOA and sensor position estimates in the case of randomly deployed sensors with unknown coordinates. In the second problem, narrowband signals are considered and the nominal sensor positions are assumed to be known. The Modified IHOSS (MIHOSS) algorithm uses the nominal sensor positions to solve the ambiguity problem in sensor position estimation. This algorithm can handle both small and large errors in sensor positions. The upper bound on perturbations for unambiguous sensor position estimation is presented.
In the last problem, an online array calibration method is proposed for sensor arrays in which the sensors have unknown gain/phase mismatches and mutual coupling coefficients. In this case, sensor positions are assumed to be known. The mutual coupling matrix is unstructured. The two reference sensors are assumed to be perfectly calibrated. The IHOSS algorithm is adapted for online calibration and parameter estimation, yielding the CIHOSS algorithm. While CIHOSS originates from IHOSS, it is fundamentally different in many aspects. CIHOSS uses multiple virtual ESPRIT structures and employs an alignment technique to order the elements of the rows of the actual array steering matrix. In this thesis, a new cumulant matrix estimation technique is proposed for the HOS approach by converting the multi-source problem into a single-source one. The proposed algorithms perform well even in the case of correlated source signals, owing to the effectiveness of the proposed cumulant matrix estimate. The iterative procedure in all the proposed algorithms is guaranteed to converge. Closed-form expressions are derived for the deterministic Cramér-Rao bound (CRB) for the DOA and unknown calibration parameters under non-circular complex Gaussian noise with unknown covariance matrix. Simulation results show that the performance of the proposed methods approaches the CRB for both DOA and unknown calibration parameter estimation at high SNR.
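The HOS machinery this abstract relies on is built from cumulants; the sketch below computes the fourth-order (kurtosis) cumulant of a scalar signal, which vanishes for Gaussian data — a generic illustration of why HOS methods can suppress Gaussian noise, not the cumulant-matrix estimator proposed in the thesis:

```python
import numpy as np

def fourth_order_cumulant(x: np.ndarray) -> float:
    # Sample fourth-order cumulant of a zero-mean real signal:
    # cum4(x) = E[x^4] - 3 (E[x^2])^2.
    # It is zero for Gaussian data, so HOS-based estimators built on
    # cumulants are (asymptotically) blind to additive Gaussian noise.
    x = x - x.mean()
    return float(np.mean(x**4) - 3 * np.mean(x**2) ** 2)

rng = np.random.default_rng(1)
gauss = rng.normal(size=200_000)
laplace = rng.laplace(size=200_000)   # heavy-tailed, non-Gaussian
assert abs(fourth_order_cumulant(gauss)) < 0.1   # ~0 for Gaussian
assert fourth_order_cumulant(laplace) > 1.0      # clearly non-zero
```

Array algorithms such as IHOSS use matrices of fourth-order cross-cumulants between sensor outputs rather than this scalar quantity, but the Gaussian-suppression property is the same.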
APA, Harvard, Vancouver, ISO, and other styles
49

Hütten, Moritz. "Prospects for Galactic dark matter searches with the Cherenkov Telescope Array (CTA)." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17766.

Full text
Abstract:
This thesis describes a semi-analytical approach to modelling the dark matter (DM) density distribution in the Galactic halo. From the various substructure models, the γ-ray intensity reaching Earth is computed. A range of plausible γ-ray intensities from the annihilation of Galactic DM is proposed, encompassing the predictions of several earlier studies, and the average masses, distances and extended emission profiles of the γ-ray-brightest DM clumps are calculated. Finally, the DM models are used for a comprehensive calculation of the detectability of Galactic substructures with CTA. The instrumental sensitivity for detecting the γ-ray-brightest DM substructure is computed for the large sky survey outside the Galactic plane planned with CTA. The calculation is carried out with CTA analysis software and a likelihood-based method. An alternative, likewise likelihood-based analysis method is developed with which DM substructures can be detected as spatial anisotropies in the multipole spectrum of a sky-survey data set. The analyses show that a sky survey with CTA and a subsequent search for γ-rays from DM substructures can exclude annihilation cross sections of the order of ⟨σv⟩ > 1 × 10−24 cm3 s−1 for a DM particle mass of mχ ∼ 500 GeV at the 95% confidence level. This sensitivity is comparable to long-term observations of individual dwarf galaxies with CTA. A model-independent analysis shows that a sky survey with CTA can detect anisotropies in the diffuse γ-ray background above 100 GeV for relative fluctuations of CPF > 10−2.
In the current understanding of structure formation in the Universe, the Milky Way is embedded in a clumpy halo of dark matter (DM). Regions of high DM density are expected to emit enhanced γ-radiation from relic DM annihilation. This γ-radiation can possibly be detected by γ-ray observatories on Earth, like the forthcoming Cherenkov Telescope Array (CTA). This dissertation presents a semi-analytical density modeling of the subclustered Milky Way DM halo, and the γ-ray intensity at Earth from DM annihilation in Galactic subclumps is calculated for various substructure models. It is shown that the modeling approach is able to reproduce the γ-ray intensities obtained from extensive dynamical DM simulations, and that it is consistent with the DM properties derived from optical observations of dwarf spheroidal galaxies. A systematic confidence margin of plausible γ-ray intensities from Galactic DM annihilation is estimated, encompassing a variety of previous findings. The average distances, masses, and extended emission profiles of the γ-ray-brightest DM clumps are calculated. The DM substructure models are then used to draw reliable predictions for detecting Galactic DM density clumps with CTA, using the most recent benchmark calculations for the performance of the instrument. A likelihood-based calculation with CTA analysis software is applied to find the instrumental sensitivity to detect the γ-ray-brightest DM clump in the projected CTA extragalactic survey. An alternative likelihood-based analysis method is developed to detect DM substructures as anisotropies in the angular power spectrum of the extragalactic survey data. The analyses predict that the CTA extragalactic survey will be able to probe annihilation cross sections of ⟨σv⟩ > 1 × 10⁻²⁴ cm³ s⁻¹ at the 95% confidence level for a DM particle mass of mχ ∼ 500 GeV from DM annihilation in substructures. This sensitivity is comparable to long-term observations of single dwarf spheroidal galaxies with CTA. Independent of a particular source model, it is found that the CTA extragalactic survey will be able to detect anisotropies in the diffuse γ-ray background above 100 GeV at a relative amplitude of C_P^F > 10⁻².
APA, Harvard, Vancouver, ISO, and other styles
50

Amba, Prakhar. "Learning methods for digital imaging." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS011/document.

Full text
Abstract:
To produce color images we must obtain information about the three primary colors (usually red, green, and blue) at every pixel of the image. To capture this information, most digital cameras use a Color Filter Array (CFA): a mosaic of color filters covers the sensor so that only one color is measured at each position in the image. This sampling scheme is similar to the Human Visual System (HVS), in which the LMS cones (sensitive to long, medium, and short wavelengths) likewise form a mosaic on the surface of the retina. In the visual system the arrangement is random and varies between individuals, whereas cameras use regular arrangements. In cameras, the missing colors must be interpolated to recover a fully resolved color image, a process called demosaicing. Because of the regular or periodic arrangement of the color filters, the demosaiced image can exhibit false colors or artifacts. The demosaicing algorithms in the literature address mainly regular mosaics. In this thesis, we propose a demosaicing algorithm based on statistical learning that can be used with any regular or random mosaic. Moreover, we optimize the arrangement of the colors in the mosaic and propose mosaics that, with our method, outperform the best methods applied to regular mosaics. Images demosaiced from these mosaics show no false colors or artifacts. We extended the algorithm so that it is not limited to three colors but can be used with a random arrangement of any number of spectral filters. 
Having more than three colors not only allows a better representation of images but also makes it possible to measure spectral signatures of the scene. These mosaics are called Spectral Filter Arrays (SFAs). Recent technologies offer great flexibility in defining the spectral filters, and through demosaicing we can obtain more accurate colors and an estimate of the spectral radiance of the scene. The silicon substrate in which the sensor photodiodes are fabricated is sensitive to near-infrared radiation, so visible and near-infrared filters can be combined in the same mosaic. This combination is particularly useful for the new challenge facing digital cameras of obtaining color images for night vision in low light. We demonstrate the application of our algorithm to several recent cameras equipped with spectral filter arrays. We show that our method outperforms current algorithms in both image quality and computation speed. We also propose to optimize the filter transmissions and their arrangement to improve results according to chosen metrics or applications. The method, based on minimizing the mean square error, is linear and therefore fast and suitable for real-time use. Finally, to challenge the linear nature of our algorithm, we propose a second demosaicing algorithm based on neural networks, which performs slightly better but at a higher computational cost.
To produce color images we need information about three primary colors (notably red, green, and blue) at each pixel. To capture this information most digital cameras utilize a Color Filter Array (CFA), i.e. a mosaic arrangement of these colors is overlaid on the sensor such that only one color is sampled at each pixel. This arrangement is similar to the Human Visual System (HVS), wherein a mosaic of LMS cones (sensitive to long, medium, and short wavelengths) forms the surface of the retina. For the HVS the arrangement is random and differs between individuals, whereas cameras use a regular arrangement of color filters. The missing colors must then be interpolated to recover the full color image, a process known as demosaicing. Due to the regular or periodic arrangement of color filters, the demosaiced image is susceptible to false colors and artifacts. The demosaicing algorithms proposed in the literature so far cater mainly to regular CFAs. In this thesis, we propose a demosaicing algorithm that can demosaic any random or regular CFA by learning the statistics of an image database. Further, we optimize and propose CFAs that outperform even the state-of-the-art algorithms on regular CFAs, while the images demosaiced from the proposed CFAs are free from false colors and artifacts. We extend our algorithm so that it is not limited to three colors but can be used for any random arrangement of any number of spectral filters. Having more than three colors allows us not only to record an image but to record a spectral signature of the scene. These mosaics are known as Spectral Filter Arrays (SFAs). Recent technological advances give us greater flexibility in designing the spectral filters, and by demosaicing them we can obtain more accurate colors and also estimate the spectral radiance of the scene. 
Silicon is inherently sensitive to near-infrared radiation, and therefore both visible and NIR filters can be combined on the same mosaic. This is useful for low-light night-vision cameras, a new challenge in digital imaging. We demonstrate the applicability of our algorithm on several state-of-the-art cameras using these novel SFAs, and show that our method outperforms the state-of-the-art algorithms in image quality and computational efficiency. We propose a method to optimize the filters and their arrangement so as to give the best results for a chosen metric and application. The method, based on minimization of the mean square error, is linear in nature and therefore very fast and suitable for real-time applications. Finally, to challenge the linear nature of LMMSE, we propose a demosaicing algorithm using neural networks trained on a small database of images, which is slightly better than the linear demosaicing but computationally more expensive.
APA, Harvard, Vancouver, ISO, and other styles
