To see the other types of publications on this topic, follow the link: Filter Algorithmus.

Dissertations / Theses on the topic 'Filter Algorithmus'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Filter Algorithmus.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Law, Ying Man. "Iterative algorithms for the constrained design of filters and filter banks /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202004%20LAW.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 108-111). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
2

Baicher, Gurvinder Singh. "Towards optimisation of digital filters and multirate filter banks through genetic algorithms." Thesis, University of South Wales, 2003. https://pure.southwales.ac.uk/en/studentthesis/towards-optimisation-of-digital-filters-and-multirate-filter-banks-through-genetic-algorithms(1ed2778b-e27b-4434-bc50-915f697a0d6b).html.

Full text
Abstract:
This thesis is concerned with the issues of design and optimisation of digital filters and multirate filter banks. The main focus and contribution of this thesis is to apply the genetic algorithm (GA) technique and to draw some comparison with standard gradient and non-gradient based optimisation methods. The finite word length (FWL) constraint affects the accuracy of a real-time digital filter's frequency response. For the case of digital filters, this study is concerned with the optimisation of FWL coefficients using genetic algorithms. Some comparative study with simple hill-climber algorithms is also included. The outcome of this part of the study demonstrates a substantial improvement of the new results when compared with the simply rounded FWL coefficient frequency response. The FWL coefficient optimisation process developed in the earlier chapters is extended to the field of multirate filter banks. All multirate filter banks suffer from the problems of amplitude, phase and aliasing errors and, therefore, constraints for perfect reconstruction (PR) of the input signal can be extensive. The problem, in general, reduces to relaxing constraints at the expense of errors and finding methods for minimising those errors. Optimisation techniques are thus commonly used for the design and implementation of multirate filter banks. In this part of the study, GAs have been used in two distinct stages: firstly, for design optimisation so that the overall errors are minimised, and secondly for FWL coefficient optimisation of the digital filters that form the sub-band filters of the filter bank. This process leads to an optimal realisation of the filter bank that can be applied to specific applications such as telephony speech signal coding and compression. One example of the optimised QMF bank was tested on a real-time DSP target system and the results are reported.
M-channel uniform and non-uniform filter banks have also been considered in this study for design optimisation. For a comparative study of the GA-optimised results of the design stage of the filter bank, other standard methods such as the gradient-based quasi-Newton and the non-gradient-based downhill Simplex methods were also used. In general, the outcome of this part of the study demonstrates that a hybrid approach combining a GA with a standard method was the most efficient and effective process for generating the best results.
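The finite-word-length coefficient optimisation described above can be illustrated with a toy sketch (not the thesis's actual method): a genetic algorithm chooses, per coefficient, between the floor and ceiling quantisation levels so as to minimise the peak frequency-response deviation. The 8-bit scale, population sizes and the toy prototype `h` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def freq_error(q, h_ideal, w):
    """Peak magnitude deviation of the quantised response from the ideal one."""
    e = np.exp(-1j * np.outer(w, np.arange(len(h_ideal))))
    return np.max(np.abs(e @ (q - h_ideal)))

def ga_quantize(h_ideal, bits=8, pop=40, gens=60):
    """GA over bit-string genomes: gene k picks floor (0) or ceil (1) for tap k."""
    scale = 2 ** (bits - 1)
    lo = np.floor(h_ideal * scale) / scale
    hi = np.ceil(h_ideal * scale) / scale
    w = np.linspace(0, np.pi, 128)
    genomes = rng.integers(0, 2, size=(pop, len(h_ideal)))
    for _ in range(gens):
        fit = [freq_error(np.where(g, hi, lo), h_ideal, w) for g in genomes]
        elite = genomes[np.argsort(fit)[: pop // 2]]         # truncation selection
        cuts = rng.integers(1, len(h_ideal), size=pop // 2)  # one-point crossover
        kids = np.array([np.concatenate([elite[i][:c], elite[(i + 1) % len(elite)][c:]])
                         for i, c in enumerate(cuts)])
        kids ^= (rng.random(kids.shape) < 0.05).astype(kids.dtype)  # bit-flip mutation
        genomes = np.vstack([elite, kids])
    scores = [freq_error(np.where(g, hi, lo), h_ideal, w) for g in genomes]
    return np.where(genomes[np.argmin(scores)], hi, lo)

h = np.array([0.05, 0.25, 0.4, 0.25, 0.05])   # toy lowpass prototype (assumed)
hq = ga_quantize(h)                           # every tap lands on an 8-bit level
```

Each genome is a binary string, so standard crossover and mutation apply directly; the fitness is the minimax response error the thesis's GA also targets.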
APA, Harvard, Vancouver, ISO, and other styles
3

Sridharan, M. K. "Subband Adaptive Filtering Algorithms And Applications." Thesis, Indian Institute of Science, 2000. http://hdl.handle.net/2005/266.

Full text
Abstract:
In the system identification scenario, the linear approximation of the system, modelled by its impulse response, is estimated in real time by gradient-type Least Mean Square (LMS) or Recursive Least Squares (RLS) algorithms. In recent applications like acoustic echo cancellation, the order of the impulse response to be estimated is very high, and these traditional approaches are inefficient and real-time implementation becomes difficult. Alternatively, the system is modelled by a set of shorter adaptive filters operating in parallel on subsampled signals. This approach, referred to as subband adaptive filtering, is expected not only to reduce the computational complexity but also to improve the convergence rate of the adaptive algorithm. But in practice, different subband adaptive algorithms have to be used to enhance the performance with respect to complexity, convergence rate and processing delay. A single subband adaptive filtering algorithm which outperforms the fullband scheme in all applications is yet to be realized. This thesis is intended to study subband adaptive filtering techniques and explore the possibilities of better algorithms for performance improvement. Three different subband adaptive algorithms have been proposed and their performance has been verified through simulations. These algorithms have been applied to acoustic echo cancellation and EEG artefact minimization problems.
Details of the work: To start with, the fast FIR filtering scheme introduced by Mou and Duhamel has been generalized. The Perfect Reconstruction Filter Bank (PRFB) is used to model the linear FIR system. The structure offers efficient implementation with reduced arithmetic complexity. By using a PRFB in which non-adjacent filters are non-overlapping, many channel filters can be eliminated from the structure. This helps in reducing the complexity of the structure further, but introduces approximation in the model.
The modelling error depends on the stopband attenuation of the filters of the PRFB. The error introduced due to approximation is tolerable for applications like acoustic echo cancellation. The filtered output of the modified generalized fast filtering structure is given by (formula) where Pk(z) is the main channel output, Pk,k+1(z) is the output of the auxiliary channel filters at the reduced rate, Gk(z) is the kth synthesis filter and M the number of channels in the PRFB. An adaptation scheme is developed for adapting the main channel filters. Auxiliary channel filters are derived from the main channel filters. Secondly, the aliasing problem of the classical structure is reduced without using the cross filters. Aliasing components in the estimated signal result in very poor steady-state performance in the classical structure. Attempts to eliminate the aliasing have reduced the computation gain margin and the convergence rate. Any attempt to estimate the subband reference signals from the aliased subband input signals results in aliasing. An analysis filter Hk(z) having the following antialiasing property (formula) can avoid aliasing in the input subband signal. The asymmetry of the frequency response prevents the use of real analysis filters. In the investigation presented in this thesis, complex analysis filters and real synthesis filters are used in the classical structure, to reduce the aliasing errors and to achieve a superior convergence rate. The PRFB is traditionally used in implementing the Interpolated FIR (IFIR) structure. These filters may not be ideal for processing an input signal for an adaptive algorithm. As a third contribution, the IFIR structure is modified using discrete finite frames. The model of an FIR filter s is given by Fc, with c = Hs. The columns of the matrix F form a frame with the rows of H as its dual frame. The matrix elements can be arbitrary except that the transformation should be implementable as a filter bank.
This freedom is used to optimize the filter bank, with knowledge of the input statistics, for initial convergence rate enhancement. Next, the proposed subband adaptive algorithms are applied to the acoustic echo cancellation problem with realistic parameters. Speech input and a sufficiently long Room Impulse Response (RIR) are used in the simulations. The Echo Return Loss Enhancement (ERLE) and the steady-state error spectrum are used as performance measures to compare these algorithms with the fullband scheme and other representative subband implementations. Finally, a subband adaptive algorithm is used in minimization of EOG (electrooculogram) artefacts from the measured EEG (electroencephalogram) signal. An IIR filter bank providing sufficient isolation between the frequency bands is used in the modified IFIR structure, and this structure has been employed in the artefact minimization scheme. The estimation error in the high frequency range has been reduced and the output signal-to-noise ratio has been increased by a couple of dB over that of the fullband scheme.
Conclusions: Efforts to find elegant subband adaptive filtering algorithms will continue in the future. However, in this thesis, the generalized filtering algorithm could offer a gain in filtering complexity of the order of M/2 and reduced misadjustment. The complex classical scheme offered an improved convergence rate, reduced misadjustment and computational gains of the order of M/4. The modifications of the IFIR structure using discrete finite frames made it possible to eliminate the processing delay and enhance the convergence rate. Typical performance of the complex classical case for speech input in a realistic scenario (8-channel case) offers an ERLE of more than 45 dB. The subband approach to EOG artefact minimization in the EEG signal was found to be superior to its fullband counterpart. (Refer to the PDF file for formulas.)
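As a point of reference for the fullband LMS scheme that the subband algorithms above are measured against, here is a minimal system-identification sketch (the unknown impulse response `h_true`, step size and signal lengths are illustrative assumptions, not the thesis's echo-cancellation setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def lms_identify(x, d, taps, mu):
    """Fullband LMS: adapt w so that w . x_n tracks the desired signal d."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ xn                  # a priori error
        w += mu * e * xn                   # stochastic-gradient update
    return w

h_true = np.array([0.6, -0.3, 0.1])        # unknown system impulse response (assumed)
x = rng.standard_normal(5000)              # white excitation
d = np.convolve(x, h_true)[: len(x)]       # noiseless desired signal
w = lms_identify(x, d, taps=3, mu=0.01)    # w converges toward h_true
```

A subband scheme would run several such (shorter) adaptive filters on decimated analysis-bank outputs instead of one long fullband filter.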
APA, Harvard, Vancouver, ISO, and other styles
4

Langer, Max. "Design of Fast Multidimensional Filters by Genetic Algorithms." Thesis, Linköping University, Department of Biomedical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2704.

Full text
Abstract:

The need for fast multidimensional signal processing arises in many areas. One of the more demanding applications is real-time visualization of medical data acquired with e.g. magnetic resonance imaging, where large amounts of data can be generated. This data has to be reduced to relevant clinical information, either by image reconstruction and enhancement or by automatic feature extraction. Design of fast multidimensional filters has been subject to research during the last three decades. Usually methods for fast filtering are based on applying a sequence of filters of lower dimensionality acquired by e.g. weighted low-rank approximation. Filter networks are a method to design fast multidimensional filters by decomposing multiple filters into simpler filter components in which coefficients are allowed to be sparsely scattered. Up until now, coefficient placement has been done by hand, a procedure which is time-consuming and difficult. The aim of this thesis is to investigate whether genetic algorithms can be used to place coefficients in filter networks. A method is developed and tested on 2-D filters, and the resulting filters have lower distortion values while still maintaining the same or a lower number of coefficients than filters designed with previously known methods.

APA, Harvard, Vancouver, ISO, and other styles
5

Penberthy, Harris Stephen. "Natural algorithms in digital filter design." Thesis, University of Plymouth, 2001. http://hdl.handle.net/10026.1/2752.

Full text
Abstract:
Digital filters are an important part of Digital Signal Processing (DSP), which plays vital roles within the modern world, but their design is a complex task requiring a great deal of specialised knowledge. An analysis of this design process is presented, which identifies opportunities for the application of optimisation. The Genetic Algorithm (GA) and Simulated Annealing are problem-independent and increasingly popular optimisation techniques. They do not require detailed prior knowledge of the nature of a problem, and are unaffected by a discontinuous search space, unlike traditional methods such as calculus and hill-climbing. Potential applications of these techniques to the filter design process are discussed, and presented with practical results. Investigations into the design of Frequency Sampling (FS) Finite Impulse Response (FIR) filters using a hybrid GA/hill-climber proved especially successful, improving on published results. An analysis of the search space for FS filters provided useful information on the performance of the optimisation technique. The ability of the GA to trade off a filter's performance with respect to several design criteria simultaneously, without intervention by the designer, is also investigated. Methods of simplifying the design process by using this technique are presented, together with an analysis of the difficulty of the non-linear FIR filter design problem from a GA perspective. This gave an insight into the fundamental nature of the optimisation problem, and also suggested future improvements. The results gained from these investigations allowed the framework for a potential 'intelligent' filter design system to be proposed, in which embedded expert knowledge, Artificial Intelligence techniques and traditional design methods work together. This could deliver a single tool capable of designing a wide range of filters with minimal human intervention, and of proposing solutions to incomplete problems. 
It could also provide the basis for the development of tools for other areas of DSP system design.
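The hybrid GA/hill-climber idea that proved successful above can be sketched on a toy multimodal cost standing in for a filter-design error measure (the cost function, population sizes and step sizes are illustrative assumptions, not the frequency-sampling design problem):

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    """Multimodal toy surface: global minimum at the origin, many local traps."""
    return np.sum(x ** 2 - np.cos(4 * np.pi * x) + 1.0)

def hill_climb(x, step=0.05, iters=200):
    """Greedy local refinement: accept a random perturbation only if it helps."""
    best, fb = x.copy(), cost(x)
    for _ in range(iters):
        cand = best + rng.normal(0, step, size=best.shape)
        fc = cost(cand)
        if fc < fb:
            best, fb = cand, fc
    return best

# GA global search: truncation selection with Gaussian mutation
pop = rng.uniform(-2, 2, size=(30, 4))
for _ in range(80):
    fit = [cost(p) for p in pop]
    elite = pop[np.argsort(fit)[:10]]
    pop = np.vstack([elite, np.repeat(elite, 2, axis=0) + rng.normal(0, 0.2, (20, 4))])
x_ga = pop[np.argmin([cost(p) for p in pop])]
x_hybrid = hill_climb(x_ga)            # local polish of the GA solution
```

The GA supplies a good basin; the hill climber, which only ever accepts improvements, finishes the descent — the division of labour the abstract reports as outperforming either method alone.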
APA, Harvard, Vancouver, ISO, and other styles
6

Gurrapu, Omprakash. "Adaptive filter algorithms for channel equalization." Thesis, Högskolan i Borås, Institutionen Ingenjörshögskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-19219.

Full text
Abstract:
Equalization techniques compensate for the time dispersion introduced by communication channels and combat the resulting inter-symbol interference (ISI) effect. Given a channel of unknown impulse response, the purpose of an adaptive equalizer is to operate on the channel output such that the cascade connection of the channel and the equalizer provides an approximation to an ideal transmission medium. Typically, adaptive equalizers used in digital communications require an initial training period, during which a known data sequence is transmitted. A replica of this sequence is made available at the receiver in proper synchronism with the transmitter, thereby making it possible for adjustments to be made to the equalizer coefficients in accordance with the adaptive filtering algorithm employed in the equalizer design. This type of equalization is known as non-blind equalization. However, in practical situations, it would be highly desirable to achieve complete adaptation without access to a desired response. Clearly, some form of blind equalization has to be built into the receiver design. Blind equalizers simultaneously estimate the transmitted signal and the channel parameters, which may even be time-varying. The aim of the project is to study the performance of various adaptive filter algorithms for blind channel equalization through computer simulations.
Thesis level: D
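A minimal sketch of one classical blind algorithm of the kind such a study would compare, the Constant Modulus Algorithm (the 2-tap channel, step size, tap count and signal length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def cma_equalize(r, taps=7, mu=0.002, R=1.0):
    """Constant Modulus Algorithm: no training sequence; only the known
    modulus R of the transmitted symbols drives the adaptation."""
    w = np.zeros(taps)
    w[taps // 2] = 1.0                    # centre-spike initialisation
    for n in range(taps, len(r)):
        x = r[n - taps:n][::-1]
        y = w @ x
        w -= mu * y * (y * y - R) * x     # stochastic gradient of E[(y^2 - R)^2] / 4
    return w

s = rng.choice([-1.0, 1.0], size=20000)           # BPSK symbols (modulus 1)
r = np.convolve(s, [1.0, 0.4])[: len(s)]          # mild ISI channel (assumed)
w = cma_equalize(r)
y = np.convolve(r, w)[: len(s)]
dispersion = np.mean((y ** 2 - 1.0) ** 2)         # constant-modulus cost after equalization
```

Because the update never consults the transmitted data, the receiver adapts with no training sequence — the defining property of blind equalization discussed above.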
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, Yuchen. "Adaptive Notch Filter." PDXScholar, 1994. https://pdxscholar.library.pdx.edu/open_access_etds/4802.

Full text
Abstract:
The thesis presents a new adaptive notch filter (ANF) algorithm that is more accurate and efficient and has a faster convergence rate than previous ANF algorithms. In 1985, Nehorai designed an infinite impulse response (IIR) ANF algorithm that has many advantages over previous ANF algorithms. It requires a minimal number of parameters, with constrained poles and zeros, and it has higher stability and sharper notches than any previous ANF algorithm. Because of the special filter structure and the recursive prediction error (RPE) method, however, the algorithm is very sensitive to the initial estimate of the filter coefficient and its covariance. Furthermore, convergence to the true filter coefficient is not guaranteed, since the error-performance surface of the filter has its global minimum lying on a fairly flat region. We propose a new ANF algorithm that overcomes the convergence problem. By choosing a smaller notch bandwidth control parameter that makes the error-performance surface less flat, we can more easily detect the global minimum. We also propose a new convergence criterion to be used with the algorithm and a self-adjustment feature to reset the initial estimate of the filter coefficient and its covariance. This results in guaranteed convergence with more accurate results and more efficient computations than previous ANF algorithms.
APA, Harvard, Vancouver, ISO, and other styles
8

Clark, Matthew David. "Electronic Dispersion Compensation For Interleaved A/D Converters in a Standard Cell ASIC Process." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16269.

Full text
Abstract:
The IEEE 802.3aq standard recommends a multi-tap decision feedback equalizer be implemented to remove inter-symbol interference and additive system noise from data transmitted over a 10 Gigabit per second (10 Gbps) multi-mode fiber-optic link (MMF). The recommended implementation produces a design in an analog process. This design process is difficult, time consuming, and is expensive to modify if first-pass silicon success is not achieved. Performing the majority of the design in a well-characterized digital process with stable, evolutionary tools reduces the technical risk. ASIC design rule checking is more predictable than custom tool flows and produces regular, repeatable results. Register Transfer Language (RTL) changes can also be implemented relatively quickly when compared to the custom flow. However, standard cell methodologies are expected to achieve clock rates of roughly one-tenth of the corresponding analog process. The architecture and design for a parallel linear equalizer and decision feedback equalizer are presented. The presented design demonstrates an RTL implementation of 10 GHz filters operating in parallel at 625 MHz. The performance of the filters is characterized by testing the design against a set of 324 reference channels. The results are compared against the IEEE standard group's recommended implementation. The linear equalizer design of 20 taps equalizes 88% of the reference channels. The decision feedback equalizer design of 20 forward and 1 reverse tap equalizes 93% of the reference channels. Analysis of the unequalized channels is performed, and areas for continuing research are presented.
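The decision-feedback-equalizer structure itself, here in scalar, non-parallel form (the 3-tap channel, tap counts and step size are illustrative assumptions, not the 10 Gbps parallel hardware design above):

```python
import numpy as np

rng = np.random.default_rng(6)

def dfe_lms(r, train, nf=5, nb=3, mu=0.01):
    """LMS-trained DFE: feedforward taps filter the received samples,
    feedback taps cancel postcursor ISI using past (known) symbols."""
    wf, wb = np.zeros(nf), np.zeros(nb)
    for n in range(nf, len(train)):
        xf = r[n - nf + 1:n + 1][::-1]    # received samples, newest first
        xb = train[n - nb:n][::-1]        # previously decided symbols
        e = train[n] - (wf @ xf - wb @ xb)
        wf += mu * e * xf
        wb -= mu * e * xb
    return wf, wb

s = rng.choice([-1.0, 1.0], size=8000)             # transmitted symbols
r = np.convolve(s, [1.0, 0.5, 0.3])[: len(s)]      # dispersive channel (assumed)
wf, wb = dfe_lms(r, s)
y = np.array([wf @ r[n - 4:n + 1][::-1] - wb @ s[n - 3:n][::-1]
              for n in range(5, len(s))])
ber = np.mean(np.sign(y) != s[5:])                 # symbol error rate after training
```

The feedback path subtracts ISI from already-decided symbols, which is what lets a DFE remove postcursor interference that a purely linear equalizer can only amplify noise to fight.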
APA, Harvard, Vancouver, ISO, and other styles
9

Tseng, Chien H. "Iterative algorithms for envelope-constrained filter design." Curtin University of Technology, Australian Telecommunications Research Institute, 1999. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=10453.

Full text
Abstract:
The design of envelope-constrained (EC) filters is considered for the time-domain synthesis of filters for signal processing problems. The objective is to achieve minimal noise enhancement where the shape of the filter output to a specified input signal is constrained to lie within prescribed upper and lower bounds. Traditionally, problems of this type were treated using the least-squares (LS) approach. However, in many practical signal processing problems, this "soft" least-squares approach is unsatisfactory because large narrow excursions from the desired shape occur, so that the norm of the filter can be large, and the choice of an appropriate weighting function is not obvious. Moreover, the solution can be sensitive to the detailed structure of the desired pulse, and it is usually not obvious how the shape of the desired pulse should be altered in order to improve on the solution. The "hard" EC filter formulation is more relevant than the "soft" LS approach in a variety of signal processing fields such as robust antenna and filter design, communication channel equalization, and pulse compression in radar and sonar. The distinctive feature is the set of inequality constraints on the output waveform: rather than attempting to match a specific desired pulse, we deal with a whole set of allowable outputs and seek an optimal point of that set. The EC optimal filter design problems involve a convex quadratic cost function and a number of linear inequality constraints. These EC filtering problems are classified into: the discrete-time EC filtering problem, the continuous-time EC filtering problem, and the adaptive discrete-time EC filtering problem. The discrete-time EC filtering problem is handled using discrete Lagrangian duality theory in combination with a space transformation function. The optimal solution of the dual problem can be computed by finding the limiting point of an ordinary differential equation given in terms of the gradient flow. Two iterative algorithms utilizing the simple structure of the gradient flow are developed by discretizing the differential equations. Their convergence properties are derived for a deterministic environment. From the primal-dual relationship, the corresponding sequence of approximate solutions to the original discrete-time EC filtering problem is obtained. The continuous-time EC filtering problem (a semi-infinite convex programming problem) is handled using continuous Lagrangian duality theory and Caratheodory's dimensionality theory. Several important properties are derived and discussed in relation to practical engineering requirements. These include the observation that the continuous-time optimal filter via orthonormal filters has the structure of a matched filter in cascade with another filter. Furthermore, the semi-infinite convex programming problem is converted into an equivalent finite dual optimization problem, which can be solved by the optimization methods developed. Another issue, which relates to the continuous-time optimal filter design problem, is the design of robust optimal EC filters. The robustness issue arises because the solution of the EC filtering problem lies on the boundary of the feasible region. Thus, any disturbance in the prescribed input signal or errors in the implementation of the optimal filter are likely to result in the output constraints being violated. A detailed formulation and a corresponding design method for improving the robustness of optimal EC filters are given. Finally, an adaptive algorithm suitable for a stochastic environment is presented. The convergence properties of the algorithm in a stochastic environment are established.
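The dual gradient-flow idea can be sketched on a toy discrete-time EC problem: minimise the filter energy subject to the convolution output staying inside an envelope. The pulse, bounds, step size and iteration count are illustrative assumptions; a real design would use the discretized algorithms and convergence analysis developed in the thesis.

```python
import numpy as np

def ec_filter(A, lower, upper, step=0.2, iters=20000):
    """Dual gradient ascent for: min 0.5*||w||^2  s.t.  lower <= A w <= upper.
    Stationarity gives w = A.T @ (lam_lo - lam_up); multipliers stay >= 0."""
    m = A.shape[0]
    lam_up = np.zeros(m)                  # multipliers for A w <= upper
    lam_lo = np.zeros(m)                  # multipliers for A w >= lower
    for _ in range(iters):
        w = A.T @ (lam_lo - lam_up)
        lam_up = np.maximum(0.0, lam_up + step * (A @ w - upper))
        lam_lo = np.maximum(0.0, lam_lo + step * (lower - A @ w))
    return A.T @ (lam_lo - lam_up)

x = np.array([1.0, 0.5])                           # input pulse (assumed)
n_taps, n_out = 4, 5
A = np.zeros((n_out, n_taps))                      # convolution matrix: A w = x * w
for i in range(n_out):
    for j in range(n_taps):
        if 0 <= i - j < len(x):
            A[i, j] = x[i - j]
d = np.array([0.0, 1.0, 0.5, 0.0, 0.0])            # desired output shape
w = ec_filter(A, d - 0.2, d + 0.2)                 # envelope of half-width 0.2
out = A @ w
```

The multipliers only grow where a bound is being pushed, so the iteration is a discretized gradient flow on the dual, mirroring the primal-dual recovery described in the abstract.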
APA, Harvard, Vancouver, ISO, and other styles
10

Carcanague, Sébastien. "Low-cost GPS/GLONASS Precise Positioning algorithm in Constrained Environment." Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0004/document.

Full text
Abstract:
GNSS (Global Navigation Satellite Systems), and in particular their current components, the American GPS and the Russian GLONASS, are nowadays used in geodetic applications to obtain centimetre-level precise positioning. This requires a number of complex processing steps, expensive equipment and possibly ground-based augmentations of GPS and GLONASS. Such applications are today mainly carried out in open-sky environments and cannot operate in more constrained environments. The growing use of GNSS in a variety of fields will give rise to numerous applications requiring precise positioning (for example automatic transport/guidance or driver-assistance applications demanding high performance not only in terms of accuracy but also in terms of confidence in the position (integrity), robustness and availability). Moreover, the arrival on the market of low-cost receivers (under 100 euros) capable of tracking signals from several constellations and of delivering raw measurements promises significant advances in performance and in the democratization of these precise positioning techniques. For a road user, one of the challenges of precise positioning in the coming years is thus to ensure its availability in as many environments as possible, including degraded ones (dense vegetation, urban environments, etc.). In this context, the objective of this thesis was to develop and optimize precise positioning algorithms (typically based on tracking the carrier phase of GNSS signals) that take into account the constraints arising from the use of a low-cost receiver and from the environment.
In particular, precise positioning (RTK) software capable of resolving the ambiguities of GPS and GLONASS phase measurements was developed. The particular structure of GLONASS signals (FDMA) requires specific processing of the phase measurements, described in the thesis, in order to isolate the phase ambiguities as integers. This processing is complicated by the use of measurements from a low-cost receiver whose GLONASS channels are not calibrated. A calibration method for the code and phase measurements, described in the thesis, reduces the biases affecting the various GLONASS measurements. It is thus demonstrated that integer resolution of the GLONASS phase ambiguities is possible with a low-cost receiver after calibration. The low quality of the measurements, due to the use of a low-cost receiver in a degraded environment, is handled in the precise positioning software by adopting a specific measurement weighting and environment-dependent ambiguity validation parameters. Finally, an innovative cycle-slip resolution method is presented in the thesis to improve the continuity of the carrier-phase ambiguity estimation. The results of two measurement campaigns carried out on the Toulouse ring road and in the Toulouse city centre showed an accuracy of 1.5 m 68% of the time and 3.5 m 95% of the time in an urban environment. In a semi-urban ring-road environment, this accuracy reaches 10 cm 68% of the time and 75 cm 95% of the time. Finally, this thesis demonstrates the feasibility of a low-cost precise positioning system for road users.
GNSS and particularly GPS and GLONASS systems are currently used in some geodetic applications to obtain a centimeter-level precise position. Such a level of accuracy is obtained by performing complex processing on expensive high-end receivers and antennas, and by using precise corrections. Moreover, these applications are typically performed in clear-sky environments and cannot be applied in constrained environments. The constant improvement in GNSS availability and accuracy should allow the development of various applications in which precise positioning is required, such as automatic people transportation or advanced driver assistance systems. Moreover, the recent release on the market of low-cost receivers capable of delivering raw data from multiple constellations gives a glimpse of the potential improvement and the collapse in prices of precise positioning techniques. However, one of the challenges of road user precise positioning techniques is their availability in all types of environments potentially encountered, notably constrained environments (dense tree canopy, urban environments…). This difficulty is amplified by the use of low-cost receivers and antennas, which potentially deliver lower quality measurements. In this context the goal of this PhD study was to develop a precise positioning algorithm based on code, Doppler and carrier phase measurements from a low-cost receiver, potentially in a constrained environment. In particular, a precise positioning software based on the RTK algorithm is described in this PhD study. It is demonstrated that GPS and GLONASS measurements from a low-cost receiver can be used to estimate carrier phase ambiguities as integers. The lower quality of measurements is handled by appropriately weighting and masking measurements, as well as performing an efficient outlier exclusion technique. Finally, an innovative cycle slip resolution technique is proposed.
Two measurement campaigns were performed to assess the performance of the proposed algorithm. A 95th-percentile horizontal position error of less than 70 centimeters is reached in a beltway environment in both campaigns, whereas a 95th percentile of less than 3.5 meters is reached in the urban environment. Therefore, this study demonstrates the possibility of precisely estimating the position of a road user using low-cost hardware.
APA, Harvard, Vancouver, ISO, and other styles
11

Nerger, Lars. "Parallel filter algorithms for data assimilation in oceanography." [S.l.] : [s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=975524844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Hall, M. C. "Adaptive IIR filter algorithms for real-time applications." Thesis, University of Liverpool, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Hu, Zhijian. "Improved multidimensional digital filter algorithms using systolic structures." Thesis, Southampton Solent University, 1996. http://ssudl.solent.ac.uk/2426/.

Full text
Abstract:
This work begins by explaining the issues in systolic array design. It continues by defining the criteria used in evaluating the quality of a design and its performance. An important feature of the approach taken in seeking to improve systolic systems has been the choice of target functions. The rationale for these choices is explained, and an underlying set of unifying key criteria is outlined which has been the basis of the design objectives in every case. In order to quantify improvements it is necessary to fully explore and document the current state of the art. This has been done by considering the best performing systems in each area of interest. One of the unifying principles for the research has been the derivation of all original and new designs from transfer functions. The detailed methods for mapping DSP algorithms onto systolic arrays are explored in word- and bit-level systems for multi-dimensional and median filters. The potential for improvement in the performance of systolic system implementation resides in two areas: improvement in the architectural structures of the arrays, and improvements in the speed and throughput of the processing elements. The programme of research has resulted in both these areas being addressed. In all, six new realisations of two-dimensional FIR and IIR filters are presented along with two new structures for the median filter. Additionally, a hybrid opto-electronic processing element has been devised which applies Fabry-Pérot resonators in a novel way. The basic adder structure is fully developed to demonstrate a high-speed multiplier capability. An important issue for this research has been the verification of the correctness of designs and a confirmation of the efficacy of the theoretically calculated performances. The approach taken has been a two-stage one in which a new circuit is first modelled at the behavioural level using the ELLA hardware description language.
Having verified behavioural compliance, the next stage is to model the system as a low-level logic structure. This verifies the precise structures. The Mentor Graphics architectural design tools were used for this purpose. In a final implementation as VLSI there would be a need to take into account chip-layout-related issues, and these are discussed. The verification strategy of identifying and testing key structures is justified and evidence of successful simulation is provided. The results are discussed in the context of comparing parameters of the new circuits with those of the previously best existing designs. The parameters tabulated are: data throughput rate, circuit latency, and circuit size (area). It is concluded that improvements are evident in the new designs and that they are highly regular structures with simple timing and control, thus making them attractive for VLSI implementation. In summary, the new and original structures provide a better balance between cost and complexity. The new processing element system is theoretically capable of operating in the region of 4 nanoseconds per addition, and the new algorithm for median filtering promises a sharp improvement in speed.
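The median filtering operation that the new systolic structures accelerate is itself simple; a scalar reference sketch (window size assumed, edges left unfiltered):

```python
def median_filter(x, k=3):
    """Sliding-window median; edge samples are passed through unchanged."""
    h = k // 2
    y = list(x)
    for i in range(h, len(x) - h):
        y[i] = sorted(x[i - h:i + h + 1])[h]   # middle of the sorted window
    return y

median_filter([1, 9, 1, 1, 8, 1, 1], 3)   # impulsive spikes are removed
```

A systolic realisation replaces the per-window sort with a pipeline of compare-and-swap cells so that one output emerges per clock cycle, which is where the throughput gains reported above come from.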
APA, Harvard, Vancouver, ISO, and other styles
14

Rossi, Michel. "Iterative least squares algorithms for digital filter design." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/10099.

Full text
Abstract:
In this thesis, we propose new algorithms to simplify and improve the design of IIR digital filters and M-band cosine modulated filter banks. These algorithms are based on the Iterative Least Squares (ILS) approach. We first review the various Iterative Reweighted Least Squares (IRLS) methods used to design Chebyshev and $L_p$ linear-phase FIR filters. Then we focus on the ILS design of IIR filters and filter banks. For the design of Chebyshev IIR filters in the log-magnitude sense, we propose a Remez-type IRLS algorithm. This novel approach significantly accelerates Kobayashi's and Lim's IRLS methods and simplifies the traditional rational Remez algorithm. For the design of M-band cosine modulated filter banks, we propose three new ILS algorithms. These algorithms are specific to the design of Pseudo Quadrature Mirror Filter (QMF) banks, Near Perfect Reconstruction (NPR) Pseudo QMF banks and Perfect Reconstruction (PR) QMF banks. They are fast convergent, simple to implement and flexible compared to traditional nonlinear optimization methods. Short MATLAB programs implementing the proposed algorithms are included.
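As an illustrative sketch of the IRLS idea underlying this line of work: a generic Lawson-type reweighting iteration (not Rossi's specific algorithms, which target IIR filters and filter banks) drives a weighted least-squares linear-phase FIR design toward the minimax (Chebyshev) solution by repeatedly boosting the weight of grid points with large error. The function name and parameters here are illustrative, not from the thesis.

```python
import numpy as np

def irls_chebyshev_fir(order, grid, desired, iters=50):
    """Lawson-type IRLS iteration: drive a weighted least-squares
    linear-phase FIR design toward the minimax (Chebyshev) solution."""
    M = order // 2
    # Type-I amplitude response: A(w) = sum_{k=0..M} c_k * cos(k*w)
    C = np.cos(np.outer(grid, np.arange(M + 1)))
    w = np.ones_like(grid) / len(grid)           # uniform initial weights
    for _ in range(iters):
        sw = np.sqrt(w)
        c, *_ = np.linalg.lstsq(sw[:, None] * C, sw * desired, rcond=None)
        err = np.abs(desired - C @ c)
        w = np.maximum(w * err, 1e-12)           # Lawson weight update
        w /= w.sum()
    # Symmetric impulse response recovered from the cosine coefficients
    h = np.concatenate([c[:0:-1] / 2, [c[0]], c[1:] / 2])
    return h, err.max()
```

For instance, a lowpass design over a grid covering a passband up to 0.4π and a stopband from 0.6π converges toward near-equiripple behaviour after a few dozen reweighting steps.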
APA, Harvard, Vancouver, ISO, and other styles
15

Barac, Daniel. "Localization algorithms for indoor UAVs." Thesis, Linköpings universitet, Reglerteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-72217.

Full text
Abstract:
The increased market for navigation, localization and mapping systems has encouraged research to dig deeper into these new and challenging areas. The remarkable development of computer software and hardware has also opened up many new doors. Things which were more or less impossible ten years ago are now reality. The possibility of using a mathematical approach to compensate for the need for expensive sensors has been one of the main objectives in this thesis. Here you will find the basic principles of localization of indoor UAVs using a particle filter (PF) and Octomaps, but also the procedures for implementing 2D scan-matching algorithms and quaternions. The performance of the algorithms is evaluated using a high-precision motion capture system. The UAV which forms the basis for this thesis is equipped with a 2D laser and an inertial measurement unit (IMU). The results show that it is possible to perform localization in 2D with centimetre precision only by using information from a laser and a predefined Octomap.
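As a generic illustration of the particle-filter localization described above, here is a minimal 1-D bootstrap filter step (predict, weight, resample); this is a sketch of the general technique, not the thesis's 2-D laser/Octomap implementation, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.1, meas_noise=0.2):
    """One bootstrap particle-filter step: predict with a noisy motion
    model, reweight by the measurement likelihood, then resample."""
    # Predict: propagate every particle through the motion model
    particles = particles + control + rng.normal(0, motion_noise, particles.size)
    # Update: Gaussian likelihood of the (1-D) position measurement
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # Systematic resampling
    n = particles.size
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

In practice the 1-D Gaussian likelihood would be replaced by a laser scan-matching score against the map, but the predict/weight/resample cycle is the same.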
APA, Harvard, Vancouver, ISO, and other styles
16

Skoglund, Martin. "Evaluating SLAM algorithms for Autonomous Helicopters." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12282.

Full text
Abstract:

Navigation with unmanned aerial vehicles (UAVs) requires good knowledge of the current position and other states. A UAV navigation system often uses GPS and inertial sensors in a state estimation solution. If the GPS signal is lost or corrupted, state estimation must still be possible, and this is where simultaneous localization and mapping (SLAM) provides a solution. SLAM considers the problem of incrementally building a consistent map of a previously unknown environment while simultaneously localizing the vehicle within this map; a solution thus does not require position from the GPS receiver.

This thesis presents a visual-feature-based SLAM solution using a low-resolution video camera, a low-cost inertial measurement unit (IMU) and a barometric pressure sensor. State estimation is made with an extended information filter (EIF), where sparseness in the information matrix is enforced with an approximation.

An implementation is evaluated on real flight data and compared to an EKF-SLAM solution. Results show that both solutions provide similar estimates but the EIF is over-confident. The sparse structure is exploited, possibly not fully, making the solution nearly linear in time; storage requirements are linear in the number of features, which enables evaluation over a longer period of time.

APA, Harvard, Vancouver, ISO, and other styles
17

Iacoviello, Vincenzo. "Genetic algorithms and decision feedback filters." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=69599.

Full text
Abstract:
A decision feedback (DFB) equalizer is used to correct for the effects of inter-symbol interference in digital communications systems. The order of the DFB filter is reduced to a bare minimum and studied when it is insufficient to equalize the channel, i.e., when the filter does not have enough poles to cancel all the zeroes of the channel. The error surfaces produced by the DFB filter in the symbol-by-symbol, frame-by-frame, and aggregate sense are investigated. A genetic algorithm is then applied to the problem of adapting the DFB filter coefficients. The performance of the genetic algorithm is compared to that of the conventional gradient search algorithm for both the sufficient and insufficient order cases with varying levels of noise. It is found that the genetic algorithm outperforms the gradient algorithm in the insufficient-order cases.
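The general mechanism of adapting filter coefficients with a genetic algorithm, as opposed to gradient descent, can be sketched as follows. This is a minimal real-coded GA (truncation selection, arithmetic crossover, Gaussian mutation), not the specific GA of the thesis; the function name and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_minimize(fitness, n_coeffs, pop=60, gens=100, sigma=0.1, elite=10):
    """Minimal real-coded GA: truncation selection, arithmetic
    crossover, Gaussian mutation; returns the best individual found."""
    P = rng.uniform(-1.0, 1.0, (pop, n_coeffs))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        P = P[np.argsort(scores)][:elite]              # survivors
        kids = []
        while len(kids) < pop - elite:
            a, b = P[rng.integers(elite, size=2)]
            mix = rng.random(n_coeffs)
            child = mix * a + (1.0 - mix) * b          # arithmetic crossover
            kids.append(child + rng.normal(0, sigma, n_coeffs))  # mutation
        P = np.vstack([P, kids])
    scores = np.array([fitness(ind) for ind in P])
    return P[scores.argmin()]
```

Because the GA only evaluates the fitness function and never its gradient, it can keep searching on the multimodal error surfaces that arise in the insufficient-order case, where a gradient method stalls in a local minimum.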
APA, Harvard, Vancouver, ISO, and other styles
18

Kasturi, Nitin. "Power reducing algorithms in FIR filters." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/42710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lashley, Matthew; Bevly, David M.; Hung, John Y. "Kalman filter based tracking algorithms for software GPS receivers." Auburn, Ala., 2006. http://repo.lib.auburn.edu/2006%20Fall/Theses/LASHLEY_MATTHEW_34.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Lawrie, David Ian. "Parallel processing algorithms and architectures for the Kalman filter." Thesis, Bangor University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.254737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Kwan, Hing-kit, and 關興杰. "Design algorithms for delta-sigma modulator loop filter topologies." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B4150883X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kwan, Hing-kit. "Design algorithms for delta-sigma modulator loop filter topologies." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B4150883X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Amayo, Esosa O. "Construction of nonlinear filter algorithms using the saddlepoint approximation." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/42222.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (leaves 75-76).
In this thesis we propose the use of the saddlepoint method to construct nonlinear filtering algorithms. To our knowledge, while the saddlepoint approximation has been used very successfully in the statistics literature (as an example, the saddlepoint method provides a simple, highly accurate approximation to the density of the maximum likelihood estimator of a non-random parameter given a set of measurements), its potential for use in the dynamic setting of the nonlinear filtering problem has yet to be realized. This is probably because the assumptions on the form of the integrand that are typical in the asymptotic analysis literature do not necessarily hold in the filtering context. We show that the assumptions typical in asymptotic analysis (which are directly applicable in statistical inference, since the statistics applications usually involve estimating the density of a function of a sequence of random variables) can be modified in a way that remains relevant in the nonlinear filtering context while still preserving a property of the saddlepoint approximation that has made it very useful in statistical inference, namely, that the shape of the desired density is accurately approximated. As a result, the approximation can be used to calculate estimates of the mean and confidence intervals, and also serves as an excellent choice of proposal density for particle filtering. We show how to construct filtering algorithms based on the saddlepoint approximation.
by Esosa O. Amayo.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
24

Mulgrew, Bernard. "On adaptive filter structure and performance." Thesis, University of Edinburgh, 1987. http://hdl.handle.net/1842/11865.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Volkova, Anastasia. "Towards reliable implementation of digital filters." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066579/document.

Full text
Abstract:
In this thesis we develop approaches for improving the numerical behaviour of digital filters, with a focus on the impact of the accuracy of the computations. This work is done in the context of a reliable hardware/software code generator for Linear Time-Invariant (LTI) digital filters, in particular those with Infinite Impulse Response (IIR). We consider problems related to the implementation of LTI filters in fixed-point arithmetic while taking into account the finite precision of the computations necessary for the transformation from filter to code. This point is important in the context of filters used in embedded critical systems such as autonomous vehicles. We provide a new methodology for the error analysis of linear filter algorithms when they are investigated from a computer arithmetic perspective. At the heart of this methodology lies the reliable evaluation of the Worst-Case Peak Gain measure of a filter, which is the l1 norm of its impulse response. The proposed error analysis is based on a combination of techniques such as rigorous floating-point error analysis, interval arithmetic and multiple-precision implementations. This thesis also investigates the trade-off between hardware cost (e.g. area) and the precision of computations during implementation on FPGAs. We provide basic building-block algorithms for an automatic solution of this problem. Finally, we integrate our approaches into an open-source unifying framework to enable automatic and reliable implementation of any LTI digital filter algorithm.
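The Worst-Case Peak Gain mentioned above, the l1 norm of the impulse response, can be illustrated with a plain floating-point truncated sum. This sketch only shows the quantity being measured; the thesis's point is precisely that a reliable evaluation requires interval and multiple-precision arithmetic, which this example does not attempt.

```python
import numpy as np

def worst_case_peak_gain(b, a, n_terms=10000):
    """Truncated-sum estimate of the Worst-Case Peak Gain of an LTI
    filter, i.e. the l1 norm of its impulse response, sum_k |h[k]|."""
    b, a = np.asarray(b, float), np.asarray(a, float)
    h = np.zeros(n_terms)
    for n in range(n_terms):          # impulse response by direct recursion
        acc = b[n] if n < b.size else 0.0
        for k in range(1, a.size):
            if n - k >= 0:
                acc -= a[k] * h[n - k]
        h[n] = acc / a[0]
    return np.abs(h).sum()
```

For the first-order filter with impulse response h[n] = 0.5^n (b = [1], a = [1, -0.5]) this sum converges to 1/(1 - 0.5) = 2, which bounds the output magnitude for any input bounded by 1.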
APA, Harvard, Vancouver, ISO, and other styles
26

Soleit, E. A. A. "Adaptive digital filter algorithms and their application to echo cancellation." Thesis, University of Kent, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Alvarez-Tinoco, Antonio Mario. "Adaptive algorithms for the active attenuation of acoustic noise." Thesis, Heriot-Watt University, 1985. http://hdl.handle.net/10399/1607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Filip, Silviu-Ioan. "Robust tools for weighted Chebyshev approximation and applications to digital filter design." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEN063/document.

Full text
Abstract:
The field of signal processing methods and applications frequently relies on powerful results from numerical approximation. One such example, at the core of this thesis, is the use of Chebyshev approximation methods for designing digital filters. In practice, the finite nature of numerical representations adds an extra layer of difficulty to the design problems we wish to address using digital filters (audio and image processing being two domains which rely heavily on filtering operations). Most of the current mainstream tools for this job are neither optimized, nor do they provide certificates of correctness. We wish to change this, with some of the groundwork being laid by the present work. The first part of the thesis deals with the study and development of Remez/Parks-McClellan-type methods for solving weighted polynomial approximation problems in floating-point arithmetic. They are very scalable and numerically accurate in addressing finite impulse response (FIR) design problems. However, in embedded and power-hungry settings, the format of the filter coefficients uses a small number of bits and other methods are needed. We propose a (quasi-)optimal approach based on the LLL algorithm which is more tractable than exact approaches. We then proceed to integrate these aforementioned tools in a software stack for FIR filter synthesis on FPGA targets. The results obtained are both resource-consumption efficient and possess guaranteed accuracy properties. In the end, we present an ongoing study on Remez-type algorithms for rational approximation problems (which can be used for infinite impulse response (IIR) filter design) and the difficulties hindering their robustness.
APA, Harvard, Vancouver, ISO, and other styles
29

Lettsome, Clyde Alphonso. "Fixed-analysis adaptive-synthesis filter banks." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28143.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Smith, Mark J. T.; Committee Co-Chair: Mersereau, Russell M.; Committee Member: Anderson, David; Committee Member: Lanterman, Aaron; Committee Member: Rosen, Gail; Committee Member: Wardi, Yorai.
APA, Harvard, Vancouver, ISO, and other styles
30

Sriranganathan, Sivakolundu. "Genetic synthesis of video coding algorithms and architectures." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Jeanvoine, Arnaud. "Intérêt des algorithmes de réduction de bruit dans l’implant cochléaire : Application à la binauralité." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10338/document.

Full text
Abstract:
Cochlear implants are devices intended for the rehabilitation of profound and total deafness. They provide stimulation of the auditory nerve by placing electrodes in the cochlea. Various studies have been carried out to improve speech intelligibility in noise for patients fitted with this device. Bilateral and binaural techniques make it possible to reproduce binaural hearing, since both ears are stimulated (as for normal-hearing people). Localization and perception of surrounding sounds are thus improved compared with monaural implantation. However, word recognition capabilities are very quickly limited in the presence of noise. We developed noise reduction techniques to increase recognition performance. Improvements of 10% to 15%, depending on the conditions, were observed. Nevertheless, while perception is enhanced by the algorithms, they focus on one direction, and localization is then reduced to that angle. A second study was therefore carried out to measure the effect of the algorithms on localization. The beamformer gives the best comprehension results but the poorest localization. Re-injecting a percentage of the input signal into the output made it possible to compensate for the loss of localization without degrading intelligibility. The results of these two experiments show that a compromise between perception and sound localization is needed to obtain the best performance.
APA, Harvard, Vancouver, ISO, and other styles
32

Schipman, Kurt. "Control systems and algorithms for active filters." Thesis, Staffordshire University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Alandry, Boris. "Intégration de systèmes multi-capteurs CMOS-MEMS : application à une centrale d’attitude." Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20152/document.

Full text
Abstract:
Current electronic systems integrate more and more applications that require the integration of various kinds of sensors. The integration of such heterogeneous systems is complex, especially when sensor fabrication processes differ from one another. MEMS manufacturing processes based on a CMOS-FSBM process promote low-cost production and allow the integration of various types of sensors on the same die (e.g., magnetometers and accelerometers). However, this manufacturing process requires that sensors make use of resistive transduction, with its associated drawbacks (low sensitivity, offset, electronic noise). Through the design and implementation of the first inertial measurement unit (IMU) on a chip, this thesis demonstrates the interest of a "CMOS-MEMS" approach for the design of multi-sensor systems. The IMU is based on an incomplete measurement of the Earth's magnetic field (X and Y axes) and a complete measurement of gravity. Efficient front-end electronics have been developed, addressing the most important issues of resistive transduction and thus allowing an optimization of each sensor's resolution. Finally, two attitude determination algorithms have been developed from the five sensor measurements, showing the feasibility and interest of such a system.
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Li. "Design of linear phase paraunitary filter banks and finite length signal processing /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18678233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Carrière, Sébastien. "Synthèse croisée de régulateurs et d'observateurs pour le contrôle robuste de la machine synchrone." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0018/document.

Full text
Abstract:
This thesis studies control-law synthesis for servo drives, a PMSM directly driving a flexible load with uncertain parameters, with only the motor position measured. The aim of the control law is to minimize the effects of these parameter variations while meeting industrial-type specifications (response time at 5%, overshoot, implementation and synthesis simplicity). As a result, an observer is implemented jointly with a controller. A state-feedback controller obtained by minimizing a linear quadratic criterion, ensuring placement of the dominant pole, is associated with a Kalman observer. Both structures employ standard design methods: pole placement and choice of the Kalman weighting matrices. For the latter, two strategies are considered. The first uses the classical diagonal weighting matrices; many degrees of freedom are available and give good results. The second defines the state-noise matrix from the variation of the system's dynamic matrix; the number of degrees of freedom is reduced, the results remain similar to the previous strategy, and the synthesis is simplified. This yields a method requiring little theoretical investment from an engineer, but one that is not robust in itself. Therefore, micro-analysis, a method characterizing robust stability, is applied in parallel with an evolutionary algorithm, allowing a faster and more accurate synthesis than a human operator. This complete method shows the advantages of a cross-synthesis of the observer and the controller instead of a separate synthesis. Indeed, for systems with varying parameters, the optimal placement of the control and observation dynamics no longer follows the classical decoupled strategy: the dynamics become coupled or even inverted (the controller dynamics slower than the observer's). Experimental results corroborate the simulations and explain the effects of the observers and controllers on the system's behaviour.
APA, Harvard, Vancouver, ISO, and other styles
36

Getachew, Sileshi Biruk. "Algorithmic and Architectural optimization techniques in particle filtering for FPGA-Based navigation applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/393935.

Full text
Abstract:
Els filtres de partícules (FPs) són una tipologia de tècniques d'estimació bayesiana basades en simulacions Monte Carlo que es troben entre els sistemes d'estimació que ofereixen millors rendiments i major flexibilitat en la resolució de problemes d’estimació no lineals i no gaussians. No obstant això, aquest millor rendiment i major flexibilitat es contraposa amb la major complexitat computacional del sistema, motiu pel que fins ara la seva aplicació a problemes de temps real ha estat limitada. La majoria de les aplicacions en temps real, en particular en el camp de la robòtica mòbil, com ara el seguiment, la localització i mapatge simultani (SLAM) i la navegació, tenen limitacions en el rendiment, l'àrea, el cost, la flexibilitat i el consum d'energia. La implementació software de FPs en plataformes d’execució seqüencial en aquestes aplicacions és sovint prohibitiu per l’elevat cost computacional. Per tant per aproximar els FPs a aplicacions en temps real és necessària l'acceleració de les operacions de còmput utilitzant plataformes hardware. Donat que la major part de les operacions es poden realitzar de forma independent, el pipeline i el processament en paral·lel poden ser explotats de manera efectiva mitjançant l'ús de hardware apropiat, com ara utilitzant Field Programmable Gate Arrays (FPGAs). La flexibilitat que tenen per introduir la paral·lelització fa que puguin ser emprades en aplicacions de temps real. Amb aquest enfocament, aquesta tesis doctoral s’endinsa en el difícil repte d’atacar la complexitat computacional dels filtres de partícules introduint tècniques d’acceleració hardware i implementació sobre FPGAs, amb l’objectiu d’incrementar el seu rendiment en aplicacions de temps real. Per tal d’implementar filtres de partícules d’alt rendiment en hardware,aquesta tesis ataca la identificació dels colls d’ampolla computacionals en FPs i proposa, dissenya i implementa tècniques d’acceleració hardware per a FPs. 
Particle filters (PFs) are a class of Bayesian estimation techniques based on Monte Carlo simulations that are among the estimation frameworks offering superior performance and flexibility in addressing non-linear and non-Gaussian estimation problems. However, this superior performance and flexibility comes at the cost of higher computational complexity, which has so far limited their application to real-time problems. Most real-time applications, in particular in the field of mobile robotics, such as tracking, simultaneous localization and mapping (SLAM) and navigation, have constraints on performance, area, cost, flexibility and power consumption. Software implementation of PFs on sequential platforms is often prohibitive for such real-time applications. Thus, to make PFs feasible for real-time applications, acceleration of the PF computations using hardware circuitry is essential. As most of the operations in PFs can be performed independently, pipelining and parallel processing can be effectively exploited on an appropriate hardware platform, such as field programmable gate arrays (FPGAs), which offer the flexibility to introduce parallelization and open up a wide range of real-time applications for PFs. The focus of this PhD thesis is therefore to address the computational complexity of PFs by introducing FPGA hardware acceleration, improving their real-time performance and making their use feasible in these applications. For a high-throughput hardware realization of PFs, the issues addressed in this thesis include the identification of the computational bottlenecks of PFs and the proposal and design of PF hardware acceleration techniques. Based on these techniques, the design and implementation of a PF HW/SW architecture is presented.
In addition, a new approach for full parallelization of PFs is presented, which leads to a distributed particle filtering implementation with a simplified parallel architecture. Finally, the design of a fully hardware PF processor is presented, in which all particle filtering steps, applied to the SLAM problem, are implemented on an FPGA. As part of the PF processor design, important problems of PF-based SLAM are also solved. Furthermore, a parallel laser scanner is designed and implemented as a PF co-processor using the Bresenham line drawing algorithm. The proposed hardware architecture has led to the development of the first fully hardware (FPGA) prototype of a PF applied to the SLAM problem.
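As a point of reference for the pipeline described above, a generic bootstrap particle filter (predict, weight, resample) can be sketched in a few lines. This is a toy 1-D localization model with assumed noise levels and particle count, not the thesis's SLAM design or its hardware architecture:

```python
import math
import random

random.seed(0)

def bootstrap_pf_step(particles, weights, control, measurement,
                      motion_noise=0.1, meas_noise=0.5):
    """One predict / weight / resample cycle of a bootstrap particle filter
    for a toy 1-D localization model (illustrative assumptions only)."""
    n = len(particles)
    # Predict: propagate each particle through the (noisy) motion model.
    particles = [x + control + random.gauss(0.0, motion_noise) for x in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [w * math.exp(-0.5 * ((measurement - x) / meas_noise) ** 2)
               for x, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Systematic (low-variance) resampling: O(n) and hardware-friendly,
    # one reason it is a popular choice in FPGA realizations.
    cum, c = [], 0.0
    for w in weights:
        c += w
        cum.append(c)
    u0, i, resampled = random.random() / n, 0, []
    for k in range(n):
        u = u0 + k / n
        while i < n - 1 and cum[i] < u:
            i += 1
        resampled.append(particles[i])
    return resampled, [1.0 / n] * n

# Toy run: the robot moves +1 per step; we fuse noisy position readings.
n = 500
particles = [random.uniform(-1.0, 1.0) for _ in range(n)]
weights = [1.0 / n] * n
true_x = 0.0
for _ in range(20):
    true_x += 1.0
    z = true_x + random.gauss(0.0, 0.5)
    particles, weights = bootstrap_pf_step(particles, weights, 1.0, z)
estimate = sum(p * w for p, w in zip(particles, weights))
```

Because every particle's predict and weight steps are independent, only the resampling step couples them, which is exactly where hardware PF designs concentrate their effort.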
APA, Harvard, Vancouver, ISO, and other styles
37

Samek, Michal. "Optimization of Aircraft Tracker Parameters." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234937.

Full text
Abstract:
This master's thesis deals with the optimization of an aircraft tracking system used for air traffic control. It describes a methodology for evaluating the accuracy of the tracking system and surveys the relevant object-tracking algorithms. Three approaches to the problem are then proposed. The first attempts to identify the parameters of the filtering algorithms with the Expectation-Maximisation algorithm, an implementation of the maximum-likelihood method. The second approach is based on simple estimates of normal-distribution parameters from the measured and reference data. Finally, a solution using the Evolution Strategy optimization algorithm is examined. The final evaluation shows that the third approach is the most suitable for the given problem.
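The Evolution Strategy approach favoured in the evaluation above can be illustrated by its simplest member, a (1+1)-ES with 1/5th-success-rule step-size adaptation. The objective below is a hypothetical stand-in for the tracker-accuracy cost, not the actual tracker model:

```python
import random

random.seed(2)

def one_plus_one_es(cost, x0, sigma=1.0, iters=300):
    """(1+1) Evolution Strategy with 1/5th-success-rule step-size
    adaptation, the simplest member of the evolution-strategy family."""
    x, fx = list(x0), cost(x0)
    successes = 0
    for t in range(1, iters + 1):
        # Mutate every coordinate with isotropic Gaussian noise.
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = cost(y)
        if fy < fx:                # keep the offspring only if it improves
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:            # adapt sigma by the 1/5th success rule
            sigma *= 1.5 if successes > 4 else 0.6
            successes = 0
    return x, fx

# Hypothetical stand-in objective: squared distance to "true" parameters.
cost = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best, fbest = one_plus_one_es(cost, [0.0, 0.0])
```

The appeal for tracker tuning is that the cost function is treated as a black box, so no gradients of the tracking pipeline are needed.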
APA, Harvard, Vancouver, ISO, and other styles
38

Eason, John P. "A Trust Region Filter Algorithm for Surrogate-based Optimization." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1145.

Full text
Abstract:
Modern nonlinear programming solvers can efficiently handle very large scale optimization problems when accurate derivative information is available. However, black box or derivative free modeling components are often unavoidable in practice when the modeled phenomena may cross length and time scales. This work is motivated by examples in chemical process optimization where most unit operations have well-known equation oriented representations, but some portion of the model (e.g. a complex reactor model) may only be available with an external function call. The concept of a surrogate model is frequently used to solve this type of problem. A surrogate model is an equation oriented approximation of the black box that allows traditional derivative based optimization to be applied directly. However, optimization tends to exploit approximation errors in the surrogate model, leading to inaccurate solutions and repeated rebuilding of the surrogate model. Even if the surrogate model is perfectly accurate at the solution, this only guarantees that the original problem is feasible. Since optimality conditions require gradient information, a higher degree of accuracy is required. In this work, we consider the general problem of hybrid glass box/black box optimization, or gray box optimization, with a focus on guaranteeing that a surrogate-based optimization strategy converges to optimal points of the original detailed model. We first propose an algorithm that combines ideas from SQP filter methods and derivative free trust region methods to solve this class of problems. The black box portion of the model is replaced by a sequence of surrogate models in trust region subproblems. By carefully managing surrogate model construction, the algorithm is guaranteed to converge to true optimal solutions. Then, we discuss how this algorithm can be modified for effective application to practical problems.
Performance is demonstrated on a test set of benchmarks as well as a set of case studies relating to chemical process optimization. In particular, application to the oxycombustion carbon capture power generation process leads to significant efficiency improvements. Finally, extensions of surrogate-based optimization to other contexts are explored through a case study with physical properties.
APA, Harvard, Vancouver, ISO, and other styles
39

Sprungk, Björn. "Numerical Methods for Bayesian Inference in Hilbert Spaces." Doctoral thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-226748.

Full text
Abstract:
Bayesian inference occurs when prior knowledge about uncertain parameters in mathematical models is merged with new observational data related to the model outcome. In this thesis we focus on models given by partial differential equations where the uncertain parameters are coefficient functions belonging to infinite dimensional function spaces. The result of the Bayesian inference is then a well-defined posterior probability measure on a function space describing the updated knowledge about the uncertain coefficient. For decision making and post-processing it is often required to sample or integrate with respect to the posterior measure. This calls for sampling or numerical methods which are suitable for infinite dimensional spaces. In this work we focus on Kalman filter techniques based on ensembles or polynomial chaos expansions as well as Markov chain Monte Carlo methods. We analyze the Kalman filters by proving convergence and discussing their applicability in the context of Bayesian inference. Moreover, we develop and study an improved dimension-independent Metropolis-Hastings algorithm. Here, we show geometric ergodicity of the new method by a spectral gap approach using a novel comparison result for spectral gaps. Besides that, we observe and further analyze the robustness of the proposed algorithm with respect to decreasing observational noise. This robustness is another desirable property of numerical methods for Bayesian inference. The work concludes with the application of the discussed methods to a real-world groundwater flow problem illustrating, in particular, the Bayesian approach for uncertainty quantification in practice.
APA, Harvard, Vancouver, ISO, and other styles
40

Lai, Ching-An. "Global optimization algorithms for adaptive infinite impulse response filters." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000558.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Rich, Thomas H. "Algorithms for computer aided design of digital filters." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/22867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Lou, Shan. "Discrete algorithms for morphological filters in geometrical metrology." Thesis, University of Huddersfield, 2013. http://eprints.hud.ac.uk/id/eprint/18103/.

Full text
Abstract:
In geometrical metrology, morphological filters are useful tools for surface texture analysis and functional prediction. Although they are generally accepted and regarded as the complement to mean-line based filters, they are not universally adopted in practice due to a number of fatal limitations in their implementations: they are restricted to planar and uniformly sampled surfaces, are time-consuming, and suffer from end distortions and limited structuring element sizes. A novel morphological method is proposed based on the alpha shape, with the advantages over traditional methods that it enables arbitrarily large ball radii and applies to freeform and non-uniformly sampled surfaces. A practical algorithm is developed based on the theoretical link between the alpha hull and morphological envelopes. The performance bottleneck due to the costly 3D Delaunay triangulation is solved by divide-and-conquer optimization. To overcome the deficits of the alpha shape method, namely that the structuring element has to be circular and the computation relies on the Delaunay triangulation, a set of definitions, propositions and comments for searching contact points is proposed and mathematically proved based on alpha shape theory, followed by the construction of a recursive algorithm. The algorithm can precisely capture contact points without performing the Delaunay triangulation. By correlating the convex hull and morphological envelopes, the Graham scan algorithm, originally developed for the convex hull, is modified to compute morphological profile envelopes with excellent performance. The three novel methods, along with the two traditional methods, are compared and analyzed to evaluate their advantages and disadvantages. The end effects of morphological filtration on open surfaces are discussed and four end effect correction methods are explored.
Case studies are presented to demonstrate the feasibility and capabilities of using the proposed discrete algorithms.
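The basic envelope operation underlying such morphological filters can be illustrated on a sampled 1-D profile. The sketch below uses a flat (line-segment) structuring element for brevity, whereas the thesis's methods target disk/ball elements via alpha shapes:

```python
def closing_envelope(profile, half_width):
    """Morphological closing (dilation then erosion) of a sampled 1-D
    profile with a flat structuring element of width 2*half_width + 1."""
    n = len(profile)

    def window(i):
        # Clamp the structuring-element window at the profile ends.
        return range(max(0, i - half_width), min(n, i + half_width + 1))

    dilated = [max(profile[j] for j in window(i)) for i in range(n)]
    return [min(dilated[j] for j in window(i)) for i in range(n)]

# A pit narrower than the element is bridged by the closing envelope...
narrow = closing_envelope([5, 5, 5, 0, 5, 5, 5], 1)
# ...while a pit wider than the element is preserved.
wide = closing_envelope([5, 5, 0, 0, 0, 5, 5], 1)
```

The simple end-clamping above is exactly the kind of shortcut that produces the end distortions the thesis discusses; its correction methods address this boundary behaviour.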
APA, Harvard, Vancouver, ISO, and other styles
43

Sankaran, Sundar G. "On Ways to Improve Adaptive Filter Performance." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/30198.

Full text
Abstract:
Adaptive filtering techniques are used in a wide range of applications, including echo cancellation, adaptive equalization, adaptive noise cancellation, and adaptive beamforming. The performance of an adaptive filtering algorithm is evaluated based on its convergence rate, misadjustment, computational requirements, and numerical robustness. We attempt to improve the performance by developing new adaptation algorithms and by using "unconventional" structures for adaptive filters. Part I of this dissertation presents a new adaptation algorithm, which we have termed the Normalized LMS algorithm with Orthogonal Correction Factors (NLMS-OCF). The NLMS-OCF algorithm updates the adaptive filter coefficients (weights) on the basis of multiple input signal vectors, while NLMS updates the weights on the basis of a single input vector. The well-known Affine Projection Algorithm (APA) is a special case of our NLMS-OCF algorithm. We derive convergence and tracking properties of NLMS-OCF using a simple model for the input vector. Our analysis shows that the convergence rate of NLMS-OCF (and also APA) is exponential and that it improves with an increase in the number of input signal vectors used for adaptation. While we show that, in theory, the misadjustment of the APA class is independent of the number of vectors used for adaptation, simulation results show a weak dependence. For white input the mean squared error drops by 20 dB in about 5N/(M+1) iterations, where N is the number of taps in the adaptive filter and (M+1) is the number of vectors used for adaptation. The dependence of the steady-state error and of the tracking properties on the three user-selectable parameters, namely step size, number of vectors used for adaptation (M+1), and input vector delay D used for adaptation, is discussed. While the lag error depends on all of the above parameters, the fluctuation error depends only on step size. 
Increasing D results in a linear increase in the lag error and hence the total steady-state mean-squared error. The optimum choices for step size and M are derived. Simulation results are provided to corroborate our analytical results. We also derive a fast version of our NLMS-OCF algorithm that has a complexity of O(NM). The fast version of the algorithm performs orthogonalization using a forward-backward prediction lattice. We demonstrate the advantages of using NLMS-OCF in a practical application, namely stereophonic acoustic echo cancellation. We find that NLMS-OCF can provide faster convergence, as well as better echo rejection, than the widely used APA. While the first part of this dissertation attempts to improve adaptive filter performance by refining the adaptation algorithm, the second part of this work looks at improving the convergence rate by using different structures. From an abstract viewpoint, the parameterization we decide to use has no special significance, other than serving as a vehicle to arrive at a good input-output description of the system. However, from a practical viewpoint, the parameterization decides how easy it is to numerically minimize the cost function that the adaptive filter is attempting to minimize. A balanced realization is known to minimize the parameter sensitivity as well as the condition number for Grammians. Furthermore, a balanced realization is useful in model order reduction. These properties of the balanced realization make it an attractive candidate as a structure for adaptive filtering. We propose an adaptive filtering algorithm based on balanced realizations. The third part of this dissertation proposes a unit-norm-constrained equation-error based adaptive IIR filtering algorithm. Minimizing the equation error subject to the unit-norm constraint yields an unbiased estimate for the parameters of a system, if the measurement noise is white. 
The proposed algorithm uses the hyper-spherical transformation to convert this constrained optimization problem into an unconstrained optimization problem. It is shown that the hyper-spherical transformation does not introduce any new minima in the equation error surface. Hence, simple gradient-based algorithms converge to the global minimum. Simulation results indicate that the proposed algorithm provides an unbiased estimate of the system parameters.
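The relationship between NLMS and the affine projection family discussed in the first part above can be sketched as follows. This is a generic APA update on a toy system-identification problem (filter length, block size, and step size are assumptions), without NLMS-OCF's orthogonal correction factors or the fast lattice version:

```python
import numpy as np

rng = np.random.default_rng(0)

def apa_update(w, X, d, mu=1.0, eps=1e-6):
    """One Affine Projection Algorithm step: correct the adaptive weights
    using the M+1 most recent input vectors at once (M = 0 reduces to NLMS)."""
    e = d - X @ w                            # a-priori errors for the block
    G = X @ X.T + eps * np.eye(X.shape[0])   # regularized Gram matrix
    return w + mu * X.T @ np.linalg.solve(G, e), e

# Identify an unknown 8-tap FIR system driven by white noise.
N, M1, steps = 8, 4, 400                     # taps, vectors per update, samples
h = rng.standard_normal(N)                   # the "unknown" system
x = rng.standard_normal(steps)
w = np.zeros(N)
for n in range(M1 + N, steps):
    # Stack the M+1 most recent regressor vectors and desired samples.
    X = np.array([x[n - k - N + 1 : n - k + 1][::-1] for k in range(M1)])
    d = X @ h                                # noiseless desired signal
    w, _ = apa_update(w, X, d)
misalignment = np.linalg.norm(w - h)
```

With white input a single-vector NLMS update already converges well; using several vectors per update (M+1 > 1) mainly pays off for correlated inputs such as speech, which is why the abstract's stereo echo-cancellation example benefits from it.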
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
44

Semko, David A. "Optical flow analysis and Kalman Filter tracking in video surveillance algorithms." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Jun%5FSemko.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2007.
Thesis Advisor(s): Monique P. Fargues. "June 2007." Includes bibliographical references (p. 69). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
45

Greenwood, Aaron Blake. "Implementation of Adaptive Filter Algorithms for the Suppression of Thermoacoustic Instabilities." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31299.

Full text
Abstract:
The main goal of this work was to develop adaptive filter algorithms and test their performance in active combustion control. Several algorithms were incorporated, which are divided into gradient descent algorithms and pattern searches. The algorithms were tested on three separate platforms. The first was an analog electronic simulator, which uses a second order acoustics model and a first order low pass filter to simulate the flame dynamics of an unstable tube combustor. The second was a flat flame, methane-air Rijke tube. The third can be considered a quasi-LDI liquid fuel combustor with a thermal output of approximately 30 kW. Actuation included the use of an acoustic actuator for the Rijke tube and a proportional throttling valve for the liquid fuel rig. Proportional actuation, pulsed actuation, and subharmonic control were all investigated throughout this work. The proportional actuation tests on the Rijke tube combustor have shown that, in general, the gradient descent algorithms outperformed the pattern search algorithms. Although the pattern search algorithms were able to suppress the pressure signal to levels comparable to the gradient descent algorithms, the convergence time was lower for the gradient descent algorithms. The gradient algorithms were also superior in the presence of actuator authority limitations. The pulsed actuation on the Rijke tube showed that the convergence time is decreased for this type of actuation. This is because the control signal has a fixed amplitude, so the algorithms did not have to search for a sufficient magnitude. It was shown that subharmonic control could be used in conjunction with the algorithms. Control was achieved at the second and third subharmonic, and control was maintained for much higher subharmonics. The cost surface of the liquid fuel rig was obtained as the mean squared error of the combustor pressure as a function of the magnitude and phase of the controller.
The adaptive algorithms were able to achieve some suppression of the pressure oscillations but did not converge to the optimal phase as shown in the cost surface. Simulations using the data from this cost surface were also performed. With the addition of a probing function, the algorithms were able to converge to a near-optimal condition.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
46

Vadnerkar, Sarang. "An Algorithm for the design of a programmable current mode filter cell." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1261601029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Herman, Ivo. "Algoritmy odhadu stavových veličin elektrických pohonů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219775.

Full text
Abstract:
This thesis deals with state estimation methods for sensorless control of AC drives and with the possibilities of such estimation. Observability conditions were derived for a synchronous drive, as well as conditions for observability of the moment of inertia and the load torque for both drive types, synchronous and asynchronous. The feasibility of the estimation was confirmed by experimental results. The covariance matrices for all filters were found using an EM algorithm. Both drives were also identified. The algorithms used for state estimation are the Extended Kalman Filter, the Unscented Kalman Filter, Particle Filters and the Moving Horizon Estimator.
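The Kalman-filter core common to the EKF and UKF variants listed above can be sketched for the linear case. The constant-velocity model below is an illustrative stand-in, not a drive model:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter, the linear core
    that the EKF extends by local linearization of a nonlinear model."""
    # Predict: propagate the state estimate and its covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fuse the measurement z via the Kalman gain.
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative constant-velocity model: estimate position and velocity
# from noisy position measurements.
rng = np.random.default_rng(1)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
H = np.array([[1.0, 0.0]])             # only position is measured
Q = 1e-4 * np.eye(2)                   # assumed process noise
R = np.array([[0.25]])                 # measurement noise variance
true = np.array([0.0, 1.0])            # true initial position and velocity
x, P = np.zeros(2), np.eye(2)
for _ in range(200):
    true = F @ true
    z = H @ true + rng.normal(0.0, 0.5, size=1)
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

Tuning Q and R by hand is exactly the task that the EM-based covariance identification mentioned in the abstract automates.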
APA, Harvard, Vancouver, ISO, and other styles
48

陳力 and Li Chen. "Design of linear phase paraunitary filter banks and finite length signal processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31235608.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

McWhorter, Francis LeRoy. "Novel structures for very fast adaptive filters." Ohio University / OhioLINK, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1173322289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Jalůvková, Lenka. "Eliminace zkreslení obrazů duhovky." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220862.

Full text
Abstract:
This master's thesis focuses on the suppression of distortion in iris images. The aim of this work is to study and describe existing degradation methods (1D motion blur, uniform 2D motion blur, Gaussian blur, atmospheric turbulence blur, and out-of-focus blur). These methods are implemented and tested on a set of images. Methods for suppressing these distortions are then designed: inverse filtration, Wiener filtration and iterative deconvolution. All of these methods were tested and evaluated. Based on the experimental results, we can conclude that Wiener-filter restoration is the most accurate approach in our test set. It achieves the best results in both normal and iterative modes.
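Wiener restoration of the kind evaluated above can be sketched in the frequency domain. The image, point spread function, and noise-to-signal ratio below are synthetic assumptions, not the thesis's test data:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener restoration: multiply the spectrum by
    conj(H) / (|H|^2 + NSR), where H is the blur transfer function and
    NSR an assumed noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Synthetic demo: a bright square degraded by 9-pixel horizontal motion blur.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
psf = np.zeros((64, 64))
psf[0, :9] = 1.0 / 9.0                 # 1-D motion-blur kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
mse_blurred = np.mean((blurred - img) ** 2)
mse_restored = np.mean((restored - img) ** 2)
```

The NSR term is what distinguishes this from plain inverse filtration: where the blur transfer function is near zero, it damps the gain instead of amplifying noise, which is why Wiener restoration typically wins the comparison described in the abstract.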
APA, Harvard, Vancouver, ISO, and other styles