To see the other types of publications on this topic, follow the link: VARION algorithm.

Dissertations / Theses on the topic 'VARION algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 44 dissertations / theses for your research on the topic 'VARION algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Chowuraya, Tawanda. "Online content clustering using variant K-Means Algorithms." Thesis, Cape Peninsula University of Technology, 2019. http://hdl.handle.net/20.500.11838/3089.

Full text
Abstract:
Thesis (MTech)--Cape Peninsula University of Technology, 2019
We live at a time when a great deal of information is created, and much of it is redundant. There is a huge amount of online information in the form of news articles that discuss similar stories, and the number of articles is projected to grow. This growth makes it difficult for a person to process all of that information in order to stay up to date on a subject. There is therefore a need for a solution that can organise similar information into specific themes. The solution is a branch of Artificial Intelligence (AI) called machine learning (ML), using clustering algorithms: items of similar information are grouped into containers. Once the information is clustered, people can be presented with information on their subject of interest, grouped together, and the information in a group can be further processed into a summary. This research focuses on unsupervised learning. The literature indicates that K-Means is one of the most widely used unsupervised clustering algorithms: it is easy to learn, easy to implement, and efficient. However, there are many variations of K-Means. The research seeks to find a variant of K-Means that can cluster duplicate or similar news articles into correct semantic groups with acceptable performance. The research is an experiment. News articles were collected from the internet using gocrawler, a program that takes Uniform Resource Locators (URLs) as arguments and collects a story from the website each URL points to. The URLs are read from a repository. The stories come riddled with adverts and images from the web page; this is referred to as dirty text. The dirty text is sanitised, that is, cleaned by removing the adverts and images. The clean text is stored in a repository and is one input to the algorithm. The other input is the K value: all K-Means-based variants take a K value that defines the number of clusters to be produced. The stories are manually classified and labelled, each story with the class to which it belongs, in order to check the accuracy of the machine clustering. The data collection process itself was not unsupervised, but the algorithms used to cluster are entirely unsupervised. A total of 45 stories were collected and 9 manual clusters were identified; under each manual cluster there are sub-clusters of stories about one specific event. The performance of all the variants is compared to find the one with the best clustering results; performance was checked by comparing the manual classification with the clustering results from the algorithm. Each K-Means variant is run with the same settings on the same data set of 45 stories. The settings used are:
• dimensionality of the feature vectors,
• window size,
• maximum distance between the current and predicted word in a sentence,
• minimum word frequency,
• a specified range of words to ignore,
• the number of threads used to train the model,
• the training algorithm, either distributed memory (PV-DM) or distributed bag of words (PV-DBOW),
• the initial learning rate, which decreases to a minimum alpha as training progresses,
• the number of iterations per cycle,
• the final learning rate,
• the number of clusters to form,
• the number of times the algorithm will be run,
• the method used for initialisation.
The results obtained show that K-Means can perform better than K-Modes; they are tabulated and presented in graphs in chapter six. Clustering can be improved by incorporating Named Entity Recognition (NER) into the K-Means algorithms. Results can also be improved by implementing a multi-stage clustering technique, where initial clustering is done and each cluster group is then clustered further to achieve finer results.
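A minimal sketch of the clustering-and-evaluation loop described in this abstract, assuming the stories have already been embedded as fixed-length vectors (the Doc2Vec-style settings listed above are not reproduced here; the vectors and labels below are placeholders):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(45, 100))              # 45 story vectors, 100-dim (hypothetical)
    manual_labels = np.repeat(np.arange(9), 5)  # 9 manual clusters of 5 stories each

    kmeans = KMeans(n_clusters=9, n_init=10, random_state=0)
    predicted = kmeans.fit_predict(X)           # K = 9, matching the manual clusters

    # Compare machine clusters against the manual classification.
    print("adjusted Rand index:", adjusted_rand_score(manual_labels, predicted))

The adjusted Rand index stands in for the manual-versus-machine comparison the abstract describes; swapping KMeans for another variant keeps the rest of the loop unchanged.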
2

Lattarulo, Valerio. "Development of a multi-objective variant of the alliance algorithm." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/270076.

Full text
Abstract:
Optimization methodologies are particularly relevant nowadays due to the ever-increasing power of computers and the enhancement of mathematical models to better capture reality. These computational methods are used in many different fields, and some of them, such as metaheuristics, have often been found helpful and efficient for solving practical applications where finding optimal solutions is not straightforward. Many practical applications are multi-objective optimization problems: there is more than one objective to optimize, and the solutions found represent trade-offs between the competing objectives. In the last couple of decades, several metaheuristic approaches have been developed and applied to practical problems, and multi-objective versions of the main single-objective approaches have been created. The Alliance Algorithm (AA) is a recently developed single-objective optimization algorithm based on the metaphorical idea that several tribes, with certain skills and resource needs, try to conquer an environment for their survival and ally together to improve the likelihood of conquest. The AA method has yielded reasonable results in the several fields to which it has been applied, so the development in this thesis of a multi-objective variant to handle a wider range of problems is a natural extension. The first challenge in the development of the Multi-objective Alliance Algorithm (MOAA) was understanding the modifications needed for this generalization. The initial version was followed by other versions aimed at improving the MOAA's performance so that it could be used to solve real-world problems; the most relevant variations, which led to the final version of the approach, are presented. The second major contribution of this research was the development and combination of features, or the appropriate modification of methodologies from the literature, to fit within the MOAA and enhance its potential and performance. An analysis of the features in the final version of the algorithm was performed to better understand and verify their behaviour and relevance within the algorithm. The third contribution was the testing of the algorithm on a test-bed of problems, with results compared against those obtained using well-known baseline algorithms. Moreover, the last version of the MOAA was also applied to a number of real-world problems, and the results, compared against those given by baseline approaches, are discussed. Overall, the results show that the MOAA is a competitive approach which can be used 'out-of-the-box' on problems with different mathematical characteristics and in a wide range of applications. Finally, a summary of the objectives achieved, the current status of the research, and the work that can be done in future to further improve the performance of the algorithm is provided.
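Since the MOAA returns trade-off solutions, the primitive underlying any such comparison is Pareto dominance. A minimal, generic sketch for minimization problems (standard multi-objective machinery, not the MOAA itself):

    from typing import List, Sequence

    def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
        """a dominates b if it is no worse in every objective and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
        """Keep only the non-dominated trade-off solutions."""
        return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

    print(pareto_front([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]))
    # (3.0, 3.0) is dropped: it is dominated by (2.0, 2.0)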
3

Andersson, Mathias. "Image processing algorithms for compensation of spatially variant blur." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2947.

Full text
Abstract:
This report addresses the problem of software correction of spatially variant blur in digital images. The problem arises when the camera optics contain flaws, when the scene contains multiple moving objects with different relative motion, or when the camera itself is, for example, rotated. Compensation through deconvolution is impossible due to the shift variance of the PSF, hence alternative methods are required. A number of methods have been published; this report evaluates two of them.
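To see why a single deconvolution cannot work in this setting, it helps to write the blur down: with a shift-variant PSF, every pixel is smeared by a different kernel. A minimal sketch, assuming a Gaussian PSF whose width grows with the horizontal coordinate (illustrative only, not one of the two methods evaluated in the report):

    import numpy as np

    def variant_blur(img: np.ndarray, sigma_left=0.5, sigma_right=3.0, radius=6):
        h, w = img.shape
        pad = np.pad(img, radius, mode="edge")
        out = np.zeros_like(img, dtype=float)
        ax = np.arange(-radius, radius + 1)
        xx, yy = np.meshgrid(ax, ax)
        for x in range(w):
            # PSF width interpolated across the image: shift-variant by construction.
            sigma = sigma_left + (sigma_right - sigma_left) * x / max(w - 1, 1)
            k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
            k /= k.sum()
            for y in range(h):
                patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                out[y, x] = (patch * k).sum()
        return out

    blurred = variant_blur(np.eye(32))   # each column is blurred by a different kernel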
4

Belmonte, Mula Irene. "Actualización del algoritmo de diagnóstico de laboratorio del déficit de alfa-1-antitripsina: incorporación de la detección genotípica de la variante deficitaria Mmalton y utilización de muestras alternativas a la sangre total." Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/457763.

Full text
Abstract:
Alpha-1-antitrypsin deficiency (AATD) is a genetic disorder characterized by low serum levels of the alpha-1-antitrypsin protein (AAT) and a high risk of developing early-onset emphysema and liver disease. About 125 allelic variants of the AAT gene have been described. The most common normal AAT allele is the M variant, and the most frequent deficient variants are S and Z, the latter causing a severe deficiency. However, another rare deficient variant, called Mmalton, which causes a deficiency similar to variant Z, is considered to be the second cause of severe AATD in Spain. This variant is difficult to detect with the common diagnostic techniques (serum AAT measurement and phenotype characterization). This fact has contributed to misclassification of the variant and the subsequent underestimation of its real prevalence in the population. Thus, we designed an allele-specific genotyping technique for the detection of the Mmalton variant and tested its applicability using whole blood, DBS (dried blood spot) and serum samples. The results showed the utility of this technique for the rapid and cost-effective characterization of a deficiency variant that is difficult to detect. Moreover, this method could be adapted for the study of the most prevalent rare variants in each region. When genotyping is required, DNA from whole blood or a DBS sample is necessary. Occasionally these kinds of samples are not available in the laboratory and a new extraction is required, causing a significant delay in AATD diagnosis. To avoid this, we developed allele-specific genotyping and exonic sequencing protocols for the AAT gene using the DNA present in serum samples (the sample used in the first steps of the diagnosis). These techniques were incorporated into the laboratory diagnostic algorithm, allowing a complete diagnosis using a single type of sample and thus avoiding the delay generated when genotyping is necessary. With the purpose of promoting the expansion of screening programs for the identification of AATD individuals, buccal swab samples were assessed as an alternative to DBS samples. This kind of sample is easy to collect, store and ship, and it yields a large amount of DNA, making genotyping straightforward. The results showed that these methodologies may help expand AATD screening programs and complete the diagnosis without delay. The algorithm proposed in this study allows a complete AATD diagnosis while avoiding two main problems: the significant delay incurred once the clinician requests the diagnosis, and the underdiagnosis of rare AAT variants.
5

Farver, Jennifer M. (Jennifer Margaret) 1976. "Continuous time algorithms for a variant of the dynamic traffic assignment problem." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/84247.

Full text
6

Emde, Anne-Katrin [Verfasser]. "Next-generation sequencing algorithms : from read mapping to variant detection / Anne-Katrin Emde." Berlin : Freie Universität Berlin, 2013. http://d-nb.info/1045194964/34.

Full text
7

Dowling, John F. "Algorithmic techniques for the acoustical analysis of exhaust systems." Thesis, Loughborough University, 2005. https://dspace.lboro.ac.uk/2134/12936.

Full text
Abstract:
One-dimensional, linear, plane-wave modelling of silencer systems in the frequency domain provides an efficient means of analysing their acoustic performance. Software packages are available to analyse silencers within these modelling parameters; however, they are heavily restricted. The thesis develops an algorithm that increases the computational efficiency of silencer analysis, and concentrates on how data within a software package are stored, retrieved and analysed. The computational efficiency is increased as a result of the predictable patterns caused by the repetitive nature of exhaust system analysis. The work uses the knowledge gained from the construction of two previous algorithms with similar parameters; it isolates and maximises their advantages whilst minimising their associated disadvantages. The new algorithm depends on identifying consecutively sequenced exhaust components, and sub-systems of such components, within the whole exhaust system. The algorithm is further generalised to include multiple time-variant sources, multiple radiation points and exhaust systems that have a balance pipe. Another feature of the improved algorithm is the option of modelling secondary noise sources, such as might arise from flow-generated noise or be included for active noise cancellation systems. The validation of these algorithmic techniques is demonstrated by comparing the theoretical noise predictions with experimental or known results. These predictions are obtained by implementing the algorithms in C++ using object-oriented programming techniques.
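Plane-wave frequency-domain silencer analysis of this kind is commonly implemented by cascading four-pole (transfer) matrices, one 2x2 matrix per component, so a chain of components is just a matrix product. A minimal sketch under that textbook assumption (not the thesis's specific algorithm; the dimensions are illustrative):

    import numpy as np

    RHO, C = 1.2, 343.0   # air density (kg/m^3) and speed of sound (m/s)

    def pipe(length, area, f):
        """Four-pole transfer matrix of a uniform pipe at frequency f (plane waves)."""
        k = 2 * np.pi * f / C
        Y = RHO * C / area                       # characteristic impedance
        return np.array([[np.cos(k * length), 1j * Y * np.sin(k * length)],
                         [1j * np.sin(k * length) / Y, np.cos(k * length)]])

    def transmission_loss(elements, f, duct_area):
        T = np.eye(2)
        for e in elements:                       # cascade = ordered matrix product
            T = T @ e
        Y = RHO * C / duct_area
        return 20 * np.log10(0.5 * abs(T[0, 0] + T[0, 1] / Y + Y * T[1, 0] + T[1, 1]))

    # Simple expansion chamber: inlet pipe, larger chamber, outlet pipe.
    f = 250.0
    elements = [pipe(0.1, 1e-3, f), pipe(0.3, 1e-2, f), pipe(0.1, 1e-3, f)]
    print(f"TL at {f:.0f} Hz: {transmission_loss(elements, f, 1e-3):.1f} dB")

A quick sanity check on the four-pole formula used here: a straight duct alone yields 0 dB transmission loss.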
8

Saleh, Sherine. "A novel dynamic feature selection and prediction algorithm for clinical decision involving high-dimensional and varied patient data." Thesis, Aston University, 2016. http://publications.aston.ac.uk/30072/.

Full text
Abstract:
Predicting suicide risk for mental health patients is a challenging task performed by practitioners on a daily basis. Failure to perform proper evaluation of this risk could have a direct effect on the patient's quality of life and possibly even lead to fatal outcomes. Risk predictions are based on data that are difficult to analyse because they involve a heterogeneous set of patients’ records from a high-dimensional set of potential variables. Patient heterogeneity forces the need for various types and numbers of questions to be asked regarding the individual profile and perceived level of risk. It also results in records having different combinations of present variables and a large percentage of missing ones. Another problem is that the data collected consist of risk judgements given by several thousand assessors for a large number of patients. The problem is how to use the associations between patient profiles and clinical judgements to generate a model that reflects the agreement across all practitioners. In this thesis, a novel dynamic feature selection algorithm is proposed which can predict the risk level based only on the most influential answers provided by the patient. The feature selection optimises the vector for predictions by selecting variables that maximise correlation with the assessors’ risk judgement and minimise mutual information within the ones already selected. The final vector is then classified using a linear regression equation learned for all patients with a matching set of variables. The overall approach has been named the Dynamic Feature Selection and Prediction algorithm, DFSP. The results show that the DFSP is at least as accurate or more accurate than alternative gold-standard approaches such as random forest classification trees. The comparison was based on accuracy and error measures applied to each risk level separately ensuring no preference to one risk over the other.
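A minimal sketch of the selection rule described in this abstract: greedily add the variable most correlated with the risk judgement, penalized by its mutual information with the variables already chosen. This is generic mRMR-style machinery, not the DFSP itself; the trade-off weight lam and the discretization used for the mutual information are assumptions:

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def select_features(X, y, k=5, lam=1.0, bins=8):
        # Discretize columns once, for mutual-information estimation.
        Xd = np.array([np.digitize(c, np.histogram_bin_edges(c, bins)) for c in X.T]).T
        chosen = []
        while len(chosen) < k:
            best, best_score = None, -np.inf
            for j in range(X.shape[1]):
                if j in chosen:
                    continue
                relevance = abs(np.corrcoef(X[:, j], y)[0, 1])
                redundancy = (np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                       for s in chosen]) if chosen else 0.0)
                if relevance - lam * redundancy > best_score:
                    best, best_score = j, relevance - lam * redundancy
            chosen.append(best)
        return chosen

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))
    y = X[:, 3] + 0.5 * X[:, 7] + rng.normal(size=200)   # depends on features 3 and 7
    print(select_features(X, y, k=3))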
9

Yavaş, Gökhan. "Algorithms for Characterizing Structural Variation in Human Genome." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1279345476.

Full text
10

Rohr, Andreas. "Ein Algorithmus zur Bestimmung zweifacher ASN-optimaler Variablenprüfpläne für normalverteilte Merkmale mit unbekannter Varianz /." Berlin : Mensch- & -Buch-Verl, 2009. http://d-nb.info/998778443/04.

Full text
11

Nachtigal, Noël Maurice. "A look-ahead variant of the Lanczos algorithm and its application to the quasi-minimal residual method for non-Hermitian linear systems." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13516.

Full text
12

Dimogianopoulos, Dimitrios. "Commande adaptative des systèmes linéaires à paramètres constants et lentement variant dans le temps." Compiègne, 1999. http://www.theses.fr/1999COMP1230.

Full text
Abstract:
This thesis concerns adaptive control of linear systems that can guarantee high performance of the closed-loop process. We first propose a scheme combining a standard least-squares (LS) identification algorithm with a novel controller. This scheme applies to systems with time-invariant parameters, and the adaptive controller was applied to the control of a real process (a steam generator). We then study a class of systems with time-varying (TV) parameters. To obtain successful identification, we propose a recursive LS algorithm that accommodates the peculiarities of TV systems; its use ensures that the covariance matrix always remains bounded, so it can be used to modify estimated parameters that do not correspond to controllable models. The control scheme is completed by a continuous-time pole-placement controller, and the stability of the closed-loop system is proved. Finally, we propose a non-recursive LS algorithm for estimating TV parameters that preserves the optimality of the solution of the estimation problem even when a modification of the estimated parameters is necessary. Because the data are processed off-line, the parameters are obtained periodically, so we complete this algorithm with a hybrid controller that operates continuously but whose parameters are also updated periodically. The closed-loop stability analysis is carried out and all signals remain bounded.
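For illustration, a minimal recursive least-squares (RLS) sketch with a forgetting factor and a crude covariance bound; the trace-clipping rule here is an illustrative stand-in, not the thesis's specific modification:

    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.98, trace_max=1e4):
        """One RLS update: theta = estimate, P = covariance, phi = regressor."""
        K = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + K * (y - phi @ theta)     # correct by the prediction error
        P = (P - np.outer(K, phi @ P)) / lam      # covariance update with forgetting
        if np.trace(P) > trace_max:               # keep the covariance bounded
            P *= trace_max / np.trace(P)
        return theta, P

    # Track a slowly drifting first-order model y = a*y_prev + b*u.
    rng = np.random.default_rng(0)
    theta, P, y_prev = np.zeros(2), 1e3 * np.eye(2), 0.0
    for t in range(500):
        a_true = 0.8 + 0.1 * np.sin(t / 100)      # slowly time-varying parameter
        u = rng.normal()
        y = a_true * y_prev + 0.5 * u + 0.01 * rng.normal()
        theta, P = rls_step(theta, P, np.array([y_prev, u]), y)
        y_prev = y
    print("estimated [a, b]:", theta)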
13

Campanini, Alessandro. "Online Parameters Estimation in Battery Systems for EV and PHEV Applications." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
The main target of this thesis is to assess whether two of the most advanced algorithms are able to perform online parameter estimation. Starting from a current profile generated by a real driving cycle and applied to an Electric Circuit Model (ECM) with known parameters, a voltage profile is generated. Then, the Extended Kalman Filter (EKF) and the Varied-Parameters Approach (VPA) are applied both to the known system and to a real battery cell profile with unknown parameters. The research has led to the result that even though the two algorithms have opposite characteristics in terms of accuracy and computational effort, there are some common results. Convergence and accuracy are strictly dependent on prior knowledge of the ECM parameter curves and on the hypotheses made to simplify the model, such as variable dependences, circuit complexity, etc. Therefore, when the algorithms are applied to a known system, perfect correspondence between estimated and real parameters is found, whereas when they are applied to an unknown system, convergence is not reached. For future research it is therefore recommended to introduce temperature, current and aging dependence into the system model, as well as to generate voltage profiles from more complex ECMs and to perform simulations with the same ECM used in this thesis.
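As a small illustration of online parameter estimation on a battery model, the sketch below runs a one-state Kalman filter that tracks the internal resistance R in the trivial model V = OCV - I*R, treating R as a random walk. The measurement is linear in R, so the EKF update collapses to the linear case; the OCV value and the noise levels are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    OCV = 3.7                      # open-circuit voltage, assumed constant here
    R_true, R_est, P = 0.05, 0.01, 1.0
    Q, Rn = 1e-8, 1e-4             # process and measurement noise variances

    for t in range(300):
        I = 1.0 + 0.5 * np.sin(t / 20)                  # drive-cycle-like current
        V = OCV - I * R_true + rng.normal(scale=1e-2)   # measured terminal voltage
        P = P + Q                  # predict: random-walk parameter model
        H = -I                     # measurement Jacobian dV/dR
        K = P * H / (H * P * H + Rn)
        R_est = R_est + K * (V - (OCV - I * R_est))
        P = (1 - K * H) * P

    print(f"true R = {R_true:.3f}, estimated R = {R_est:.3f}")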
14

Ungan, Cahit Ugur. "Nonlinear Image Restoration." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606796/index.pdf.

Full text
Abstract:
This thesis analyzes the process of deblurring degraded images generated by space-variant nonlinear image systems with Gaussian observation noise. The restoration of blurred images is performed using two methods: a modified version of the Optimum Decoding Based Smoothing Algorithm, and the Bootstrap Filter Algorithm, which is a version of particle filtering methods. MATLAB is used to perform the image estimation simulations. The results of simulations for various observation and image models are presented.
15

Ball, Cory BH. "The Apprentices' Tower of Hanoi." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etd/2512.

Full text
Abstract:
The Apprentices' Tower of Hanoi is introduced in this thesis. Several bounds are found in regards to optimal algorithms which solve the puzzle. Graph theoretic properties of the associated state graphs are explored. A brief summary of other Tower of Hanoi variants is also presented.
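For reference, the classical three-peg recursion against which all variants are measured; it moves n discs in the optimal 2**n - 1 moves (the Apprentices' variant rules themselves are not reproduced here):

    def hanoi(n, src="A", aux="B", dst="C", moves=None):
        if moves is None:
            moves = []
        if n > 0:
            hanoi(n - 1, src, dst, aux, moves)   # park n-1 discs on the spare peg
            moves.append((src, dst))             # move the largest disc
            hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 discs on top
        return moves

    print(len(hanoi(4)), "moves")                # 15 == 2**4 - 1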
16

Giraldo, Zuluaga Jhony Heriberto. "Graph-based Algorithms in Computer Vision, Machine Learning, and Signal Processing." Electronic Thesis or Diss., La Rochelle, 2022. http://www.theses.fr/2022LAROS037.

Full text
Abstract:
Graph representation learning and its applications have gained significant attention in recent years. Notably, Graph Neural Networks (GNNs) and Graph Signal Processing (GSP) have been extensively studied. GNNs extend the concepts of convolutional neural networks to non-Euclidean data modeled as graphs. Similarly, GSP extends the concepts of classical digital signal processing to signals supported on graphs. GNNs and GSP have numerous applications such as semi-supervised learning, point cloud semantic segmentation, prediction of individual relations in social networks, modeling proteins for drug discovery, and image and video processing. In this thesis, we propose novel approaches in video and image processing, GNNs, and the recovery of time-varying graph signals. Our main motivation is to use the geometrical information that we can capture from the data to avoid data-hungry methods, i.e., learning with minimal supervision. All our contributions rely heavily on developments in GSP and spectral graph theory. In particular, the sampling and reconstruction theory of graph signals plays a central role in this thesis. The main contributions of this thesis are summarized as follows: 1) we propose new algorithms for moving object segmentation using concepts of GSP and GNNs, 2) we propose a new algorithm for weakly-supervised semantic segmentation using hypergraph neural networks, 3) we propose and analyze GNNs using concepts from GSP and spectral graph theory, and 4) we introduce a novel algorithm based on the extension of a Sobolev smoothness function for the reconstruction of time-varying graph signals from discrete samples.
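A minimal sketch of the core of contribution 4, reduced to a single snapshot: reconstruct a graph signal from a few samples by penalizing Sobolev smoothness, i.e. minimize ||x_S - y_S||^2 + gamma * x^T (L + eps*I)^beta x. The chain graph, sample set and parameter values below are assumptions:

    import numpy as np

    n = 20
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # chain graph
    L = np.diag(A.sum(1)) - A                                      # graph Laplacian

    rng = np.random.default_rng(0)
    x_true = np.sin(np.linspace(0, np.pi, n))        # smooth signal on the graph
    sample = rng.choice(n, size=8, replace=False)    # observed vertices
    M = np.zeros((n, n)); M[sample, sample] = 1.0    # selection (masking) matrix
    y = M @ (x_true + 0.01 * rng.normal(size=n))

    gamma, eps, beta = 0.5, 0.1, 2
    S = np.linalg.matrix_power(L + eps * np.eye(n), beta)    # Sobolev operator
    x_hat = np.linalg.solve(M.T @ M + gamma * S, M.T @ y)    # closed-form minimizer
    print("reconstruction RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)))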
17

Braun, Felix [Verfasser], Kristin [Akademischer Betreuer] Paetzold, Kristin [Gutachter] Paetzold, and Rainer [Gutachter] Stark. "Application of algorithm-based validation tools for the validation of complex, multi-variant products / Felix Braun ; Gutachter: Kristin Paetzold, Rainer Stark ; Akademischer Betreuer: Kristin Paetzold ; Universität der Bundeswehr München, Fakultät für Luft- und Raumfahrttechnik." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2021. http://d-nb.info/1241842949/34.

Full text
18

Srinivas, L. "FIR System Identification Using Higher Order Cumulants -A Generalized Approach." Thesis, Indian Institute of Science, 1994. https://etd.iisc.ac.in/handle/2005/637.

Full text
Abstract:
The thesis presents algorithms, based on a linear algebraic solution, for identifying the parameters of an FIR system using only higher-order statistics, when only the output of the system corrupted by additive Gaussian noise is observed. Traditional parametric methods for estimating the parameters of the system have been based on the 2nd-order statistics of the system output. These methods suffer from the deficiency that they do not preserve the phase response of the system and hence cannot identify non-minimum-phase systems. To circumvent this problem, higher-order statistics, which preserve the phase characteristics of a process, can hence identify a non-minimum-phase system, and are insensitive to additive Gaussian noise, have been used in recent years. Existing algorithms for identifying the FIR parameters based on higher-order cumulants use the autocorrelation sequence as well, and give erroneous results in the presence of additive colored Gaussian noise. This problem can be overcome by algorithms that do not utilize the 2nd-order statistics. An existing relationship between the 2nd-order cumulants and the lth-order cumulants is generalized to a relationship between any two arbitrary kth- and lth-order cumulants. This new relationship is used to obtain new algorithms for FIR system identification which use only cumulants of order greater than 2, with no restriction other than the Gaussian nature of the additive noise sequence. Simulation studies demonstrate the failure of the existing algorithms when the imposed constraints on the 2nd-order statistics of the additive noise are violated, while the proposed algorithms perform very well and give consistent results. Recently, a new algebraic approach for parameter estimation, denoted the Linear Combination of Slices (LCS) method, was proposed, based on expressing the FIR parameters as a linear combination of the cumulant slices. The rank-deficient cumulant matrix S formed in the LCS method can be expressed as a product of matrices with a certain structure. The orthogonality between the subspace orthogonal to S and the range space of S is exploited to obtain a new class of algorithms for estimating the parameters of an FIR system. Numerical simulation studies demonstrate the good behaviour of the proposed algorithms. Analytical expressions for the covariance of the estimates of the FIR parameters for the different algorithms presented in the thesis are obtained, and numerical comparisons are made for specific cases. Numerical examples demonstrate the application of the proposed algorithms to channel equalization in data communication and as an initial solution for cumulant-matching nonlinear optimization methods.
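The basic quantity these identification algorithms consume is a higher-order cumulant estimated from the observed output. A minimal sketch for the third-order case, c3(t1, t2) = E[x(n) x(n+t1) x(n+t2)] of a zero-mean process:

    import numpy as np

    def third_order_cumulant(x, t1, t2):
        """Biased sample estimate of E[x(n) x(n+t1) x(n+t2)], for lags t1, t2 >= 0."""
        x = x - x.mean()                       # zero-mean, as the derivations assume
        m = len(x) - max(t1, t2)
        return float(np.mean(x[:m] * x[t1:t1 + m] * x[t2:t2 + m]))

    rng = np.random.default_rng(0)
    e = rng.exponential(1.0, 100_000) - 1.0          # zero-mean but skewed noise
    x = np.convolve(e, [1.0, -0.4], mode="valid")    # output of a toy FIR system
    g = rng.normal(size=100_000)                     # Gaussian noise
    print(third_order_cumulant(x, 0, 1), third_order_cumulant(g, 0, 1))

For Gaussian data the third-order cumulant vanishes (the second printed value is near zero), which is exactly the insensitivity to additive Gaussian noise that the abstract relies on.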
19

Srinivas, L. "FIR System Identification Using Higher Order Cumulants -A Generalized Approach." Thesis, Indian Institute of Science, 1994. http://hdl.handle.net/2005/637.

Full text
20

Zhang, Han. "Detecting Rare Haplotype-Environmental Interaction and Nonlinear Effects of Rare Haplotypes using Bayesian LASSO on Quantitative Traits." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu149969433115895.

Full text
21

Dimitry, El Baghdady Johan. "Equilibrium Strategies for Time-Inconsistent Stochastic Optimal Control of Asset Allocation." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-202520.

Full text
Abstract:
We have examined the problem of constructing efficient strategies for continuous-time dynamic asset allocation. In order to obtain efficient investment strategies, a stochastic optimal control approach was applied to find optimal transaction controls. Two mathematical problems are formulated and studied. Model I is a dynamic programming approach that maximizes an isoelastic functional with respect to given underlying portfolio dynamics; Model II is a more sophisticated approach in which a time-inconsistent, state-dependent mean-variance functional is considered. In contrast to the optimal controls for Model I, which are obtained by solving the Hamilton-Jacobi-Bellman (HJB) partial differential equation, the efficient strategies for Model II are constructed by attaining subgame perfect Nash equilibrium controls that satisfy the extended HJB equation introduced by Björk et al. in [1]. Furthermore, comprehensive execution algorithms were designed with the help of the generated results, and several simulations were performed. The results reveal that optimality is obtained for Model I by holding a fixed portfolio balance throughout the whole investment period, while Model II suggests a continuous liquidation of the risky holdings as time evolves. A clear advantage of using Model II is concluded, as it is far more efficient and actually takes time-inconsistency into consideration.
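As an illustration of the Model I setting, the classical time-consistent HJB equation for maximizing an isoelastic (CRRA) utility of terminal wealth under geometric Brownian dynamics reads, in standard notation (this textbook form is an assumption; the thesis's exact formulation may differ):

    dX_t = X_t (r + u_t(\mu - r))\,dt + X_t u_t \sigma\, dW_t,

    V_t + \sup_u \{ (r + u(\mu - r)) x V_x + \tfrac{1}{2} u^2 \sigma^2 x^2 V_{xx} \} = 0, \qquad V(T, x) = x^\gamma / \gamma,

whose maximizer u^* = (\mu - r) / (\sigma^2 (1 - \gamma)) is a constant fraction of wealth, consistent with the fixed portfolio balance reported above for Model I. The extended HJB system of Björk et al. modifies this equation to handle the time-inconsistent mean-variance objective of Model II.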
22

Krémé, Ama Marina. "Modification locale et consistance globale dans le plan temps-fréquence." Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0340.

Full text
Abstract:
Nowadays it has become easy to edit images, for example to blur an area, or to change it to hide or add an object, a person, etc. Image editing is one of the basic tools of most image processing software. In the context of audio signals, it is often more natural to perform such edits in a transformed domain, in particular the time-frequency domain. Again, this is a fairly common practice, but it is not necessarily based on sound theoretical arguments. Application cases include the restoration of regions of the time-frequency plane where information has been lost (e.g. phase information), the reconstruction of a signal degraded by an additive perturbation that is well localized in the time-frequency plane, or the separation of signals localized in different regions of the time-frequency plane. In this thesis, we propose and develop theoretical and algorithmic methods to solve these problems. We first formulate the problem as a missing data reconstruction problem in which the missing data are only the phases of the time-frequency coefficients. We formulate it mathematically and propose three methods to solve it. Secondly, we propose an approach that consists in attenuating a source of degradation under the assumption that it is localized in a specific region of the time-frequency plane. We consider the case where the signal of interest is perturbed by an additive signal whose energy is more widely spread in the time-frequency plane. We formulate this as an optimization problem designed to attenuate the perturbation with precise control of the level of attenuation. We obtain the exact solution of the problem, which involves operators called Gabor multipliers.
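A minimal sketch of the kind of diagonal Gabor multiplier the abstract refers to: analyse the signal with the STFT, multiply the coefficients by a time-frequency mask that attenuates a chosen region, and resynthesize. The mask and the attenuation level are illustrative:

    import numpy as np
    from scipy.signal import stft, istft

    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)

    f, frames, Z = stft(x, fs=fs, nperseg=256)       # analysis
    mask = np.ones_like(Z, dtype=float)
    mask[(f > 1200) & (f < 1800), :] = 0.1           # attenuate the 1.5 kHz band by 20 dB
    _, x_out = istft(Z * mask, fs=fs, nperseg=256)   # synthesis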
23

Jordánová, Ivana. "Hodnocení tepové frekvence a saturace krve kyslíkem pomocí chytrého telefonu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-378033.

Full text
24

Jen, Shi-chi, and 簡士期. "On Variant Population Size Genetic Algorithm." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/59806065827423348769.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Control Engineering.
Genetic algorithms have been in development for three decades. A genetic algorithm is a search algorithm based on the survival-of-the-fittest Darwinian principle of natural processes, and it can be used as an optimization tool for various problems. It has proved to be particularly effective in searching through poorly understood and irregular spaces. The size of the population can be critical in many applications of genetic algorithms: if the population is too small, the genetic algorithm may converge too quickly; if it is too large, the waiting time for an improvement may be too long. In this thesis, a variant population size genetic algorithm (VPSGA) is proposed that maintains a varying population size during the search process. The population size self-tunes in a reasonable way according to the characteristics of the population. Simulation results are included to demonstrate the advantage of the proposed method.
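A minimal sketch of a genetic algorithm whose population size self-tunes; the specific resizing rule below (grow with fitness spread, shrink near convergence) is an illustrative assumption, not the VPSGA's actual rule:

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(pop):
        return -(pop - 3.0) ** 2        # maximize a simple 1-D function, peak at 3

    pop = rng.uniform(-10, 10, size=30)
    for gen in range(100):
        # Selection: keep the better half, refill with mutated copies (no crossover).
        parents = pop[np.argsort(fitness(pop))[len(pop) // 2:]]
        children = parents + rng.normal(scale=0.5, size=parents.size)
        pop = np.concatenate([parents, children])
        # Variable population size: resize according to the fitness spread.
        target = int(np.clip(10 + 4 * np.std(fitness(pop)), 10, 60))
        pop = pop[np.argsort(fitness(pop))[-target:]]

    print("best individual:", pop[np.argmax(fitness(pop))])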
25

Hsia-Ching, Chang, and 張夏青. "Scheduling Algorithm for Time-Variant Task System." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/09190477098629933574.

Full text
Abstract:
Master's thesis, Providence University, Graduate Institute of Management Science.
In scheduling, the TDET (Time-Dependent Execution Time) model is a new one. In the past, the execution time of a task was fixed, but in the real world there are applications in which the execution time of a task depends on its starting time; one of them is the voyage computation of an antimissile missile. This phenomenon motivates our study of the TDET task model, to which few previous papers relate. In this thesis, we consider the problem of nonpreemptively scheduling a set of tasks in the TDET model so as to minimize the number of late tasks. We show that this problem is NP-complete, even for a task system with two distinct deadlines and identical release times. Motivated by this complexity, we give an O(n^2)-time algorithm for task systems with identical release times and identical deadlines. Unfortunately, we can only show the optimality of our algorithm under the condition that either all the a-values or all the b-values are identical. For the case of arbitrary a-values and b-values, we have done some simulations on task systems generated by a random number generator.
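A minimal sketch of the TDET model itself, assuming the execution time shrinks linearly with the start time, p_i(s) = max(a_i - b_i*s, p_min); this just simulates one task order and counts late tasks, and does not reproduce the thesis's optimal O(n^2) algorithm:

    def count_late(tasks, deadline, p_min=0.1):
        """tasks: (a, b) pairs, all released at time 0 with a common deadline."""
        time, late = 0.0, 0
        for a, b in tasks:
            time += max(a - b * time, p_min)   # time-dependent execution time
            if time > deadline:
                late += 1
        return late

    tasks = [(5.0, 0.3), (4.0, 0.1), (6.0, 0.5), (3.0, 0.2)]
    print(count_late(sorted(tasks), deadline=8.0))   # 2 tasks finish after the deadline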
26

Hofacker, Ivo L., and Peter F. Stadler. "The Partition Function Variant of Sankoff's Algorithm." 2004. https://ul.qucosa.de/id/qucosa%3A32982.

Full text
Abstract:
Many classes of functional RNA molecules are characterized by highly conserved secondary structures but little detectable sequence similarity. Reliable multiple alignments can therefore be constructed only when the shared structural features are taken into account. Sankoff's algorithm can be used to construct such structure-based alignments of RNA sequences in polynomial time. Here we extend the approach to a probabilistic one by explicitly computing the partition function of all pairwise aligned sequences with a common set of base pairs. Stochastic backtracking can then be used to compute, e.g., the probability that a prescribed sequence-structure pattern is conserved between two RNA sequences. The reliability of the alignment itself can be assessed in terms of the probabilities of each possible match.
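In the abstract's terms, the probabilistic extension can be summarized as (the notation is ours, sketched from the abstract):

    Z = \sum_{(A, S)} \exp(-E(A, S) / RT), \qquad \Pr[\pi] = Z_\pi / Z,

where the sum runs over all pairwise alignments A equipped with a common set of base pairs S, E is the Sankoff-style score, and Z_\pi is the same sum restricted to states containing a prescribed sequence-structure pattern \pi; stochastic backtracking samples from this Boltzmann distribution.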
27

Chen, Yen Hung, and 陳彥宏. "Approximation Algorithms for Some Variant Steiner Tree Problems." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/52794120893057392096.

Full text
Abstract:
PhD dissertation, National Tsing Hua University, Department of Computer Science.
In this dissertation, we study approximation algorithms for several variants of the Steiner tree problem. The classical Steiner tree problem asks for a shortest acyclic network interconnecting a given subset of the vertices (the terminals). Steiner trees are important in various applications such as multicast routing, evolutionary tree reconstruction in biology, and VLSI routing. The first problem is the Steiner consensus string problem, one application of which is reconstructing evolutionary trees in biology. Given a finite set W of n strings (sequences) and an evolutionary tree structure T with a root vertex and n leaves, each of which is labeled with a unique string of W, a Steiner consensus string is a string labeling the root that minimizes some distance function. The median string problem is to find a Steiner consensus string that minimizes the sum of its Levenshtein distances to the strings of W; the centre string problem is to find a Steiner consensus string that minimizes the maximum of the Levenshtein distances to the strings of W. In this dissertation, we present a (2-2/n)-approximation algorithm for the median string problem and a 2-approximation algorithm for the centre string problem. In the second part of the dissertation, we discuss the full Steiner tree problem and the bottleneck full Steiner tree problem, whose applications include evolutionary tree reconstruction, VLSI routing and telecommunications. Given a graph G=(V,E) with nonnegative edge lengths and a subset R of V, a full Steiner tree is a Steiner tree in G with all the vertices of R as its leaves. The full Steiner tree problem (FSTP) is to find a full Steiner tree in G with minimum length, and the bottleneck full Steiner tree problem (BFSTP) is to find a full Steiner tree T in G such that the length of the largest edge in T is minimized. We present an approximation algorithm with performance ratio 2p for the FSTP, where p is the best known performance ratio for the Steiner tree problem, and we give an exact O(|E| log |E|)-time algorithm for the BFSTP. Finally, motivated by multicasting and broadcasting applications, we investigate two k-source shortest-paths Steiner (spanning) tree problems on graphs with k given sources and a set of destinations. Let G=(V,E) be an undirected graph with nonnegative edge lengths, S a set of k specified sources and R a set of destinations; S and R need not be disjoint. The first problem is the k-source maximum vertex shortest paths Steiner (spanning) tree (k-MVST) problem, in which we want to find a Steiner tree T connecting all sources and destinations such that the maximum total distance from any vertex in R to all sources is minimized. The other is the k-source maximum source shortest paths Steiner (spanning) tree (k-MSST) problem, in which the objective function is the maximum total distance from any source to all vertices in R. Both problems are NP-complete even when k=2 and |R|=|V|. In this dissertation, we propose a polynomial-time approximation scheme (PTAS) for the 2-MVST problem. For the 2-MSST problem, we first give a (2+ε)-approximation algorithm for any ε>0, and then present a PTAS for the case where the input graphs are restricted to metric graphs. Finally, we show that there are simple 3-approximation algorithms for both problems with arbitrary k.
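Bottleneck objectives of the BFSTP kind are typically attacked by scanning edges in increasing length order with a union-find and stopping as soon as the terminals are connected, which is where the O(|E| log |E|) bound comes from. A minimal sketch of that standard technique (the full-Steiner constraint that every terminal be a leaf is omitted here):

    def bottleneck_connect(n, edges, R):
        """edges: (length, u, v) tuples; R: terminals. Returns the optimal bottleneck."""
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for w, u, v in sorted(edges):           # O(|E| log |E|) for the sort
            parent[find(u)] = find(v)
            if len({find(r) for r in R}) == 1:  # all terminals now connected
                return w
        return None

    edges = [(1, 0, 1), (5, 1, 2), (2, 0, 3), (4, 3, 2), (9, 0, 2)]
    print(bottleneck_connect(4, edges, R=[0, 2]))   # 4, via the path 0-3-2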
28

Wang, Mei-Jiuan, and 王美娟. "On Search Times of Varient Coalesced Hashing Algorithms." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/17450401246524063769.

Full text
Abstract:
Master's thesis, National Central University, Department of Mathematics.
Williams proposed coalesced hashing in 1959 as a highly efficient search method for tables held in a computer's internal memory. It is characterized by chaining together the occupied cells that hold inserted records, so that later requests can retrieve a record by following its chain; there are several different ways of linking a cell holding a rejected (collided) record into a chain. The first chapter of this thesis reviews the average search times of LICH, EICH and VICH, and the maximum search times of LISCH and EISCH. The main purpose of the thesis is to study the maximum successful search time of early-insertion coalesced hashing (EICH); references are listed in the appendix. In Chapter 2, the method of study follows the derivation, given by Pittel and Yu in 1988, of the exact distribution formula of T_1(i,n) for EISCH. In the analysis of EICH, a cellar is used to improve on the search time of EISCH, and the proofs use the Cauchy integral formula, the residue theorem, generating functions and random functions, together with the notions of uniform distribution, conditional probability and independent events. The main result of this thesis, Theorem 1, gives an exact distribution formula for T_1(i,n). Because the probability associated with each inserted record is affected by its random address and by whether the cellar is full, the exact and asymptotic formulas for EICH are very complicated and are not as easy to derive as those for EISCH. Finally, simplifying equation (14), and studying the behaviour of T_1(i,n) as n and m tend to infinity, remain open questions for discussion.
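A minimal sketch of early-insertion coalesced hashing in its standard cellar-free form (EISCH): on a collision the new key is stored in a free cell and spliced into the chain immediately after its hash slot, rather than at the end of the chain as in LISCH. The cellar used in the EICH analysis above is omitted:

    class EISCH:
        def __init__(self, m):
            self.key = [None] * m
            self.link = [None] * m
            self.free = m - 1                         # scan for free cells from the top

        def insert(self, k):
            h = hash(k) % len(self.key)
            if self.key[h] is None:
                self.key[h] = k
                return
            while self.key[self.free] is not None:    # find an empty cell
                self.free -= 1
            self.key[self.free] = k
            # Early insertion: splice the new cell in right after the hash slot.
            self.link[self.free] = self.link[h]
            self.link[h] = self.free

        def search(self, k):
            i, probes = hash(k) % len(self.key), 0
            while i is not None:
                probes += 1
                if self.key[i] == k:
                    return probes                     # number of cells inspected
                i = self.link[i]
            return None

    t = EISCH(11)
    for k in ("ape", "bee", "cat", "dog", "elk"):
        t.insert(k)
    print([t.search(k) for k in ("ape", "bee", "cat", "dog", "elk")])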
29

Chia-Wei, Chang, and 張家瑋. "Adaptive rate control algorithm for variant bandwidth in IEEE 802.11e." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/82593434674122224222.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Electrical Engineering.
Current IEEE 802.11a/b/g wireless systems are all based on contention-based carrier sense multiple access with collision avoidance (CSMA/CA). Because they cannot fully guarantee quality of service (QoS), the IEEE developed the 802.11e standard to provide QoS. IEEE 802.11e specifies separate medium access control (MAC) layer designs for the point coordination function (PCF) and the distributed coordination function (DCF). However, in a wireless environment the transmission rate and bandwidth of each node can change at any time, so there are many difficulties in actually delivering QoS guarantees. This thesis aims to strengthen QoS guarantees under the variable-bandwidth characteristics of the wireless environment. It proposes an adaptive rate control algorithm (ARCA) to improve the throughput, collision rate and packet drop rate in IEEE 802.11e EDCA-mode multi-rate infrastructure wireless networks, making more efficient use of wireless LAN resources. ARCA uses a bandwidth factor and a priority table to dynamically adjust the EDCA parameters: the arbitration inter-frame space (AIFS), the contention window (CW) and the persistence factor (PF). These parameters are adjusted by ARCA according to the rise or fall of the physical-layer data rate, improving the efficiency of the whole system and preventing high-data-rate 802.11e stations (STAs) from being penalized by lower-data-rate STAs when the channel status changes. Simulations comparing ARCA with the 802.11e standard show that it offers better throughput than 802.11e.
30

Li, Yanbo. "Models and algorithms for haplotype phasing and variant calling." Phd thesis, 2022. http://hdl.handle.net/1885/278014.

Full text
Abstract:
Haplotypes have been increasingly used in genetic studies. Analysis of variations among haplotypes has many applications in population genetics and biomedical research. However, the current DNA sequencing technologies can only read short fragments (called reads) randomly drawn from complete haplotypes, and reads from the two haplotypes are mixed together for diploid species like humans. Therefore, recovering the complete haplotype sequences becomes a fundamental problem in computational genomics, which requires calling variants between two haplotypes from reads. With the emerging third-generation sequencing technologies, existing reference-based approaches for haplotype phasing suffer from performance issues when handling long and error-prone reads. At the same time, reference-free methods for variant calling and haplotype phasing meet challenges when applied to large and complex genomes. This thesis addresses a number of critical challenges in haplotype phasing and variant calling in computational genomics. Firstly, we introduce DCHap, a fast reference-based haplotype phasing algorithm that applies a divide-and-conquer strategy, improving both scalability and accuracy when handling third-generation sequencing data. Secondly, we propose an algorithm, Kmer2SNP, for reference-free variant calling using graph matching; the introduction of the heterozygous k-mer graph helps to recover SNPs between two haplotypes through maximum weight matching. Thirdly, we describe Kmer2Haplotype, a pipeline for reference-free haplotype phasing that incorporates both short and long reads from the same individual; the problem of haplotype phasing is modeled as finding the minimum bi-partition of the haplotype-specific k-mer graph. We further design and conduct extensive experiments, and the benchmarking results show the effectiveness and efficiency of the proposed approaches compared to state-of-the-art baselines. We conclude the thesis by discussing future research directions.
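A minimal sketch of the reference-free intuition behind Kmer2SNP: count k-mers in the reads, then pair k-mers that differ only at their middle base as candidate heterozygous SNP sites. Coverage filtering and the maximum weight matching itself are omitted, and the toy reads are made up:

    from collections import Counter
    from itertools import combinations

    def kmer_counts(reads, k):
        c = Counter()
        for r in reads:
            for i in range(len(r) - k + 1):
                c[r[i:i + k]] += 1
        return c

    def middle_snp_pairs(counts, k):
        mid = k // 2
        return [(a, b) for a, b in combinations(counts, 2)
                if a[:mid] == b[:mid] and a[mid + 1:] == b[mid + 1:] and a[mid] != b[mid]]

    reads = ["ACGTTGCA", "ACGATGCA", "CGTTGCAT", "CGATGCAT"]   # two haplotypes, one SNP
    print(middle_snp_pairs(kmer_counts(reads, k=5), k=5))      # [('CGTTG', 'CGATG')]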
31

TURNER, KATHRYN. "A VARIABLE-METRIC VARIANT OF THE KARMARKAR ALGORITHM FOR LINEAR PROGRAMMING." Thesis, 1987. http://hdl.handle.net/1911/16112.

Full text
Abstract:
The most time-consuming part of the Karmarkar algorithm for linear programming is the computation of the step direction, which requires the projection of a vector onto the nullspace of a matrix that changes at each iteration. We present a variant of the Karmarkar algorithm that uses standard variable-metric techniques in an innovative way to approximate this projection. We prove that the modified algorithm constructed using a step direction obtained from this approximation retains the polynomial-time complexity of the Karmarkar algorithm. We extend the applicability of the modified algorithm to the solution of linear programming problems with unknown optimal value, using a construction of monotonic lower bounds on the optimal objective value that approximates the lower bound construction of Todd and Burrell. We show that our modified algorithm for solving problems with unknown optimal value also retains the polynomial-time complexity of the Karmarkar algorithm. Computational testing has verified that our modification substantially reduces the number of matrix factorizations needed for the solution of linear programming problems, compared to the number required by the Karmarkar algorithm.
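The projection in question has the closed form p = v - A^T (A A^T)^{-1} A v. A minimal sketch; the variable-metric variant studied in the thesis replaces the exact factorization below with cheaper quasi-Newton approximations:

    import numpy as np

    def nullspace_projection(A, v):
        w = np.linalg.solve(A @ A.T, A @ v)   # one factorization per call
        return v - A.T @ w

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 7))
    p = nullspace_projection(A, rng.normal(size=7))
    print(np.allclose(A @ p, 0))              # True: p lies in the nullspace of A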
APA, Harvard, Vancouver, ISO, and other styles
32

Chen, Tseng-Yi, and 陳增益. "GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/55501608598556211437.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Institute of Information Systems and Applications, 2009. Data in the information world comes in ever more formats, and many algorithms and techniques, such as data mining and data analysis, exist to present the relationships within data. Real-world data, however, usually has a high-dimensional structure, which makes these relation-presentation techniques hard to apply and their resulting graphs hard for users to grasp, so a technique for reducing data dimensionality is needed. Many dimension reduction algorithms exist, such as PCA, MDS, Isomap, and LLE, and many papers discuss how to reduce dimensionality accurately or how to modify these algorithms to increase their precision; few papers, however, address their efficiency or how to speed them up. This thesis increases the computation speed of a dimension reduction algorithm by parallelizing it on a different computation platform. We chose Locally Linear Embedding (LLE) as the target, first because real-world data sets tend to form nonlinear manifolds and LLE is a nonlinear dimension reduction algorithm, and second because LLE is highly parallelizable, so our goal is to improve LLE through parallel computation. GPU computing has recently become popular for its powerful floating-point performance and high degree of parallelism, so we execute our parallel LLE on a GPU architecture. We port only the KNN search and the large sparse eigenproblem solution (LSES) to the GPU, because these two functions carry the heaviest computational load. The ported stages deliver good performance: the parallel GPU KNN search achieves a 40x-50x speedup, and the LSES a 10x speedup.
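As a sketch of the first hotspot, the brute-force k-nearest-neighbor search below builds a full pairwise distance matrix, exactly the kind of data-parallel workload that maps well to a GPU; the plain-NumPy version stands in for a CUDA kernel, and the data sizes are assumptions:

    # Brute-force KNN, the stage that dominates LLE's runtime; every entry of
    # the distance matrix is independent, hence the large GPU speedup.
    import numpy as np

    def knn_indices(X, k):
        # Pairwise squared distances: ||xi||^2 + ||xj||^2 - 2 xi.xj
        sq = (X ** 2).sum(axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        np.fill_diagonal(d2, np.inf)            # exclude each point itself
        return np.argsort(d2, axis=1)[:, :k]    # indices of the k nearest points

    X = np.random.rand(1000, 50)                # 1000 points in 50 dimensions
    print(knn_indices(X, k=12).shape)           # (1000, 12)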
APA, Harvard, Vancouver, ISO, and other styles
33

Tsan-FuHuang and 黃粲富. "Generating NFOV Video from 360° Video Based on Variant RRT Algorithm." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2m2wc9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Bo-Han, and 陳柏翰. "A Study of White Balance Algorithms under Varied Brightness Conditions." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/34096248495373611035.

Full text
Abstract:
Master's thesis, Northern Taiwan Institute of Science and Technology, Graduate Institute of Mechatronic Integration, 2007. Because digital cameras are equipped with useful functions such as auto focus, auto exposure, and auto white balance, their ease of operation compared with traditional cameras has won over countless consumers. These three functions on a DSC (digital still camera) have a tremendous impact on the quality of captured images. This study focuses on the auto white balance algorithms used in digital cameras. Auto white balance algorithms are usually compared on the test images provided by SFU. In that image set, brightness control prevents the three primary color channels from saturating; as a result, the test images are darker than, and substantially different from, the images processed by DSCs. In this thesis, brightness adjustments are made to images in the SFU test image database to investigate how the performance of various auto white balance algorithms varies under different brightness conditions, along with their pros and cons. Results show that the performance of white balance algorithms deteriorates as the brightness of the test images increases, and different white balancing methods deteriorate to different extents. It is therefore apparent that this set of test images alone is insufficient for judging the performance of white balance algorithms. Improved white balance algorithms are also proposed in this dissertation. One method improves the standard-deviation-weighted gray world method by adding a spatial-domain mean filter and using a different weighting scheme. Another method combines the results of two white balance algorithms via a simple step to estimate the light source. Both methods produce lower errors than previous white balance algorithms when tested on the brightness-adjusted SFU image sets.
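For reference, the gray world baseline that the standard-deviation-weighted method refines can be sketched in a few lines; the RGB layout and [0, 1] value range are assumptions:

    # Minimal gray-world white balance: scale each channel so all three
    # channel means become equal (the thesis improves on this baseline).
    import numpy as np

    def gray_world(img):
        # img: H x W x 3 float array in [0, 1]
        means = img.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / means
        return np.clip(img * gains, 0.0, 1.0)

    img = np.random.rand(480, 640, 3)
    balanced = gray_world(img)
    print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now roughly equal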
APA, Harvard, Vancouver, ISO, and other styles
35

Mancuso, Nicholas. "Algorithms for Viral Population Analysis." 2014. http://scholarworks.gsu.edu/cs_diss/85.

Full text
Abstract:
The genetic structure of an intra-host viral population has an effect on many clinically important phenotypic traits such as escape from vaccine-induced immunity, virulence, and response to antiviral therapies. Next-generation sequencing provides read coverage sufficient for genomic reconstruction of a heterogeneous, yet highly similar, viral population, and more specifically, for the detection of rare variants. Admittedly, while depth is less of an issue for modern sequencers, the short length of generated reads complicates viral population assembly. This task is worsened by the presence of both random and systematic sequencing errors in huge amounts of data. In this dissertation I present completed work for reconstructing a viral population given next-generation sequencing data. Several algorithms are described for solving this problem under the error-free amplicon (or sliding-window) model. In order for these methods to handle actual real-world data, an error-correction method is proposed. A formal derivation of its likelihood model, along with the optimization steps of an EM algorithm, is presented. Although these methods perform well, they cannot take into account paired-end sequencing data. To address this, a new method is detailed that works in the error-free paired-end case along with maximum a posteriori estimation of the model parameters.
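To make the EM idea concrete, here is a toy sketch in the spirit of the error-correction step: reads receive soft assignments to candidate haplotypes under a per-base error rate, and the haplotypes and error rate are then re-estimated. The reads, initial haplotypes, and likelihood model are illustrative assumptions, not the dissertation's actual derivation:

    # Toy EM: soft-assign reads to haplotypes, rebuild haplotypes by weighted
    # majority vote, and update the per-base error rate.
    import numpy as np

    reads = ["ACGT", "ACGA", "TCGA", "TCGT"]   # hypothetical error-prone reads
    haps = ["ACGT", "TCGA"]                    # initial haplotype guesses
    eps = 0.05                                 # initial per-base error rate
    L = len(reads[0])

    def mismatches(a, b):
        return sum(x != y for x, y in zip(a, b))

    for _ in range(10):
        # E-step: P(read | hap) = (1-eps)^matches * eps^mismatches, normalized.
        lik = np.array([[(1 - eps) ** (L - mismatches(r, h)) * eps ** mismatches(r, h)
                         for h in haps] for r in reads])
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted per-position majority vote rebuilds each haplotype.
        new_haps = []
        for k in range(len(haps)):
            cols = []
            for pos in range(L):
                votes = {}
                for w, r in zip(resp[:, k], reads):
                    votes[r[pos]] = votes.get(r[pos], 0.0) + w
                cols.append(max(votes, key=votes.get))
            new_haps.append("".join(cols))
        haps = new_haps
        # Update the error rate from the expected number of mismatches.
        exp_mis = sum(resp[i, k] * mismatches(reads[i], haps[k])
                      for i in range(len(reads)) for k in range(len(haps)))
        eps = max(exp_mis / (len(reads) * L), 1e-6)

    print(haps, round(eps, 4))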
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Jyun-Sian, and 李俊賢. "New Algorithms for Robust Parameter Identification and Time-Variant Parameter Identification." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/67780492563357408574.

Full text
Abstract:
PhD dissertation, National Taiwan University, Graduate Institute of Mechanical Engineering, 2011. This thesis discusses two continuous-time parameter identification problems expressed in linear regression form: time-invariant parameter identification subject to non-stochastic disturbances, termed robust identification, and time-variant parameter identification. In addition to stochastic measurement noise, the output signal of a system is usually contaminated by non-stochastic disturbances resulting from measurement-device errors, unmodeled system dynamics, or process disturbances acting on the system. Most identification methods that treat the disturbance as white noise yield biased estimates under such disturbances. In the parameterization, all disturbances can be lumped into one disturbance term at the output of the linear regression form. We propose one off-line approach and two on-line approaches to this problem. In the off-line approach, the unknown disturbance is approximated by a finite Fourier cosine series with unknown coefficients; the unknown coefficients and the known basis functions are appended to the original parameter vector and the regressor, respectively. With the expanded regressor, the expanded parameter vector is estimated by batch least squares, and a necessary condition for persistent excitation of the expanded regressor is given. In the first on-line approach, the estimation scheme is built on the gradient algorithm, and the effect of the disturbance on the estimation-error dynamics is rejected by designing a stabilizing controller. The averaging method is used for system approximation, and the $H_\infty$ frequency-shaping methodology is used to synthesize the controller; the control signal tracks the disturbance and cancels it in the estimation-error dynamics, which guarantees convergence of the parameter estimates. In the second on-line approach, a state-observer-based estimator is constructed. To include estimation of the disturbance in the scheme, the plant is augmented with the model of a proposed disturbance-generating filter, also termed a dynamics-extension filter, and a Kalman filter performs the state estimation. Compared with the conventional internal-model approach, the proposed method applies to a more general class of disturbances. All three approaches identify the parameters and the disturbance simultaneously. With some modifications, the design procedures of the two on-line approaches carry over to the time-variant parameter identification problem; the necessary special considerations are addressed in the text. Keywords: robust identification, time-variant parameter identification, disturbance identification, Kalman filter.
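A minimal sketch of the off-line approach follows: the disturbance is approximated by a finite Fourier cosine series, the basis functions are appended to the regressor, and the expanded parameter vector is recovered by batch least squares. The particular system, disturbance, and basis size are assumptions:

    # Off-line robust identification sketch: augment the regressor with a
    # cosine basis so the batch least-squares fit absorbs the disturbance.
    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 400, 4.0
    t = np.linspace(0.0, T, N)
    phi = np.column_stack([np.sin(2 * t), np.cos(3 * t)])    # original regressor
    theta_true = np.array([2.0, -1.0])
    d = 0.5 * np.cos(2 * np.pi * t / T) - 0.2                # non-stochastic disturbance
    y = phi @ theta_true + d + 0.01 * rng.standard_normal(N)

    # Append M cosine basis functions of the disturbance to the regressor.
    M = 6
    basis = np.column_stack([np.cos(np.pi * m * t / T) for m in range(M)])
    phi_aug = np.hstack([phi, basis])

    # Batch least squares on the expanded parameter vector.
    est, *_ = np.linalg.lstsq(phi_aug, y, rcond=None)
    print("theta estimate:", est[:2])   # close to [2, -1]; disturbance absorbed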
APA, Harvard, Vancouver, ISO, and other styles
37

Teng, Chiao-Lien, and 鄧喬濂. "A Variant of SAFER Cipher with One Algorithm for Both Encryption and Decryption." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/22205119885591313320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Janson, Stefan, and Martin Middendorf. "A Hierarchical Particle Swarm Optimizer and Its Adaptive Variant." 2005. https://ul.qucosa.de/id/qucosa%3A33064.

Full text
Abstract:
A hierarchical version of the particle swarm optimization (PSO) metaheuristic is introduced in this paper. In the new method, called H-PSO, the particles are arranged in a dynamic hierarchy that is used to define a neighborhood structure. Depending on the quality of their so-far best-found solution, the particles move up or down the hierarchy. This gives good particles that move up in the hierarchy a larger influence on the swarm. We introduce a variant of H-PSO in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm. Another variant assigns different behavior to the individual particles with respect to their level in the hierarchy. H-PSO and its variants are tested on a commonly used set of optimization functions and are compared to PSO using different standard neighborhood schemes.
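A minimal H-PSO-style sketch is given below: particles sit in a static binary-tree hierarchy, swap places with their parent whenever their personal best is better, and are attracted to their parent's personal best. The tree shape, coefficients, and test function are assumptions, and the adaptive and level-dependent variants from the paper are not modeled:

    # H-PSO-style particle swarm: good particles rise in a binary tree and
    # thereby gain influence over their children.
    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(2)
    n, dim, iters = 15, 5, 200
    w, c1, c2 = 0.72, 1.49, 1.49              # inertia and acceleration weights
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pval = pos.copy(), np.array([sphere(p) for p in pos])
    order = list(range(n))                    # order[slot] = particle in that tree slot

    for _ in range(iters):
        # Swap a particle into its parent's slot if its personal best is better.
        for slot in range(1, n):
            parent = (slot - 1) // 2
            if pval[order[slot]] < pval[order[parent]]:
                order[slot], order[parent] = order[parent], order[slot]
        for slot in range(n):
            i = order[slot]
            leader = order[(slot - 1) // 2] if slot > 0 else i   # root follows itself
            r1, r2 = rng.random(dim), rng.random(dim)
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (pbest[leader] - pos[i]))
            pos[i] += vel[i]
            f = sphere(pos[i])
            if f < pval[i]:
                pval[i], pbest[i] = f, pos[i].copy()

    print("best value:", pval.min())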
APA, Harvard, Vancouver, ISO, and other styles
39

Cancellieri, Samuele. "Personal genome editing algorithms to identify increased variant-induced off-target potential." Doctoral thesis, 2022. http://hdl.handle.net/11562/1058995.

Full text
Abstract:
Clustered regularly interspaced short palindromic repeats (CRISPR) technologies allow for facile genomic modification in a site-specific manner. A key step in this process is the in-silico design of single guide RNAs (sgRNAs) to efficiently and specifically target a site of interest. To this end, it is necessary to enumerate all potential off-target sites within a given genome that could be inadvertently altered by nuclease-mediated cleavage. Off-target sites are quasi-complementary regions of the genome to which the specified sgRNA can bind even without a perfectly complementary nucleotide sequence. This problem, known as off-target site enumeration, became prominent after the discovery of CRISPR technology. Many in-silico solutions have been proposed in recent years, but currently available software for this task is limited in computational efficiency, variant support, genetic annotation, assessment of the functional impact of potential off-target effects at the population and individual level, and the availability of a user-friendly graphical interface usable by non-informaticians without programming knowledge. This thesis addresses all of these topics by proposing two software tools that directly answer the off-target enumeration problem and perform all the related analyses. In detail, the thesis proposes CRISPRitz, a tool designed to perform fast and exhaustive searches on reference and alternative genomes, enumerating all possible off-targets for a user-defined set of sgRNAs with specified thresholds on mismatches (non-complementary base pairs in RNA-DNA binding) and bulges (bubbles that alter the physical structure of the RNA and DNA, limiting binding activity). The thesis also proposes CRISPRme, a tool built on CRISPRitz that answers the requests of professionals and technicians for a comprehensive and easy-to-use interface for off-target enumeration, analysis, and assessment, with graphical reports, a graphical interface, and the capability to perform real-time queries on the resulting data to extract desired targets, with a focus on individual and personalized genome analysis.
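A naive sketch of off-target enumeration follows: slide a 20-nt spacer along a toy genome and report forward-strand sites within a mismatch budget that sit next to an NGG PAM. The sequences and thresholds are illustrative; CRISPRitz itself uses indexed search and also handles bulges and genetic variants:

    # Naive forward-strand off-target scan with an NGG PAM and a mismatch budget.
    genome = "ACGT" * 500 + "GACGTACGTACGTACGTACGTGG" + "ACGT" * 500  # toy genome
    guide = "GACGTACGTACGTACGTACG"     # hypothetical 20-nt spacer
    max_mm = 3

    def mismatches(a, b):
        return sum(x != y for x, y in zip(a, b))

    for i in range(len(genome) - len(guide) - 2):
        site = genome[i:i + len(guide)]
        pam = genome[i + len(guide) + 1:i + len(guide) + 3]  # the GG of NGG
        if pam == "GG":
            mm = mismatches(guide, site)
            if mm <= max_mm:
                print(f"site at {i} with {mm} mismatches")   # finds the planted site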
APA, Harvard, Vancouver, ISO, and other styles
40

Chuan-Yu, Cho. "Adaptive Motion Estimation Algorithm for Varied Motion Contents and VLSI Motion Estimation Architecture Design for H.264/AVC." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-0109200613403225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Cho, Chuan-Yu, and 卓傳育. "Adaptive Motion Estimation Algorithm for Varied Motion Contents and VLSI Motion Estimation Architecture Design for H.264/AVC." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/03014446886289901505.

Full text
Abstract:
PhD dissertation, National Tsing Hua University, Department of Computer Science, 2005. Motion estimation (ME) plays an important role in H.264 not only because it has extremely high computational complexity, but also because it affects the subsequent block coding modes as well as the final coded bit-stream size. Studying advanced ME algorithms is therefore one of the most effective ways to improve the coding efficiency of a video codec. In this thesis, we present two ME schemes, with software- and hardware-based implementations respectively. The software-based scheme begins with a study of fast block-matching algorithms (FBMAs), whose efficiencies are compared in terms of algorithm checking points. A priority list is introduced to help classify the motion-content types of real-world sequences. Based on statistical analyses of the proposed priority list and FBMAs, we propose a motion-content-adaptive FBMA that adaptively switches its search strategy among three different FBMAs to maximize coding efficiency under motion-content variations. The H.264/AVC video coding standard adopts variable block size (VBS) partitions and multiple reference frames (MRF), which make the motion-compensation stage extremely complicated. To save intermediate memory and maximize hardware utilization, we propose an embedded merging scheme with a pipeline-based MRF extension. With this embedded design, only one copy of intermediate memory is required, and full utilization is achieved once the pipeline stages are filled.
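The kernel that every FBMA shares is the block-matching search itself. The sketch below shows the exhaustive full-search version, which minimizes the sum of absolute differences (SAD) over a search window; fast FBMAs visit far fewer checking points. Block size, search range, and the test frames are assumptions:

    # Full-search block matching: find the motion vector minimizing SAD.
    import numpy as np

    def best_mv(cur, ref, bx, by, bsize=16, srange=8):
        block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
        best, best_sad = (0, 0), np.inf
        for dy in range(-srange, srange + 1):
            for dx in range(-srange, srange + 1):
                y, x = by + dy, bx + dx
                if 0 <= y and y + bsize <= ref.shape[0] and 0 <= x and x + bsize <= ref.shape[1]:
                    cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
                    sad = np.abs(block - cand).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dx, dy)
        return best, best_sad

    cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    ref = np.roll(cur, (2, -3), axis=(0, 1))   # reference shifted by a known amount
    print(best_mv(cur, ref, bx=16, by=16))     # recovers (dx, dy) = (-3, 2), SAD 0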
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Yi-Ting, and 王顗婷. "On the Application of Factor Graph and Sum-Product Algorithm for Symbol Detection of OFDM Systems in Time-variant Channels." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/92590377949695248141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chang, Shih-chi, and 張世奇. "Blind Adaptive DS-CDMA Receivers with Sliding Window Constant Modulus GSC-RLS Algorithm Based on Min/Max Criterion for Time-Variant Channels." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/88347590504212626430.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Electrical Engineering, 2005. The code division multiple access (CDMA) system implemented by the direct-sequence (DS) spread spectrum (SS) technique is one of the most promising multiplexing technologies for wireless communication services. SS communication transmits the information over a much wider bandwidth than strictly necessary. In a DS-CDMA system, the inherent structural interference, referred to as multiple access interference (MAI), can degrade system performance. Over frequency-selective fading channels, inter-symbol interference (ISI) also arises, so a multiuser RAKE receiver must be employed to combat both ISI and MAI. In practical wireless environments, several communication systems may operate in the same area at the same time. In this thesis, we consider a DS-CDMA environment in which asynchronous narrowband interference (NBI) from other systems suddenly joins the CDMA system. When a system with adaptive detectors is operating in a stable state, a suddenly joined NBI signal can cause its performance to collapse, and conventional adaptive RAKE detectors may be unable to track the rapidly changing NBI in the presence of ISI and MAI. Adaptive filtering algorithms based on the sliding window linearly constrained recursive least squares (SW LC-RLS) are known to be very attractive in such rapidly changing environments. The main contribution of this thesis is a novel sliding window constant modulus RLS (SW CM-RLS) algorithm, based on the min/max criterion, for dealing with NBI in DS-CDMA systems over multipath channels. For simplicity and lower system complexity, the generalized sidelobe canceller (GSC) structure is employed; the resulting algorithm is referred to as SW CM-GSC-RLS. The SW CM-GSC-RLS algorithm alleviates the effect of NBI; it converges faster and tracks better, and achieves the desired performance even when the NBI joins the system suddenly under channel mismatch. Finally, we extend the proposed algorithm to a space-time DS-CDMA RAKE receiver, in which an adaptive beamformer is combined with the temporal-domain DS-CDMA receiver. Computer simulation results show that the new schemes outperform the conventional CM GSC-RLS algorithm as well as the GSC-RLS algorithm (the so-called LCMV approach) in terms of the mean square error of the channel impulse response estimate, the output signal-to-interference-plus-noise ratio, and the bit error rate.
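As a rough sketch of the sliding-window idea (without the constant-modulus or GSC constraints of the proposed receiver), a generic sliding-window RLS adds the newest sample and removes the sample leaving the window via a Sherman-Morrison update and downdate; all signals and sizes below are assumptions:

    # Generic sliding-window RLS: a fixed-length window gives the fast
    # tracking that motivates the SW algorithms in this thesis.
    import numpy as np

    rng = np.random.default_rng(3)
    n, L = 4, 50                       # parameter dimension, window length
    w_true = rng.standard_normal(n)
    P = 1e3 * np.eye(n)                # inverse correlation matrix
    w = np.zeros(n)
    buf = []                           # samples currently inside the window

    def rank1(P, w, x, d, s):
        # s = +1 folds sample (x, d) into the window, s = -1 removes it
        # (Sherman-Morrison update/downdate of the inverse correlation matrix).
        Px = P @ x
        denom = 1.0 + s * (x @ Px)
        k = s * Px / denom
        P_new = P - s * np.outer(Px, Px) / denom
        w_new = w + k * (d - w @ x)
        return P_new, w_new

    for t in range(300):
        x = rng.standard_normal(n)
        d = w_true @ x + 0.01 * rng.standard_normal()
        P, w = rank1(P, w, x, d, +1.0)
        buf.append((x, d))
        if len(buf) > L:
            x_old, d_old = buf.pop(0)
            P, w = rank1(P, w, x_old, d_old, -1.0)

    print(np.round(w - w_true, 3))     # near zero: the window tracks w_true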
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, G., Marian Gheorghe, L. Q. Pan, and M. J. Perez-Jimenez. "Evolutionary membrane computing: A comprehensive survey and new results." 2014. http://hdl.handle.net/10454/10830.

Full text
Abstract:
Evolutionary membrane computing is an important research direction of membrane computing that aims to explore the complex interactions between membrane computing and evolutionary computation. These disciplines are receiving increasing attention. In this paper, an overview of the state of the art in evolutionary membrane computing and new results on two established topics within well-defined scopes (membrane-inspired evolutionary algorithms and the automated design of membrane computing models) are presented. We survey their theoretical developments and applications, sketch the differences between them, and compare their advantages and limitations.
APA, Harvard, Vancouver, ISO, and other styles