
Dissertations / Theses on the topic 'Development and application of optimized algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 34 dissertations / theses for your research on the topic 'Development and application of optimized algorithms.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Hampel, Uwe, and Hans-Gerd Maas. "Dynamische Rissdetektion mittels photogrammetrischer Verfahren – Entwicklung und Anwendung optimierter Algorithmen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1244047882026-24052.

Full text
Abstract:
Digital close-range photogrammetry enables the efficient acquisition of three-dimensional object surfaces in experimental investigations. Photogrammetric methods are, in principle, well suited in particular for the areal measurement of deformations and for crack detection, provided that appropriate boundary conditions are observed. Drawing on current investigations of textile-reinforced concrete specimens, this contribution addresses the problem of crack detection and gives an overview of the state of development and the achievable accuracy potential. Finally, with regard to the practical application of the presented methods, various possibilities for optimization are discussed. (Translated from the German original.)
APA, Harvard, Vancouver, ISO, and other styles
2

Johnson, Donald C. "Application of a genetic algorithm to optimize staffing levels in software development." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA293725.

Full text
Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, December 1994.
"December 1994." Thesis advisor(s): B. Ramesh, T. Hamid. Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
3

Elliott, Donald M. "Application of a genetic algorithm to optimize quality assurance in software development." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from the National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA273193.

Full text
Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, September 1993.
Thesis advisor(s): Ramesh, B. ; Abdel-Hamid, Tarek K. "September 1993." Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
4

Wanis, Paul, and John S. Fairbanks. "Analysis of Optimized Design Tradeoffs in Application of Wavelet Algorithms to Video Compression." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605769.

Full text
Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
All video compression schemes introduce artifacts into the compressed video images, and these artifacts cause degradation. The artifacts generated by a wavelet-based compression scheme vary with the compression ratio and the input imagery, but show some consistent patterns across applications. A number of design trade-offs can be made to mitigate the effect of these artifacts. By understanding the artifacts introduced by video compression and being able to anticipate the amount of image degradation, the video compression can be configured in a manner optimal to the telemetry application under consideration.
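The trade-off described above, where higher compression ratios come at the cost of stronger artifacts, can be illustrated with a toy single-level 2D Haar transform. The Haar wavelet is the simplest choice and not necessarily the one used in the paper; the image, threshold and helper names below are invented for the sketch.

```python
# Sketch of the compression-ratio / artifact trade-off: a single-level 2D Haar
# transform on a toy 4x4 "image", with small detail coefficients zeroed out.
# Larger thresholds give higher compression ratios but stronger artifacts.

def haar_1d(v):
    """One level of the (unnormalised) Haar transform: averages then differences."""
    avg = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    dif = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return avg + dif

def ihaar_1d(c):
    half = len(c) // 2
    out = []
    for a, d in zip(c[:half], c[half:]):
        out += [a + d, a - d]          # exact inverse of averages/differences
    return out

def haar_2d(img):
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(c) for c in map(list, zip(*rows))]
    return list(map(list, zip(*cols)))

def ihaar_2d(coeffs):
    cols = [ihaar_1d(c) for c in map(list, zip(*coeffs))]
    return [ihaar_1d(r) for r in map(list, zip(*cols))]

def compress(img, threshold):
    """Zero out small detail coefficients and report the compression ratio
    (total coefficients / coefficients kept)."""
    c = haar_2d(img)
    kept = [[x if abs(x) >= threshold else 0.0 for x in row] for row in c]
    nonzero = sum(1 for row in kept for x in row if x != 0.0)
    return ihaar_2d(kept), len(img) * len(img[0]) / max(nonzero, 1)

img = [[10, 12, 80, 83],
       [11, 10, 81, 80],
       [20, 22, 90, 92],
       [21, 20, 91, 90]]

lossless, r0 = compress(img, threshold=0.0)   # keeps every coefficient
lossy, r1 = compress(img, threshold=6.0)      # drops details -> artifacts
```

With the threshold at 6.0 only the four coarse averages survive, so the ratio rises from 1:1 to 4:1 while each 2x2 block collapses to its mean, a blocky artifact of exactly the kind the paper says must be anticipated.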
APA, Harvard, Vancouver, ISO, and other styles
5

RUEDA, CAMILO VELASCO. "ESNPREDICTOR: TIME SERIES FORECASTING APPLICATION BASED ON ECHO STATE NETWORKS OPTIMIZED BY GENETICS ALGORITHMS AND PARTICLE SWARM OPTIMIZATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=24785@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
Time series forecasting is critical to short-, medium- and long-term decision making in several areas, such as the electricity sector, the stock market, meteorology and industry. A variety of techniques and models exist for producing these forecasts, but statistical tools remain the most widely used, mainly because of their greater interpretability. Computational intelligence techniques, however, are increasingly applied to time series forecasting, most notably Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS). A new type of ANN called Echo State Networks (ESN) was recently created; it differs from classic ANNs in having a hidden layer with random connections, called the Reservoir. The Reservoir is activated by the network inputs and by its own previous states, producing the echo state effect and giving the network more dynamism and better performance on tasks of a temporal nature. One difficulty with these networks is the presence of several parameters, such as the spectral radius, reservoir size and connection percentage, which must be calibrated for the ESN to deliver good results. The main objective of this work is therefore to develop a computational tool capable of performing time series forecasting based on ESNs, with automatic parameter adjustment by Particle Swarm Optimization (PSO) and Genetic Algorithms (GA), making the networks easier for the user to apply. The developed tool offers an intuitive and friendly graphical interface, both for modelling the ESN and for applying optional pre-processing to the series to be forecast.
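As a rough illustration of the approach this abstract describes, the sketch below implements a minimal echo state network in pure Python and trains a ridge-regression readout for one-step-ahead forecasting. The reservoir size, spectral radius and connection probability are fixed by hand here; in the ESNPredictor tool these are exactly the parameters tuned automatically by GA and PSO. All numeric values are illustrative assumptions.

```python
# Minimal echo state network (ESN) one-step-ahead forecaster in pure Python.
import math
import random

random.seed(42)

N = 20          # reservoir size       (GA/PSO-tuned in the real tool)
RHO = 0.9       # target spectral radius
CONN = 0.3      # connection probability
RIDGE = 1e-4    # readout regularisation

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def spectral_radius(M, iters=100):
    """Crude power-iteration estimate of the dominant eigenvalue magnitude."""
    v = [1.0] * len(M)
    for _ in range(iters):
        v = matvec(M, v)
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / n for x in v]
    w = matvec(M, v)
    return math.sqrt(sum(x * x for x in w))

# Sparse random reservoir, rescaled toward the target spectral radius.
W = [[random.uniform(-1, 1) if random.random() < CONN else 0.0
      for _ in range(N)] for _ in range(N)]
r = spectral_radius(W)
W = [[x * RHO / r for x in row] for row in W]
Win = [random.uniform(-0.5, 0.5) for _ in range(N)]

def run_reservoir(series):
    x, states = [0.0] * N, []
    for u in series:
        x = [math.tanh(Win[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append([1.0, u] + x)    # bias + input feed-through + state
    return states

def ridge_fit(S, y, lam):
    """Solve (S^T S + lam*I) w = S^T y by Gaussian elimination with pivoting."""
    d = len(S[0])
    A = [[sum(s[i] * s[j] for s in S) + (lam if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    b = [sum(s[i] * t for s, t in zip(S, y)) for i in range(d)]
    for c in range(d):
        p = max(range(c, d), key=lambda k: abs(A[k][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for k in range(c + 1, d):
            f = A[k][c] / A[c][c]
            A[k] = [a - f * ac for a, ac in zip(A[k], A[c])]
            b[k] -= f * b[c]
    w = [0.0] * d
    for c in reversed(range(d)):
        w[c] = (b[c] - sum(A[c][j] * w[j] for j in range(c + 1, d))) / A[c][c]
    return w

series = [math.sin(0.2 * t) for t in range(400)]
states = run_reservoir(series)
washout, split = 50, 300
w = ridge_fit(states[washout:split], series[washout + 1:split + 1], RIDGE)
preds = [sum(wi * si for wi, si in zip(w, s)) for s in states[split:-1]]
esn_err = max(abs(p - y) for p, y in zip(preds, series[split + 1:]))
```

On this toy sinusoid the trained readout beats naive persistence by a wide margin; the abstract's point is that the quality of such a model hinges on the reservoir parameters fixed at the top, which is why the tool searches them with GA and PSO instead of by hand.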
APA, Harvard, Vancouver, ISO, and other styles
6

Huyan, Pengfei. "Electromagnetic digital actuators array : characterization of a planar conveyance application and optimized design." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2178/document.

Full text
Abstract:
In mechanical or mechatronic systems, actuators are the components used to convert input energy, generally electrical energy, into mechanical tasks such as motion, force or a combination of both. Analog and digital actuators are the two most common types. Digital actuators have the advantages of open-loop control and low energy consumption compared with analog actuators. However, digital actuators present two main drawbacks. The manufacturing errors of these actuators have to be precisely controlled because, unlike with analog actuators, a manufacturing error cannot be compensated by the control law. Another drawback is their inability to realize continuous tasks because of their discrete stroke. An assembly of several digital actuators can nevertheless realize multi-discrete tasks. This thesis focuses on the experimental characterization and optimized design of a digital actuators array for a planar conveyance application. The first main objective of the present thesis is the characterization of the existing actuators array and of a planar conveyance application based on that array. For that purpose, modeling of the actuators array and experimental tests have been carried out in order to determine the influence of some parameters on the behavior of the array. The second objective is to design a new version of the actuators array based on the experience gained with the first prototype. An optimization of the design has then been realized using genetic algorithm techniques while considering several criteria.
APA, Harvard, Vancouver, ISO, and other styles
7

Tsai, Ya-Lin. "Development of parallel processing algorithms to provide automatic image analysis for medical application." Thesis, University of Sunderland, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336914.

Full text
Abstract:
This thesis describes the development of: (i) an automatic chromosome analysis system capable of producing, to a high degree of accuracy and consistency, a correct classification for damaged chromosomes at low cost, and (ii) a parallel computer system to enable more rapid chromosome analysis. Chromosomes can be examined in a cytogenetics laboratory for a variety of purposes, including an assessment of the effects of ionisation exposure on the genetic code of the cell. Scoring of chromosome aberrations caused by ionising radiation exposure is possible by detecting dicentric chromosomes. In addition, this approach provides a good biological radiation measure (dosimeter). However, current manual methods are extremely time-consuming and expensive with respect to labour costs. For low radiation doses it is necessary to analyse a large number of chromosomes to identify the small number of damaged ones and score the aberrations. Consequently, the main objective of this research programme was to develop a rapid, low-cost and accurate automated chromosome analysis system. This research concentrated solely on scoring dicentric chromosomes, since their characteristic shape is relatively easy to recognise in most cases and they are most commonly created by exposure to radiation. The methods and theories considered in this thesis concern chromosome image selection by automatic segment extraction using the following: grey levels; image extraction by seed aggregation; a two-dimensional function; a moment algorithm for chromosome orientation; chromosome centreline determination; and rapid detection of the centromere of the candidate chromosome. 
The new methods developed by the author and presented herein concern three steps or processes in automatic chromosome analysis. These include (i) a new segmentation scheme, (ii) automatic selection of the cell threshold grey-scale level, and (iii) the design of a new method capable of detecting bent chromosomes with rapid determination of the chromosome centromere. Parallel processing using the processor-farm technique has been successfully developed to enable a more rapid chromosome classification system. The techniques described have been carefully tested and evaluated and have clearly demonstrated the potential application of the author's analysis methods.
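Two of the steps listed above, automatic grey-level threshold selection and region growing by seed aggregation, can be sketched as follows. Otsu's method is used here as a standard stand-in for the thesis's own threshold-selection scheme, and the image and seed are toy values, not cytogenetic data.

```python
# Illustrative sketch: automatic grey-level threshold selection (Otsu's
# method, one common choice) followed by seeded region growing to extract
# one connected bright object from a toy 5x5 grey-level image.
from collections import Counter, deque

def otsu_threshold(pixels):
    """Pick the threshold maximising between-class variance of the histogram."""
    hist = Counter(pixels)
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = sum(c for g, c in hist.items() if g < t)
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum(g * c for g, c in hist.items() if g < t) / w0
        m1 = sum(g * c for g, c in hist.items() if g >= t) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def grow_region(img, seed, threshold):
    """4-connected flood fill over pixels at or above the threshold."""
    h, w = len(img), len(img[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if img[r][c] < threshold:
            continue
        region.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

img = [[10,  12,  11,  10,  10],
       [10, 200, 210,  11,  10],
       [12, 205, 220, 215,  10],
       [10,  11, 208,  12,  10],
       [10,  10,  11,  10, 190]]

t = otsu_threshold([g for row in img for g in row])
obj = grow_region(img, seed=(1, 1), threshold=t)
```

The threshold lands between the dark background and the bright object, and the flood fill recovers only the object connected to the seed, leaving the isolated bright pixel at the corner out, which is the behaviour a segmentation stage needs before centreline and centromere analysis.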
APA, Harvard, Vancouver, ISO, and other styles
8

Wong, Shu-fai, and 黃樹輝. "The Application of human body tracking for the development of a visual interface." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30103009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dwyer, Michael G. "Development and application of novel algorithms for quantitative analysis of magnetic resonance imaging in multiple sclerosis." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6298.

Full text
Abstract:
This document is a critical synopsis of prior work by Michael Dwyer submitted in support of a PhD by published work. The selected work is focused on the application of quantitative magnetic resonance imaging (MRI) analysis techniques to the study of multiple sclerosis (MS). MS is a debilitating disease with a multi-factorial pathology, progression, and clinical presentation. Its most salient feature is focal inflammatory lesions, but it also includes significant parenchymal atrophy and microstructural damage. As a powerful tool for in vivo investigation of tissue properties, MRI can provide important clinical and scientific information regarding these various aspects of the disease, but precise, accurate quantitative analysis techniques are needed to detect subtle changes and to cope with the vast amount of data produced in an MRI session. To address this, eight new techniques were developed by Michael Dwyer and his co-workers to better elucidate focal, atrophic, and occult/"invisible" pathology. These included: a method to better evaluate errors in lesion identification; a method to quantify differences in lesion distribution between scanner strengths; a method to measure optic nerve atrophy; a more precise method to quantify tissue-specific atrophy; a method sensitive to dynamic myelin changes; and a method to quantify iron in specific brain structures. Taken together, these new techniques are complementary and improve the ability of clinicians and researchers to reliably assess various key elements of MS pathology in vivo.
APA, Harvard, Vancouver, ISO, and other styles
10

Nyamugure, Philimon. "Modification, development, application and computational experiments of some selected network, distribution and resource allocation models in operations research." Thesis, University of Limpopo, 2017. http://hdl.handle.net/10386/1930.

Full text
Abstract:
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2017
Operations Research (OR) is a scientific method for developing quantitatively well-grounded recommendations for decision making. While it is true that it uses a variety of mathematical techniques, OR has a much broader scope. It is in fact a systematic approach to solving problems, which uses one or more analytical tools in the process of analysis. Over the years, OR has evolved through different stages. This study is motivated by new real-world challenges that demand efficiency and innovation, in line with the aims and objectives of OR, "the science of better", as described by the OR Society of the United Kingdom. New real-world challenges are encountered on a daily basis from problems arising in the fields of water, energy, agriculture, mining, tourism, IT development, natural phenomena, transport, climate change, the economy and other societal requirements. To counter all these challenges, new techniques ought to be developed. The growth of global markets and the resulting increase in competition have highlighted the need for OR techniques to be improved. These developments, among other reasons, are an indication that new techniques are needed to improve the day-to-day running of organisations, regardless of size, type and location. The principal aim of this study is to modify and develop new OR techniques that can be used to solve emerging problems encountered in the areas of linear programming, integer programming, mixed integer programming, network routing and travelling salesman problems. Distribution models, resource allocation models, the travelling salesman problem, general linear mixed integer programming and other network problems that occur in real life have been modelled mathematically in this thesis. Most of these models belong to the NP-hard class of difficult problems; in other words, they are not known to be solvable in polynomial time, and no efficient general-purpose algorithm for them is known.
The thesis is divided into two major areas, namely: (1) network models and (2) resource allocation and distribution models. Under network models, five new techniques have been developed: the minimum weight algorithm for a non-directed network, the maximum reliability route in both non-directed and directed acyclic networks, the minimum spanning tree with index less than two, routing through k specified nodes, and a new heuristic for the travelling salesman problem. Under the resource allocation and distribution models section, four new models have been developed, and these are: a unified approach to solve transportation and assignment problems, a transportation branch-and-bound algorithm for the generalised assignment problem, a new hybrid search method over the extreme points for solving a large-scale LP model with non-negative coefficients, and a heuristic for a mixed integer program using the characteristic equation approach. In most of the nine approaches developed in the thesis, efforts were made to compare the effectiveness of the new approaches with existing techniques, and improvements in solving problems were noted. However, it was difficult to compare some of the new techniques with existing ones because computational packages for the new techniques need to be developed first; this will be the subject of future research. It was concluded, with strong evidence, that the development of new OR techniques is a must if we are to meet the emerging problems faced by the world today. Key words: NP-hard problem, Network models, Reliability, Heuristic, Large-scale LP, Characteristic equation, Algorithm.
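The travelling-salesman heuristic developed in the thesis is not reproduced here; as an illustration of the construction-plus-improvement style of heuristic the abstract mentions, the sketch below builds a tour by nearest neighbour and then refines it with standard 2-opt moves. The point coordinates are invented for the example.

```python
# Standard TSP heuristic sketch: nearest-neighbour construction followed by
# 2-opt improvement (repeatedly reverse a tour segment if it shortens the tour).
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(pts, start=0):
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 2, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
tour = two_opt(nearest_neighbour(pts), pts)
```

On this six-point grid the greedy construction alone leaves a long closing edge; the 2-opt pass removes it and reaches the optimal perimeter tour of length 6. NP-hardness means no such heuristic is guaranteed optimal in general, which is exactly why the thesis compares heuristics empirically.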
APA, Harvard, Vancouver, ISO, and other styles
11

Rudolph, Jan Daniel [Verfasser], and Matthias [Akademischer Betreuer] Mann. "Development and application of software and algorithms for network approaches to proteomics data analysis / Jan Daniel Rudolph ; Betreuer: Matthias Mann." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2019. http://d-nb.info/1206877723/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Pang, Luping. "Study on development and application of computer aided algorithms using invasive and non-invasive electrical signals in the electrophysiological investigation." Berlin mbv, 2008. http://d-nb.info/989978311/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Uszkoreit, Julian [Verfasser], and Oliver [Akademischer Betreuer] Kohlbacher. "Development and Application of Flexible Algorithms for the Protein Inference Problem in Bottom-up Mass Spectrometry / Julian Uszkoreit ; Betreuer: Oliver Kohlbacher." Tübingen : Universitätsbibliothek Tübingen, 2017. http://d-nb.info/1165506912/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Hom, Geoffrey Deshaies Raymond Joseph. "Advances in computational protein design : development of more efficient search algorithms and their application to the full-sequence design of larger proteins /." Diss., Pasadena, Calif. : California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-05302005-223153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Diao, Jie. "Development of Techniques to Quantify Chemical and Mechanical Modifications of Polymer Surfaces: Application to Chemical Mechanical Polishing." Diss., Available online, Georgia Institute of Technology, 2004, 2004. http://etd.gatech.edu/theses/available/etd-11222004-001703/.

Full text
Abstract:
Thesis (Ph. D.)--Chemical Engineering, Georgia Institute of Technology, 2006.
Samuels, Robert J., Committee Member ; Henderson, Clifford L., Committee Member ; Danyluk, Steven, Committee Member ; Hess, Dennis W., Committee Chair ; Bottomley, Lawrence A., Committee Member ; Morris, Jeffrey F., Committee Co-Chair. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
16

Klasen, Jonas Raphael [Verfasser], and Maarten [Akademischer Betreuer] Koornneef. "Development and application of statistical algorithms for the detection of additive and interacting loci underlying quantitative traits / Jonas Raphael Klasen. Gutachter: Maarten Koornneef." Köln : Universitäts- und Stadtbibliothek Köln, 2015. http://d-nb.info/1080719172/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Cai, Ji [Verfasser], Bülent [Gutachter] Tezkan, and Michael [Gutachter] Becken. "Development of 1D and 2D Joint Inversion Algorithms for Semi-Airborne and LOTEM Data: A Data Application from Eastern Thuringia, Germany / Ji Cai ; Gutachter: Bülent Tezkan, Michael Becken." Köln : Universitäts- und Stadtbibliothek Köln, 2020. http://d-nb.info/1218230185/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Bishop, Courtney Alexandra. "Development and application of image analysis techniques to study structural and metabolic neurodegeneration in the human hippocampus using MRI and PET." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:2549bad2-432f-4d0e-8878-be9cce6ae0d2.

Full text
Abstract:
Despite the association between hippocampal atrophy and a vast array of highly debilitating neurological diseases, such as Alzheimer’s disease and frontotemporal lobar degeneration, tools to accurately and robustly quantify the degeneration of this structure still largely elude us. In this thesis, we firstly evaluate previously-developed hippocampal segmentation methods (FMRIB’s Integrated Registration and Segmentation Tool (FIRST), Freesurfer (FS), and three versions of a Classifier Fusion (CF) technique) on two clinical MR datasets, to gain a better understanding of the modes of success and failure of these techniques, and to use this acquired knowledge for subsequent method improvement (e.g., FIRSTv3). Secondly, a fully automated, novel hippocampal segmentation method is developed, termed Fast Marching for Automated Segmentation of the Hippocampus (FMASH). This combined region-growing and atlas-based approach uses a 3D Sethian Fast Marching (FM) technique to propagate a hippocampal region from an automatically-defined seed point in the MR image. Region growth is dictated by both subject-specific intensity features and a probabilistic shape prior (or atlas). Following method development, FMASH is thoroughly validated on an independent clinical dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), with an investigation of the dependency of such atlas-based approaches on their prior information. In response to our findings, we subsequently present a novel label-warping approach to effectively account for the detrimental effects of using cross-dataset priors in atlas-based segmentation. Finally, a clinical application of MR hippocampal segmentation is presented, with a combined MR-PET analysis of wholefield and subfield hippocampal changes in Alzheimer’s disease and frontotemporal lobar degeneration. 
This thesis therefore contributes both novel computational tools and valuable knowledge for further neurological investigations in both the academic and the clinical field.
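The fast-marching front propagation at the heart of FMASH can be approximated in a few lines. True fast marching solves the Eikonal equation with an upwind finite-difference scheme; the sketch below uses the closely related Dijkstra approximation, in which arrival time accumulates 1/speed along grid edges and the segmented region is everything reached before a cutoff time. The speed map and seed are toy values, not the subject-specific intensity and shape-prior terms FMASH uses.

```python
# Fast-marching-style region growing on a pixel grid (Dijkstra approximation):
# a front expands from a seed, moving quickly where the speed map is high and
# stalling at low-speed "boundary" pixels.
import heapq

def fast_march(speed, seed, t_max):
    h, w = len(speed), len(speed[0])
    arrival = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > arrival.get((r, c), float("inf")):
            continue                      # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and speed[nr][nc] > 0:
                nt = t + 1.0 / speed[nr][nc]
                if nt <= t_max and nt < arrival.get((nr, nc), float("inf")):
                    arrival[(nr, nc)] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return arrival

# High speed inside the "structure", near-zero speed at its boundary column,
# so the front floods the structure and stalls at the edge.
speed = [[1.0, 1.0, 0.01, 1.0],
         [1.0, 1.0, 0.01, 1.0],
         [1.0, 1.0, 0.01, 1.0]]
region = fast_march(speed, seed=(1, 0), t_max=5.0)
```

In a real segmentation the speed map is derived from image intensities and a probabilistic shape prior, so the front slows exactly where the structure's boundary is likely to be.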
APA, Harvard, Vancouver, ISO, and other styles
19

Hunt, Julian David. "Integration of rationale management with multi-criteria decision analysis, probabilistic forecasting and semantics : application to the UK energy sector." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:2cc24d23-3e93-42e0-bb7a-6e39a65d7425.

Full text
Abstract:
This thesis presents a new integrated tool and decision support framework to approach complex problems resulting from the interaction of many multi-criteria issues. The framework is embedded in an integrated tool called OUTDO (Oxford University Tool for Decision Organisation). OUTDO integrates Multi-Criteria Decision Analysis (MCDA), decision rationale management with a modified Issue-Based Information Systems (IBIS) representation, and probabilistic forecasting to effectively capture the essential reasons why decisions are made and to dynamically re-use the rationale. In doing so, it allows exploration of how changes in external parameters affect complicated and uncertain decision making processes in the present and in the future. Once the decision maker constructs his or her own decision process, OUTDO checks if the decision process is consistent and coherent and looks for possible ways to improve it using three new semantic-based decision support approaches. For this reason, two ontologies (the Decision Ontology and the Energy Ontology) were integrated into OUTDO to provide it with these semantic capabilities. The Decision Ontology keeps a record of the decision rationale extracted from OUTDO and the Energy Ontology describes the energy generation domain, focusing on the water requirement in thermoelectric power plants. A case study, with the objective of recommending electricity generation and steam condensation technologies for ten different regions in the UK, is used to verify OUTDO’s features and reach conclusions about the overall work.
APA, Harvard, Vancouver, ISO, and other styles
20

Mallangi, Siva Sai Reddy. "Low-Power Policies Based on DVFS for the MUSEIC v2 System-on-Chip." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229443.

Full text
Abstract:
Multifunctional health-monitoring wearable devices are quite prominent these days. Usually these devices are battery-operated and consequently limited by their battery life (from a few hours to a few weeks depending on the application). It has been realized that these devices, currently operated at a fixed voltage and frequency, are capable of operating at multiple voltages and frequencies; by switching to lower values when the power requirements allow, they can achieve substantial energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven handy in this situation for an efficient trade-off between energy and timely behavior. Within imec, wearable devices make use of the indigenously developed MUSEIC v2 (Multi Sensor Integrated Circuit version 2.0). This system is optimized for efficient and accurate collection, processing, and transfer of data from multiple (health) sensors, but it has limited means of controlling the voltage and frequency dynamically. In this thesis we explore how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were conducted to find the optimum power modes and to scale the supply voltage and frequency up and down efficiently. Considering the overhead incurred when switching voltage and frequency, a transition analysis was also performed. Real-time and non-real-time benchmarks were implemented based on these techniques and their performance results were obtained and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed to identify a suitable technique. Using our proposed scaling-technique implementation, we achieved an 86.95% power reduction on average, in contrast to the conventional operation of the MUSEIC v2 chip's processor at a fixed voltage and frequency. 
Techniques that include light-sleep and deep-sleep modes were also studied and implemented, testing the system's capability to accommodate Dynamic Power Management (DPM) techniques that can achieve greater benefits. A novel approach for implementing the deep-sleep mechanism was also proposed and was found to obtain up to 71.54% power savings compared to the traditional way of executing deep-sleep mode.
APA, Harvard, Vancouver, ISO, and other styles
21

Liu, Kuan-Liang, and 劉冠良. "Development of far-field acoustic imaging algorithms using an optimized random array." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/02284395280260479666.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Mechanical Engineering
ROC academic year 97 (2008-2009)
Arrays with sparse, random microphone deployment are known to be capable of delivering high-quality far-field images without grating lobes. Numerical simulations are undertaken in this thesis to optimize the microphone deployment. Global optimization techniques, including the Monte Carlo (MC) algorithm, the Simulated Annealing (SA) algorithm, and the Intra-Block Monte Carlo (IBMC) algorithm, are exploited to find the optimal microphone deployment efficiently. As conventional wisdom predicts, the results reveal that randomized deployment is required to avoid grating lobes. The combined use of the SA and IBMC algorithms enables an efficient search for satisfactory deployments with excellent beam patterns and a relatively uniform distribution of microphones. For direction-of-arrival (DOA) estimation, the planar wave sources are assumed to be spherical wave sources in this thesis. Far-field acoustic imaging algorithms, including the delay-and-sum (DAS) algorithm, the time reversal (TR) algorithm, the single-input multiple-output equivalent source inverse filtering (SIMO-ESIF) algorithm, the Minimum Variance Distortionless Response (MVDR) algorithm, and the Multiple Signal Classification (MUSIC) algorithm, are employed to estimate DOA. Results show that the MUSIC algorithm attains the highest resolution in localizing sound source positions.
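To illustrate the delay-and-sum principle this abstract builds on, here is a minimal narrowband DAS sketch for far-field DOA estimation with a randomized linear microphone layout. The positions, frequency, and source angle are invented illustration values, not the thesis's optimized deployment:

```python
import numpy as np

# Narrowband delay-and-sum (DAS) DOA sketch for an arbitrary
# (e.g. randomized) linear microphone layout. Noise-free, single
# source, single snapshot -- purely to show the scan-and-peak idea.

c, f = 343.0, 2000.0                     # speed of sound (m/s), tone (Hz)
k = 2 * np.pi * f / c                    # wavenumber
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 0.5, 8))    # 8 random mic positions (m)

def steering(theta):
    # plane-wave phase delays for arrival angle theta (rad from broadside)
    return np.exp(1j * k * x * np.sin(theta))

theta_true = np.deg2rad(20.0)
snapshot = steering(theta_true)          # ideal array snapshot

scan = np.deg2rad(np.linspace(-90, 90, 721))          # 0.25 deg grid
power = [abs(np.vdot(steering(t), snapshot)) ** 2 for t in scan]
theta_hat = scan[int(np.argmax(power))]
print(f"estimated DOA: {np.rad2deg(theta_hat):.2f} deg")
```

With a randomized layout, the sidelobes of `power` stay below the mainlobe, which is the grating-lobe suppression the thesis optimizes for; MVDR and MUSIC replace the matched-filter scan with variance-minimizing and subspace criteria, respectively.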
APA, Harvard, Vancouver, ISO, and other styles
22

"Application of Machine Learning Algorithm to Forecast Load and Development of a Battery Control Algorithm to Optimize PV System Performance in Phoenix, Arizona." Master's thesis, 2018. http://hdl.handle.net/2286/R.I.51560.

Full text
Abstract:
The students of Arizona State University, under the mentorship of Dr. George Karady, have been collaborating with Salt River Project (SRP), a major power utility in the state of Arizona, to study and optimize a battery-supported, grid-tied rooftop photovoltaic (PV) system sold by a commercial vendor. SRP believes this system has the potential to satisfy the needs of its customers who opt to use solar power to partially meet their power needs. An important part of this elaborate project is the development of a new load forecasting algorithm and a better control strategy for optimized utilization of the storage system. The built-in algorithm of this commercial unit uses simple forecasting and battery control strategies. With recent improvements in Machine Learning (ML) techniques, development of a more sophisticated model of the problem at hand became possible. This research aims to achieve that goal by utilizing appropriate ML techniques to better model the problem, which essentially results in a better solution. In this research, a set of six unique features is used to model the load forecasting problem, and different ML algorithms are simulated on the developed model. A similar approach is taken to solve the PV prediction problem. Finally, a very effective battery control strategy is built (utilizing the results of the load and PV forecasting) with the aim of reducing the amount of energy consumed from the grid during "on-peak" hours. Apart from the reduction in energy consumption, this battery control algorithm decelerates "cycling aging," the aging of the battery owing to the charge/discharge cycles it endures, by selectively charging/discharging the battery based on need. The results of this proposed strategy are verified using a hardware implementation (the PV system was coupled with a custom-built load bank, and this setup was used to simulate a house).
The results pertaining to the performances of the built-in algorithm and the ML algorithm are compared, and an economic analysis is performed. The findings of this research are in the process of being published in a reputed journal.
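The on-peak reduction goal described above can be sketched with a simple rule-based dispatch: store PV surplus off-peak, discharge during on-peak hours. The tariff window, ratings, and load/PV profiles below are invented illustration values, not the thesis's data or its forecasting-driven controller:

```python
# Hedged sketch of rule-based battery dispatch that cuts grid draw
# during on-peak hours. A forecasting-based controller (as in the
# thesis) would replace the fixed profiles with predicted ones.

ON_PEAK = range(15, 20)          # assumed on-peak window: 15:00-19:59
CAPACITY_KWH = 10.0
MAX_RATE_KW = 3.0

def dispatch(load_kw, pv_kw, soc_kwh):
    """Simulate 24 hourly steps; return (hourly grid import, final SoC)."""
    grid = []
    for hour, (load, pv) in enumerate(zip(load_kw, pv_kw)):
        net = load - pv                      # net demand after PV
        if hour in ON_PEAK and net > 0:
            discharge = min(net, MAX_RATE_KW, soc_kwh)
            soc_kwh -= discharge
            net -= discharge
        elif net < 0:                        # PV surplus: store it
            charge = min(-net, MAX_RATE_KW, CAPACITY_KWH - soc_kwh)
            soc_kwh += charge
            net += charge
        grid.append(net)
    return grid, soc_kwh

load = [1.0] * 24                            # flat 1 kW house load
pv = [0.0] * 8 + [2.0] * 8 + [0.0] * 8       # crude midday solar profile
grid, soc = dispatch(load, pv, soc_kwh=0.0)
on_peak_energy = sum(grid[h] for h in ON_PEAK)
print(f"on-peak grid energy: {on_peak_energy} kWh, final SoC: {soc} kWh")
```

Charging only from PV surplus and discharging only on-peak also limits the number of charge/discharge cycles, which is the "cycling aging" consideration the abstract mentions.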
Dissertation/Thesis
Masters Thesis Electrical Engineering 2018
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Zhi-Xing, and 陳志行. "Optimized application on Roll cage Applied to passenger cars vehicle rigidity and Development." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/fe6z6d.

Full text
Abstract:
Master's thesis
Tungnan University
Graduate Institute of Mechanical Engineering
ROC academic year 107 (2018-2019)
For this thesis, a school-built 11th-generation ALTIS was measured directly and modeled in PRO/E drawing software; after the model was completed, its dimensions were verified against the vehicle's repair manual, yielding a baseline ALTIS model onto which commercially available roll-cage geometries were added. The resulting models were imported into the ANSYS analysis software, assigned a rigid structural material, and subjected to linear static analysis in Workbench, and the geometry was then refined according to the analysis results. Under the vehicle-steering load case, the ALTIS-XV variant suppressed deformation by 3.0%, the best result among all models. Under the torsional stiffness condition, the ALTIS-XV gained 8.2% in rigidity per unit of added weight, while the model with the highest overall rigidity gain among all models improved by 19.5%. Considering all conditions together, the ALTIS-XV performed best, ranking first in both load case 2 and torsional rigidity, with the highest deformation suppression per unit of added weight. The best geometric form designed in this work can greatly improve structural rigidity without interfering with interior space and comfort, so it can be effectively integrated into the rigid structure of passenger cars. The best model design of this thesis is therefore the ALTIS-XV type.
APA, Harvard, Vancouver, ISO, and other styles
24

Liao, Sheng-Lun, and 廖聖侖. "Development of New Quantum Optimal Control Algorithms and Exact Optimized Effective Potential in Time-dependent Density Functional Theory." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/y4j932.

Full text
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Physics
ROC academic year 106 (2017-2018)
Nowadays, advances in science not only aim at observing and uncovering novel physical and chemical phenomena but also attempt to control the ultrafast electronic dynamics. To this end, we develop efficient convergent algorithms for quantum optimal control problems and formulate solvable equations to implement orbital-dependent functionals in time-dependent density functional theory (TDDFT). In the first major part of this thesis, a fast-kick-off search algorithm is presented for quickly finding optimal control fields in the state-to-state transition probability control problems, especially those with poorly chosen initial control fields. This new algorithm is based on the efficient monotonically convergent iteration algorithm, the two-point boundary-value quantum control paradigm (TBQCP) method, aided by the implementation of an instantaneous overlap function that monitors the search progress throughout. Our numerical control simulations for vibrational state-to-state transitions and for ultrafast electron tunneling have demonstrated that the new algorithm not only can greatly improve the search efficiency over its original one, but it also can attain good monotonic convergence quality in the case of the frequency constraints. We also extend the TBQCP method to the mixed-states quantum optimal control problem, and study the maximum attainable field-free molecular orientation with optimally shaped linearly polarized near-single-cycle THz laser pulses of a thermal ensemble. Large-scale benchmark optimal control simulations are performed, including rotational energy levels with the rotational quantum numbers up to J = 100 for OCS linear molecules. As a result, it is shown that a very high degree of field-free orientation can be achieved by strong, optimally shaped near-single-cycle THz pulses. The second major part of this thesis is devoted to our recent work on the exact time-dependent optimized effective potential (TDOEP) in TDDFT. 
In order to tackle the long-standing challenge of solving the exact TDOEP integral equation derived from orbital-dependent functionals, we formulate a completely equivalent Sturm-Liouville-type time-local TDOEP equation that admits a unique real-time solution in terms of time-dependent Kohn-Sham and effective memory orbitals. The time-local formulation is numerically implemented to study the many-electron dynamics of a one-dimensional hydrogen chain. It is shown that the long-time behavior of the electric dipole converges correctly and the zero-force theorem is fulfilled in the current implementation. We further conduct non-adiabatic TDDFT calculations for the one-dimensional two-electron helium model based on the time-local TDOEP equation. By comparing the time-dependent dipole moment and probability density, we show that the TDOEP approach is more accurate than the adiabatic local spin density approximation (ALSDA) and the Krieger-Li-Iafrate (KLI) approximation. It is found that the non-adiabatic and memory-dependent terms in the time-local TDOEP equation elaborately refine the time-dependent structure of the exchange-correlation potential and yield a probability density evolution consistent with time-dependent Schrödinger equation solutions. These findings take a crucial step toward further studies of memory effects in TDDFT. Our new developments of the optimal control methods can be extended to the efficient and accurate investigation of a broad range of quantum optimal control problems in novel chemical and physical processes of current interest. And our new development of the time-local TDOEP equation represents a major breakthrough in the formulation of non-adiabatic real-time TDDFT. Much remains to be explored for the many-electron non-adiabatic quantum dynamics of atomic, molecular, and condensed matter systems in the future.
APA, Harvard, Vancouver, ISO, and other styles
25

Lin, Ming Tzer, and 林銘澤. "An application framework for development and comparison of curve detection algorithms." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/40410535152241100954.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Master's Program, Department of Computer Science and Information Engineering
ROC academic year 91 (2002-2003)
Curve detection is a core technique of computer vision and image analysis. Its detection rate has a direct impact on overall image and vision tasks, and its performance is key to real-time recognition. Thus, selecting the most suitable curve detection algorithm for an application is an important issue. Comparison of curve detection algorithms helps application developers and researchers make the best choice for their specific applications. However, a standard platform for comparing curve detection algorithms does not exist. Therefore, we propose a fair and reliable comparison framework. It helps developers obtain complete comparison information and experimental details in a time-efficient manner, and relieves them of tedious chores in collecting, collating, and comparing experimental data. A number of showcases were developed with the framework to demonstrate how the proposed application framework facilitates the development and comparison of curve detection algorithms.
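The core of such a comparison framework can be sketched as a harness that runs each candidate algorithm on the same test cases and collects detection rate and runtime. The two "detectors" and the toy test cases below are hypothetical stand-ins, not algorithms from the thesis:

```python
import time

# Minimal comparison-harness sketch: run several curve detectors on
# shared test cases, scoring detection rate against ground truth and
# measuring wall-clock time. Real inputs would be images; here each
# "image" is just the set of curve labels present in it.

def compare(detectors, cases):
    """detectors: name -> callable(image) -> set of detected curves.
    cases: list of (image, ground_truth_set) pairs."""
    total = sum(len(truth) for _, truth in cases)
    results = {}
    for name, detect in detectors.items():
        hits, start = 0, time.perf_counter()
        for image, truth in cases:
            hits += len(detect(image) & truth)   # true positives
        elapsed = time.perf_counter() - start
        results[name] = {"rate": hits / total, "seconds": elapsed}
    return results

cases = [({"arc", "line"}, {"arc", "line"}), ({"circle"}, {"circle"})]
detectors = {
    "detector_a": lambda img: img,             # finds every visible curve
    "detector_b": lambda img: img - {"line"},  # hypothetical: misses lines
}
report = compare(detectors, cases)
print(report)
```

A fuller harness in the spirit of the framework would also record false positives, per-case timings, and parameter settings so that comparisons remain reproducible.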
APA, Harvard, Vancouver, ISO, and other styles
26

Amoozgar, M. Hadi. "Development of Fault Diagnosis and Fault Tolerant Control Algorithms with Application to Unmanned Systems." Thesis, 2012. http://spectrum.library.concordia.ca/974548/1/Amoozgar_MASc_F2012.pdf.

Full text
Abstract:
Unmanned vehicles are increasingly employed in real life. They include unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned spacecraft, and unmanned underwater vehicles (UUVs). Unmanned vehicles, like any other autonomous systems, need controllers to stabilize and control them. On the other hand, unmanned systems may be subject to different faults. Detecting a fault and determining its location and severity are crucial for unmanned vehicles. Once enough information about a fault is available, the controller must be redesigned based on the post-fault characteristics of the system. The resulting controlled system can then tolerate the fault and may achieve better performance. The main focus of this thesis is to develop Fault Detection and Diagnosis (FDD) algorithms and Fault Tolerant Controllers (FTC) to increase the performance, safety, and reliability of various missions using unmanned systems. In the field of unmanned ground vehicles, a new kinematic control method is proposed for trajectory tracking of nonholonomic Wheeled Mobile Robots (WMRs). It has been experimentally tested on a UGV called the Qbot. A stable leader-follower formation controller for time-varying formation configurations of multiple nonholonomic wheeled mobile robots is also presented and examined through computer simulation. In the field of unmanned aerial vehicles, the Two-Stage Kalman Filter (TSKF), the Adaptive Two-Stage Kalman Filter (ATSKF), and the Interacting Multiple Model (IMM) filter are proposed for FDD of a quadrotor helicopter testbed in the presence of actuator faults. As for space missions, an FDD algorithm for the attitude control system of the Japan Canada Joint Collaboration Satellite - Formation Flying (JC2Sat-FF) mission has been developed. The FDD scheme was achieved using an IMM-based FDD algorithm. The efficiency of the FDD algorithm has been shown through simulation results in a nonlinear simulator of the JC2Sat-FF.
A fault tolerant fuzzy gain-scheduled PID controller has also been designed for a quadrotor unmanned helicopter in the presence of actuator faults. The developed FDD algorithms and fuzzy controller were evaluated through experimental application to a quadrotor helicopter testbed called Qball-X4.
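The Kalman-filter-based FDD idea underlying the abstract can be illustrated with a scalar example: an actuator bias injected mid-run drives the filter's innovation (residual) beyond a statistical threshold. This is a generic residual-test sketch with made-up parameters, not the thesis's TSKF/ATSKF/IMM filters:

```python
import numpy as np

# Residual-based fault detection with a scalar Kalman filter: flag a
# fault when the normalized innovation exceeds a 3-sigma threshold.
# A persistent actuator bias is injected at step 100.

rng = np.random.default_rng(1)
a, q, r = 0.95, 0.01, 0.04        # state transition, process/meas. noise
x_true, x_hat, p = 0.0, 0.0, 1.0
alarms = []
for k in range(200):
    bias = 0.8 if k >= 100 else 0.0          # actuator fault after k=100
    x_true = a * x_true + bias + rng.normal(0, np.sqrt(q))
    z = x_true + rng.normal(0, np.sqrt(r))
    # predict
    x_hat, p = a * x_hat, a * a * p + q
    # innovation and its variance; 3-sigma residual test
    nu, s = z - x_hat, p + r
    alarms.append(abs(nu) / np.sqrt(s) > 3.0)
    # update
    gain = p / s
    x_hat, p = x_hat + gain * nu, (1 - gain) * p
print(f"alarms before fault: {sum(alarms[:100])}, after: {sum(alarms[100:])}")
```

Because the fault is not in the filter's model, the innovation stays biased after step 100, so the alarm persists; two-stage and IMM filters go further by also estimating the bias magnitude or the active fault mode.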
APA, Harvard, Vancouver, ISO, and other styles
27

Kala, S. "ASIC Implementation of A High Throughput, Low Latency, Memory Optimized FFT Processor." Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2557.

Full text
Abstract:
The rapid advancements in semiconductor technology have led to constant shrinking of transistor sizes as per Moore's Law. Wireless communications is one field which has seen explosive growth, thanks to the cramming of more transistors into a single chip. Design of these systems involves trade-offs between performance, area, and power. The Fast Fourier Transform is an important component in most wireless communication systems. FFTs are widely used in applications like OFDM transceivers, spectrum sensing in cognitive radio, image processing, radar signal processing, etc. The FFT is the most compute-intensive and time-consuming operation in most of the above applications. It is always a challenge to develop an architecture which gives high throughput while reducing latency without much area overhead. Next-generation wireless systems demand high transmission efficiency, and hence the FFT processor should be capable of computing much faster. Architectures based on smaller radices for computing longer FFTs are inefficient. In this thesis, a fully parallel unrolled FFT architecture based on a novel radix-4 engine is proposed which caters to a wide range of applications. The radix-4 butterfly unit takes all four inputs in parallel and can selectively produce one of the four outputs. The proposed architecture uses Radix-4^3 and Radix-4^4 algorithms for computation of various FFTs. The Radix-4^4 block can take all 256 inputs in parallel and can use the select control signals to generate one of the 256 outputs. In existing Cooley-Tukey architectures, the output from each stage has to be reordered before the next stage can start computation, which requires intermediate storage after each stage. In our architecture, each stage can directly generate the reordered outputs and hence reduce these buffers. A solution to the output reordering problem in Radix-4^3 and Radix-4^4 FFT architectures is also discussed in this work.
Although the hardware complexity in terms of adders and multipliers is increased in our architecture, a significant reduction in intermediate memory requirement is achieved. FFTs of varying sizes from 64-point to 64K-point have been implemented in ASIC using UMC 130nm CMOS technology. The data representation used in this work is fixed-point format, and the selected word length is 16 bits to get the maximum Signal to Quantization Noise Ratio (SQNR). The architecture has been found to be more suitable for computing FFTs of large sizes. For 4096-point and 64K-point FFTs, this design gives comparable throughput with considerable reduction in area and latency when compared to state-of-the-art implementations. The 64K-point FFT architecture resulted in a throughput of 1332 mega samples per second with an area of 171.78 mm^2 and total power of 10.7 W at 333 MHz.
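The radix-4 butterfly this architecture builds on can be shown in software: each stage combines four quarter-length transforms with twiddle factors and the powers of -j. This is a float, recursive sketch of the algorithm only; the thesis's hardware version is unrolled, fixed-point, and reorders outputs in-stage:

```python
import cmath

# Recursive radix-4 decimation-in-time FFT for N a power of 4,
# checked against a direct O(N^2) DFT.

def fft4(x):
    n = len(x)
    if n == 1:
        return list(x)
    # transform the four decimated subsequences
    y = [fft4(x[i::4]) for i in range(4)]
    out = [0j] * n
    for k in range(n // 4):
        # twiddle the quarter outputs, then apply the 4-point butterfly
        w = [cmath.exp(-2j * cmath.pi * i * k / n) * y[i][k]
             for i in range(4)]
        out[k]              = w[0] + w[1] + w[2] + w[3]
        out[k + n // 4]     = w[0] - 1j * w[1] - w[2] + 1j * w[3]
        out[k + n // 2]     = w[0] - w[1] + w[2] - w[3]
        out[k + 3 * n // 4] = w[0] + 1j * w[1] - w[2] - 1j * w[3]
    return out

def dft(x):                      # brute-force reference
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * k / n)
                for t in range(n)) for k in range(n)]

x = [complex(i % 5, 0) for i in range(64)]
err = max(abs(a - b) for a, b in zip(fft4(x), dft(x)))
print("max error vs direct DFT:", err)
```

The four output lines are the radix-4 butterfly: multiplying branch i of output q by (-j)^(i*q), which in hardware costs only sign swaps and real/imaginary exchanges rather than full multipliers.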
APA, Harvard, Vancouver, ISO, and other styles
28

Miller, Nathan Daniel. "Enhancing the study of seedling form and development through the application of computer vision algorithms." 2008. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Violette, A., D. F. Cortes, J. A. Bergeon, Robert A. Falconer, and I. Toth. "Optimized LC-MS/MS quantification method for the detection of piperacillin and application to the development of charged liposaccharides as penetration enhancers." 2008. http://hdl.handle.net/10454/4619.

Full text
Abstract:
Piperacillin, a potent β-lactam antibiotic, is effective in a large variety of Gram-positive and Gram-negative bacterial infections, but its administration is limited to the parenteral route as it is not absorbed when given orally. In an attempt to overcome this problem, we have synthesized a novel series of charged liposaccharide complexes of piperacillin comprising a sugar moiety derived from D-glucose conjugated to a lipoamino acid residue with varying side-chain length (cationic entity) and the piperacillin anion. A complete multiple-reaction-monitoring LC-MS/MS method was developed to detect and characterize the synthesized complexes. The same method was then successfully applied to assess the in vitro apparent permeability values of the charged liposaccharide complexes in Caco-2 monolayers.
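Apparent permeability from Caco-2 transport assays is commonly computed as Papp = (dQ/dt) / (A · C0). A small sketch of that calculation with invented numbers (these are not values from the paper):

```python
# Papp = (dQ/dt) / (A * C0), the standard Caco-2 apparent-permeability
# formula. Units: ug/s divided by (cm^2 * ug/cm^3) gives cm/s, and
# 1 mL = 1 cm^3, so donor concentration in ug/mL needs no conversion.

def apparent_permeability(dq_dt, area_cm2, c0):
    """Apparent permeability in cm/s.

    dq_dt    : transport rate into the receiver chamber (ug/s)
    area_cm2 : insert membrane area (cm^2)
    c0       : initial donor concentration (ug/mL)
    """
    return dq_dt / (area_cm2 * c0)

# e.g. 0.002 ug/s across a 1.12 cm^2 monolayer from a 100 ug/mL donor
papp = apparent_permeability(0.002, 1.12, 100.0)
print(f"Papp = {papp:.3e} cm/s")
```

In practice dQ/dt is taken as the slope of cumulative receiver-side amount versus time over the linear (sink-condition) portion of the assay.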
BBSRC
APA, Harvard, Vancouver, ISO, and other styles
30

Lo, Yun-Ta, and 羅運達. "Development and Application of Optimal Variable Step-Size NLMS Algorithms in Feedforward Active Noise Control Subject to Disturbance." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/18845319951223551987.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Mechanical Engineering
ROC academic year 100 (2011-2012)
In active noise control (ANC) systems, the weight update of the adaptive filter is affected by the disturbance picked up by the error microphone, degrading system performance. In addition, existing adaptive variable step-size (VSS) algorithms share a common issue in the selection of step-size function parameters: if the parameters are chosen poorly, the control performance of the system is substantially reduced, yet parameter selection usually relies only on rules of thumb, which makes these algorithms inconvenient to use. This thesis therefore proposes Optimal Variable Step-Size (OVSS) NFxLMS/CE_DC algorithms, which reduce the number of algorithm parameters that must be selected and are thus more convenient to use; owing to the design of the disturbance compensator (DC), the adaptability of the system to the primary noise source is further enhanced. Computer simulations show that the proposed method has better performance and robustness compared to existing methods.
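The NLMS update at the core of such schemes, with a simple error-driven variable step size, can be sketched in a plain system-identification setting. This omits the secondary path and disturbance compensator, so it is NLMS with a generic VSS rule rather than the thesis's NFxLMS/CE_DC algorithm; all parameters are illustrative:

```python
import numpy as np

# NLMS adaptive filter identifying an unknown 3-tap plant, with a
# simple variable step size that shrinks as the error power decays.

rng = np.random.default_rng(2)
h = np.array([0.5, -0.3, 0.2])           # unknown plant (to identify)
L, eps = 3, 1e-8
w = np.zeros(L)                          # adaptive weights
mu, mu_min, mu_max = 1.0, 0.05, 1.0

x_hist = np.zeros(L)
for n in range(2000):
    x = rng.normal()
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = x
    d = h @ x_hist + 1e-3 * rng.normal()         # desired + disturbance
    e = d - w @ x_hist                           # a priori error
    # error-driven VSS rule: large error -> large step, small -> mu_min
    mu = float(np.clip(0.97 * mu + 0.03 * min(e * e * 50, mu_max),
                       mu_min, mu_max))
    # normalized LMS weight update
    w += mu * e * x_hist / (x_hist @ x_hist + eps)

print("identified weights:", np.round(w, 3))
```

The trade-off the VSS rule manages is visible here: a large step size converges fast but amplifies the disturbance's effect on the weights, while a small one tracks slowly but settles with low misadjustment.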
APA, Harvard, Vancouver, ISO, and other styles
31

Kasaiezadeh, Mahabadi Seyed Alireza. "Development of New Global Optimization Algorithms Using Stochastic Level Set Method with Application in: Topology Optimization, Path Planning and Image Processing." Thesis, 2012. http://hdl.handle.net/10012/6803.

Full text
Abstract:
A unique mathematical tool is developed to deal with the global optimization of a set of engineering problems. These include image processing, mechanical topology optimization, and optimal path planning in a variational framework, as well as some benchmark problems in parameter optimization. The optimization tool in these applications is based on level set theory, by which an evolving contour converges toward the optimum solution. Depending upon the application, the objective function is defined, and level set theory is then used for optimization. Level set theory, as a member of the active contour methods, is an extension of the steepest descent method in conventional parameter optimization to the variational framework. It intrinsically suffers from trapping in local solutions, a common drawback of gradient-based optimization methods. In this thesis, methods are developed to deal with this drawback of the level set approach. Investigating the current global optimization methods, one can conclude that they usually cannot be extended to the variational framework, or, if they can, the computational costs become drastically expensive. To cope with this complexity, a global optimization algorithm is first developed in parameter space and compared with the existing methods. This method is called "Spiral Bacterial Foraging Optimization" (SBFO) because it is inspired by the aggregation process of a particular organism, Dictyostelium discoideum. Regardless of the real phenomenon behind SBFO, it leads to new ideas for developing global optimization methods. According to these ideas, an effective global optimization method should have i) a stochastic operator, and/or ii) a multi-agent structure. These two properties are very common in existing global optimization methods. To improve the computational time and costs, the algorithm may include gradient-based approaches to increase the convergence speed.
This property is particularly available in SBFO and is the basis on which SBFO can be extended to the variational framework. To mitigate the computational costs of the algorithm, the use of gradient-based approaches can be helpful. Therefore, SBFO, as a multi-agent stochastic gradient-based structure, can be extended to a multi-agent stochastic level set method. The variational setup is formulated in three steps: i) a single stochastic level set method, called "Active Contours with Stochastic Fronts" (ACSF); ii) a multi-agent stochastic level set method (MSLSM); and iii) a stochastic level set method without gradients, such as the E-ARC algorithm. For image processing applications, the first two steps have been implemented and show significant improvement in the results. As expected, a multi-agent structure is more accurate in terms of its ability to find the global solution, but it is much more computationally expensive. According to the results, if one uses an initial level set with enough holes in its topology, a single stochastic level set method can achieve almost the same level of accuracy as a multi-agent structure. Therefore, for a topology optimization problem, which requires heavy computation (a finite element model must be solved at each iteration), only ACSF with an initial guess containing multiple holes is implemented. In some applications, such as optimal path planning, objective functions are usually very complicated; finding a closed-form equation for the objective function and its gradient is therefore impossible or sometimes very computationally expensive. In these situations, level set theory and its extensions cannot be directly employed. As a result, the Evolving Arc algorithm, inspired by the "electric arc" in nature, is proposed. The results show that it can be a good solution for either unconstrained or constrained problems.
Finally, a rigorous convergence analysis for SBFO and ACSF is presented, which is new among global optimization methods in both the parameter and variational frameworks.
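The two ingredients the thesis identifies for escaping local minima, a stochastic operator and a multi-agent structure, can be illustrated with a generic parameter-space search on a multimodal benchmark. This is a simple illustrative optimizer, not the SBFO or stochastic level set algorithms themselves:

```python
import math
import random

# Gradient-free, multi-agent stochastic search on the 2-D Rastrigin
# function (global minimum 0 at the origin, many local minima).
# Each agent takes a random step biased toward the best-known point;
# the perturbation size is annealed over the iterations.

random.seed(3)

def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v)
                             for v in x)

agents = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
best = min(agents, key=rastrigin)
step = 1.0
for it in range(300):
    for i, a in enumerate(agents):
        # stochastic operator: drift toward best + Gaussian exploration
        cand = [ai + 0.5 * (bi - ai) * random.random()
                + step * random.gauss(0, 1)
                for ai, bi in zip(a, best)]
        if rastrigin(cand) < rastrigin(a):       # greedy acceptance
            agents[i] = cand
    best = min(agents + [best], key=rastrigin)
    step *= 0.99                                  # anneal perturbations
print("best value found:", round(rastrigin(best), 3))
```

Dropping either ingredient degrades the search: without the noise term, agents collapse onto the first good basin; without multiple agents, a single unlucky start can strand the search, which mirrors the thesis's argument for multi-agent stochastic level set fronts.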
APA, Harvard, Vancouver, ISO, and other styles
32

Hom, Geoffrey Kai Tong. "Advances in Computational Protein Design: Development of More Efficient Search Algorithms and their Application to the Full-Sequence Design of Larger Proteins." Thesis, 2005. https://thesis.library.caltech.edu/2303/1/gh_thesis_5_30_05.pdf.

Full text
Abstract:

Protein design is the art of choosing an amino acid sequence that will fold into a desired structure. Computational protein design aims to quantify and automate this process. In computational protein design, various metrics may be used to calculate an energy score for a sequence with respect to a desired protein structure. An ongoing challenge is to find the lowest-energy sequences from amongst the vast multitude of sequence possibilities. A variety of exact and approximate algorithms may be used in this search.

The work in this thesis focuses on the development and testing of four search algorithms. The first algorithm, HERO, is an exact algorithm, meaning that it will always find the lowest-energy sequence if the algorithm converges. We show that HERO is faster than other exact algorithms and converges on some previously intractable designs. The second algorithm, Vegas, is an approximate algorithm, meaning that it may not find the lowest-energy sequence. We show that, under certain conditions, Vegas finds the lowest-energy sequence in less time than HERO. The third algorithm, Monte Carlo, is an approximate algorithm that had been developed previously. We tested whether Monte Carlo was thorough enough to do a challenging computational design: the full-sequence design of a protein. Monte Carlo didn’t find the lowest-energy sequence, although a similar sequence from Vegas folded into the desired structure. Several biophysical methods suggested that the Monte Carlo sequence should also fold into the desired structure. Nevertheless, the Monte Carlo structure as determined by X-ray crystallography was markedly different from the predicted structure. We attribute this discrepancy to the presence of a high concentration of dioxane in the crystallization conditions. The fourth algorithm, FC_FASTER, is an approximate algorithm for designs of fixed amino acid composition. Such designs may accelerate improvements to the physical model. We show that FC_FASTER finds lower-energy sequences and is faster than our current fixed-composition algorithm.
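The kind of approximate search this chapter evaluates can be illustrated with a toy Metropolis Monte Carlo walk over discrete sequences. The alphabet is the 20 amino acids, but the "energy function" and target sequence are made-up stand-ins for a real protein design score, and the annealing schedule is arbitrary:

```python
import math
import random

# Toy Metropolis Monte Carlo over sequences: propose a single-site
# mutation, always accept downhill moves, accept uphill moves with
# probability exp(-beta * dE), and slowly increase beta (cool).

random.seed(4)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"      # the 20 amino acids
TARGET = "MKTAYIAKQR"                  # hypothetical optimal sequence

def energy(seq):
    # stand-in score: number of positions differing from the optimum
    return sum(a != b for a, b in zip(seq, TARGET))

seq = [random.choice(ALPHABET) for _ in TARGET]
e = energy(seq)
for step in range(20000):
    beta = 0.5 + step * 10 / 20000              # annealing schedule
    pos = random.randrange(len(seq))
    old = seq[pos]
    seq[pos] = random.choice(ALPHABET)
    e_new = energy(seq)
    if e_new <= e or random.random() < math.exp(-beta * (e_new - e)):
        e = e_new                               # accept the mutation
    else:
        seq[pos] = old                          # reject: restore
print("final energy:", e, "sequence:", "".join(seq))
```

Unlike HERO, such a walk carries no certificate that the result is the global minimum; it only becomes increasingly likely to sit in a low-energy state as beta grows, which is exactly the thoroughness question the full-sequence design experiment probes.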

APA, Harvard, Vancouver, ISO, and other styles
33

Iglesias, Martínez Miguel Enrique. "Development of algorithms of statistical signal processing for the detection and pattern recognitionin time series. Application to the diagnosis of electrical machines and to the features extraction in Actigraphy signals." Doctoral thesis, 2020. http://hdl.handle.net/10251/145603.

Full text
Abstract:
[EN] Nowadays, the development and application of algorithms for pattern recognition that improve the levels of performance, detection and data processing in different areas of knowledge is a topic of great interest. In this context, and specifically in relation to the application of these algorithms to the monitoring and diagnosis of electrical machines, the use of stray flux signals is a very interesting alternative to detect the different faults. Likewise, and in relation to the use of biomedical signals, it is of great interest to extract relevant features in actigraphy signals for the identification of patterns that may be associated with a specific pathology. In this thesis, algorithms based on statistical and spectral signal processing have been developed and applied to the detection and diagnosis of failures in electrical machines, as well as to the treatment of actigraphy signals. With the development of the proposed algorithms, it is intended to have a dynamic indication and identification system for detecting the failure or associated pathology that does not depend on parameters or external information that may condition the results, but only rely on the primary information that initially presents the signal to be treated (such as the periodicity, amplitude, frequency and phase of the sample). From the use of the algorithms developed for the detection and diagnosis of failures in electrical machines, based on the statistical and spectral signal processing, it is intended to advance, in relation to the models currently existing, in the identification of failures through the use of stray flux signals. In addition, and on the other hand, through the use of higher order statistics for the extraction of anomalies in actigraphy signals, alternative parameters have been found for the identification of processes that may be related to specific pathologies.
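The higher-order-statistics idea for actigraphy feature extraction can be illustrated with windowed kurtosis on a synthetic activity-like signal: impulsive bursts drive the fourth moment far above the Gaussian baseline. The signal, window size, and threshold below are invented for illustration, not the thesis's data or estimators:

```python
import random

# Flag anomalous segments of a time series by windowed kurtosis.
# Background: unit Gaussian noise (kurtosis ~ 3). An impulsive burst
# of spikes is injected into one window and should exceed the threshold.

random.seed(5)
signal = [random.gauss(0, 1) for _ in range(600)]
for i in range(300, 305):                 # inject 5 large spikes
    signal[i] += random.choice([-8, 8])

def kurtosis(x):
    """Biased sample kurtosis (fourth moment / squared variance)."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / (n * var ** 2)

flags = []
for start in range(0, len(signal), 100):  # non-overlapping windows
    window = signal[start:start + 100]
    flags.append(kurtosis(window) > 6.0)  # Gaussian windows sit near 3
print("anomalous windows:", [i for i, f in enumerate(flags) if f])
```

Because kurtosis responds to tail heaviness rather than amplitude alone, it can separate impulsive activity patterns from ordinary level shifts; skewness plays the complementary role for asymmetric behavior.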
Iglesias Martínez, ME. (2020). Development of algorithms of statistical signal processing for the detection and pattern recognition in time series. Application to the diagnosis of electrical machines and to the features extraction in Actigraphy signals [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/145603
34

McNeany, Scott Edward. "Characterizing software components using evolutionary testing and path-guided analysis." 2013. http://hdl.handle.net/1805/3775.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Evolutionary testing (ET) techniques (e.g., mutation, crossover, and natural selection) have been applied successfully to many areas of software engineering, such as error/fault identification, data mining, and software cost estimation. Previous research has also applied ET techniques to performance testing. Its application to performance testing, however, only goes as far as finding the best- and worst-case execution times. Although such performance testing is beneficial, it provides little insight into the performance characteristics of complex functions with multiple branches. This thesis therefore provides two contributions towards performance testing of software systems. First, this thesis demonstrates how ET and genetic algorithms (GAs), which are search heuristic mechanisms for solving optimization problems using mutation, crossover, and natural selection, can be combined with a constraint solver to target specific paths in the software. Second, this thesis demonstrates how such an approach can identify local minimum and maximum execution times, which provide a more detailed characterization of software performance. The results from applying our approach to example software applications show that it is able to characterize different execution paths in relatively short amounts of time. This thesis also examines a modified exhaustive approach which can be plugged in when the constraint solver cannot properly provide the information needed to target specific paths.
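The combination of mutation, crossover, and selection with a per-path constraint that this abstract describes can be sketched in Python as follows. The cost function, branch predicate, and GA parameters below are invented for illustration: `simulated_cost` stands in for real execution-time measurements, and the `branch` filter stands in for the constraint solver that keeps the search on one path.

```python
import random

def simulated_cost(x):
    """Toy stand-in for a function's execution time: each branch
    has its own cost profile, so each branch has a local optimum."""
    if x < 50:
        return 100 - abs(x - 20)   # branch A: local max at x = 20
    return 150 - abs(x - 80)       # branch B: local max at x = 80

def ga_max(cost, lo, hi, branch, pop_size=30, gens=60, seed=1):
    """Tiny GA restricted to inputs satisfying `branch`, so the
    search characterizes one execution path at a time."""
    rng = random.Random(seed)
    pop = [x for x in (rng.randint(lo, hi) for _ in range(10 * pop_size))
           if branch(x)][:pop_size]
    for _ in range(gens):
        pop.sort(key=cost, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the fittest
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                # crossover: blend parents
            child += rng.randint(-3, 3)         # mutation: small jitter
            child = max(lo, min(hi, child))
            if branch(child):                   # constraint: stay on the path
                children.append(child)
        pop = parents + children
    return max(pop, key=cost)

# Characterize each branch separately, as in path-guided analysis.
best_a = ga_max(simulated_cost, 0, 100, lambda x: x < 50)
best_b = ga_max(simulated_cost, 0, 100, lambda x: x >= 50)
print(best_a, simulated_cost(best_a))  # local maximum on branch A
print(best_b, simulated_cost(best_b))  # local maximum on branch B
```

Run per branch, the search recovers a separate local maximum for each path rather than a single global worst case, which is the kind of per-path characterization the thesis argues for.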
