
Dissertations / Theses on the topic 'Componenti discreti'


Consult the top 50 dissertations / theses for your research on the topic 'Componenti discreti.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Della Chiesa, Enrico. "Progetto a componenti discreti di un circuito wake-up radio in ambito ultra-low power." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The main objective in designing a Wireless Sensor Network (WSN) for ultra-low-power applications is to minimize energy consumption, in order to extend battery life or even remove the battery entirely, making the system self-sufficient. Most of the energy in these systems is consumed by the receiver, which stays on while waiting for a communication to begin. Managing when the receiver is switched on turns out to be the best technique for lowering energy consumption. One circuit designed to manage the receiver is the Wake-up Radio (WuR). A wake-up radio is a low-power circuit, on the order of microwatts, that monitors the radio channel and activates the main receiver only when a wake-up signal is detected; this signal may be as simple as the presence of a carrier at a given frequency or may include an address. This thesis presents the design of a wake-up radio built from discrete components. In particular, the focus is on implementing an addressing network using the most efficient components available on the market. First, the specifications of the addressing network and its possible implementations are defined; component selection, logic synthesis and, finally, power-consumption analysis follow. The final result is a consumption of 2.1 µW in the idle state and 417 µW while decoding the address, obtained with a 10 kHz clock frequency. Considering an address transmission bit rate of 7 kbit/s and a maximum wake-up time of 60 ms, the average power required to operate the network is 12 µW, comparable to that obtained in other designs in the literature.
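As a rough check on the power figures quoted in this abstract, the duty-cycle average of the idle and decode power levels can be worked out in a few lines; the address length assumed below is hypothetical, since the abstract does not state it.

```python
# Hedged sketch: duty-cycle average power for the wake-up addressing network.
# The 8-bit address length is an assumption for illustration; the abstract
# only gives the idle/decode power, clock, bit rate and wake-up interval.
P_IDLE_UW = 2.1          # idle-state power (uW), from the abstract
P_DECODE_UW = 417.0      # address-decoding power (uW), from the abstract
BIT_RATE = 7_000.0       # address bit rate (bit/s), from the abstract
WAKEUP_PERIOD_S = 0.060  # maximum wake-up time (s), from the abstract
ADDRESS_BITS = 8         # assumed address length (hypothetical)

decode_time_s = ADDRESS_BITS / BIT_RATE    # time spent decoding per wake-up window
duty = decode_time_s / WAKEUP_PERIOD_S     # fraction of time in the decode state
p_avg = duty * P_DECODE_UW + (1 - duty) * P_IDLE_UW
print(f"duty cycle = {duty:.3%}, average power = {p_avg:.1f} uW")
# With an 8-bit address this gives roughly 10 uW, the same order of magnitude
# as the 12 uW reported; the exact value depends on the real address length.
```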
APA, Harvard, Vancouver, ISO, and other styles
2

Bagshaw, Richard William. "Production data analysis for discrete component manufacture." Thesis, Loughborough University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

England, Dean. "Operational planning of discrete component manufacturing lines." Thesis, Loughborough University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.416182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kilian, Stephanie L. "Coordination of Continuous and Discrete Components of Action." Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1403047071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Le, Hanh T. Banking & Finance, Australian School of Business, UNSW. "Discrete PCA: an application to corporate governance research." Awarded by: University of New South Wales. Banking & Finance, 2007. http://handle.unsw.edu.au/1959.4/40753.

Full text
Abstract:
This thesis introduces the application of discrete Principal Component Analysis (PCA) to corporate governance research. Given the presence of many discrete variables in typical governance studies, I argue that this method is superior to standard PCA that has been employed by others working in the area. Using a dataset of 244 companies listed on the London Stock Exchange in the year 2002-2003, I find that Pearson's correlations underestimate the strength of association between two variables, when at least one of them is discrete. Accordingly, standard PCA performed on the Pearson correlation matrix results in biased estimates. Applying discrete PCA on the polychoric correlation matrix, I extract from 28 corporate governance variables 10 significant factors. These factors represent 8 main aspects of the governance system, namely auditor reputation, large shareholder influence, size of board committees, social responsibility, risk optimisation, director independence level, female representation and institutional ownership. Finally, I investigate the relationship between corporate governance and a firm's long-run share market performance, with the former being the factors extracted. Consistent with Demsetz' (1983) argument, I document limited explanatory power for these governance factors.
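For context on the mechanics behind this entry, once a polychoric correlation matrix has been estimated (the step that distinguishes discrete PCA from standard PCA), component extraction is an ordinary eigendecomposition; the sketch below assumes such a matrix is already available and uses illustrative values only.

```python
import numpy as np

def pca_from_correlation(R, n_factors):
    """Extract principal components from a correlation matrix R.

    R is assumed to be a symmetric polychoric (or Pearson) correlation matrix
    estimated elsewhere; this sketch only performs the eigendecomposition step
    that 'discrete PCA' shares with standard PCA.
    """
    eigvals, eigvecs = np.linalg.eigh(R)           # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # sort by variance explained
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
    explained = eigvals[:n_factors] / eigvals.sum()
    return loadings, explained

# Toy 3-variable correlation matrix (illustrative values only).
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.2],
              [0.3, 0.2, 1.0]])
loadings, explained = pca_from_correlation(R, n_factors=2)
print(loadings.round(2), explained.round(2))
```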
APA, Harvard, Vancouver, ISO, and other styles
6

Magnusson, Alexander, and David Pantzar. "Integrating 5G Components into a TSN Discrete Event Simulation Framework." Thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54552.

Full text
Abstract:
TSN has for many years been the staple of reliable communication over traditional switched Ethernet and has been used to advance the industrial automation sector. However, TSN is not mobile, which is needed to fully enable Industry 4.0. The development of 5G, with its promised URLLC, combined with TSN would give a heterogeneous network that is both mobile and reliable. The 3GPP has suggested different designs for a 5G and TSN integration. This thesis investigates the different proposed integration designs. Besides the integration design, one of the most essential steps towards validating the integration is to evaluate TSN-5G networks through simulation. Currently, this simulation environment is missing. The investigation in this thesis shows that the most exhaustive work has been done on the Logical TSN Bridge design for simulators such as those based on OMNeT++. Capabilities of the simulator itself are also investigated, where aspects such as the lack of a 5G medium and clock synchronization are presented. In this thesis, we implement the 5G-TSN component as a translator that sets different 5G channel parameters depending on the Ethernet packet's priority and its corresponding value. To verify the functionality of the translator developed within the simulator, it is tested in a use case inspired by the vehicle industry, containing both TSN and 5G devices. Results from the use case indicate that the translation is performed correctly.
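The translator described in this abstract is, at its core, a lookup from Ethernet frame priority to 5G channel parameters; a minimal sketch of that idea is shown below, with an entirely hypothetical priority-to-parameter table rather than the mapping used in the thesis.

```python
# Hedged sketch of a TSN-to-5G "translator": map the Ethernet PCP priority
# of a frame to a set of 5G channel parameters. The table below is
# hypothetical; the thesis derives its own mapping inside OMNeT++.
from dataclasses import dataclass

@dataclass
class FiveGChannelConfig:
    numerology: int         # subcarrier-spacing index
    priority_level: int     # scheduling priority (lower = more urgent)
    delay_budget_ms: float  # packet delay budget

PCP_TO_5G = {
    7: FiveGChannelConfig(numerology=3, priority_level=1, delay_budget_ms=1.0),
    6: FiveGChannelConfig(numerology=3, priority_level=2, delay_budget_ms=2.0),
    5: FiveGChannelConfig(numerology=2, priority_level=3, delay_budget_ms=5.0),
}
DEFAULT = FiveGChannelConfig(numerology=1, priority_level=7, delay_budget_ms=100.0)

def translate(pcp: int) -> FiveGChannelConfig:
    """Return the 5G channel configuration for a frame with the given PCP."""
    return PCP_TO_5G.get(pcp, DEFAULT)

print(translate(7))
```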
APA, Harvard, Vancouver, ISO, and other styles
7

Faggiani, Robson Brino. "Análise de componentes de um tutorial computadorizado para ensinar a realização de tentativas discretas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/47/47132/tde-27032015-124725/.

Full text
Abstract:
ABA Therapy is the treatment for autistic people that has been presenting the best results. The most commonly used teaching arrangement is the discrete trial, which has been taught to professionals through Behavioral Skills Training, a teacher-dependent package. Computer-based teaching has been studied as an alternative because it is more affordable. The goal of the present study was to investigate the effect of different components of a computer-based tutorial, ECoTed, on the performance of participants in the implementation of discrete trials of identity matching and motor imitation. Three experiments were conducted. In Experiment I, six Psychology students were exposed to a tutorial with four experimental conditions, or kinds of teaching: theoretical teaching, video-modeling, observation of corrections and error identification. After the baseline, the participants went through theoretical teaching. Then, they were divided into three groups; each of them went through the remaining conditions in different orders. If the participant fulfilled the 100% correct responses criterion in any of the tests that took place after each condition, s/he advanced to the follow-up phase; if the criterion was not reached after the tutorial, the participant was directly taught by the experimenter. All the data were collected in a setting in which the participant implemented discrete trials with an actor. A multiple baseline design was used in each group. Motor imitation was not taught; however, before each test the participants were allowed to study a summary sheet, which listed all the steps for the implementation of both kinds of discrete trials. Five participants had more than 90% correct responses after theoretical teaching and 100% correct responses after the other experimental conditions. Results were similar in the follow-up. In other studies, participants reached 90% correct responses after having been through video-modeling; in the present study, participants reached that result after theoretical teaching only. In order to verify whether the education level was relevant, Experiment II was conducted. A non-university, non-graduate participant went through the same conditions as the participants from Group 1 of Experiment I. His results were similar to the performance of the participants of the first experiment. Experiment III was conducted to investigate the effectiveness of the theoretical teaching animations. Two participants were exposed to the following conditions: theoretical teaching without animation, theoretical teaching and video-modeling. Both participants had more than 80% correct responses in the identity matching task after theoretical teaching without animation, which suggests that this variable was not relevant. Eight of the nine participants of the three experiments learned how to implement both kinds of discrete trials after ECoTed, which suggests its effectiveness. Given that these participants had more than 80% correct responses after the theoretical teaching, it was not possible to evaluate the effectiveness of the other kinds of teaching. The results after theoretical teaching may be linked to a lower demand on the behavior of participants in comparison to other studies. The organization of the theoretical teaching, in which the concepts were defined while their application was shown, might have produced the results obtained. New studies might investigate the effectiveness of ECoTed with parents of autistic children and the performance of participants when directly teaching autistic children.
APA, Harvard, Vancouver, ISO, and other styles
8

Alhaji, Bukar Baba Bukar. "Bayesian analysis for mixtures of discrete distributions with a non-parametric component." Thesis, University of Essex, 2016. http://repository.essex.ac.uk/16759/.

Full text
Abstract:
Bayesian finite mixture modelling is a flexible parametric modelling approach for classification and density fitting. Many application areas require distinguishing a signal from a noise component. In practice, it is often difficult to justify a specific distribution for the signal component; therefore the signal distribution is usually further modelled via a mixture of distributions. However, modelling the signal as a mixture of distributions is computationally challenging due to the difficulties in justifying the exact number of components to be used and due to the label-switching problem. The use of a non-parametric distribution to model the signal component is proposed. This new methodology leads to more accurate parameter estimation, a smaller classification error rate and a smaller false non-discovery rate in the case of discrete data. Moreover, it does not incur the label-switching problem. An application of the method to data generated by ChIP-sequencing experiments is shown. A one-dimensional Markov random field model is proposed, which accounts for the spatial dependencies in the data. The methodology is also applied to ChIP-seq data, showing that the new method detected more gene-enriched regions than similar existing methods at the same false discovery rate.
APA, Harvard, Vancouver, ISO, and other styles
9

Bellini, Edmundo F. "Approximate interval estimation methods for the reliability of systems using discrete component data." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA241375.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 1990.
Thesis Advisor(s): Wood, W. Max. Second Reader: Larson, Harold J. "September 1990." Description based on title screen viewed on December 16, 2009. DTIC Descriptor(s): Methodology, coherence, accuracy, Monte Carlo method, cycles, estimates, reliability, approximation (mathematics), statistical distributions, equations, confidence level, confidence limits, intervals, Poisson density functions, binomials. DTIC Identifier(s): Statistical inference, estimates, theses, chi-square tests, binomials. Author(s) subject terms: Binomial, system reliability, chi-square statistic. Includes bibliographical references (p. 73). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
10

Teixeira, Alan. "Disclosure Rules, Manager Discretion and the Relative Informativeness of Earnings Components." Thesis, University of Auckland, 2001. http://hdl.handle.net/2292/2401.

Full text
Abstract:
This is a study of earnings quality, examining whether components of earnings based on New Zealand (N.Z.) accounting classification systems have different information parameters. The N.Z. environment provides a unique opportunity to examine a period with no legislative backing of accounting standards and a flexible accounting standard. Combined, this gave managers the ability to clearly identify earnings components they considered to be differentially informative. Informativeness is assessed by the ability of current period earnings to predict next period earnings and the contemporaneous relation between returns and earnings. The results indicate that disaggregated reported earnings are more informative than aggregated earnings in a non-trivial way. In one of the sample periods disaggregated earnings explained 29% of the variance in returns, more than twice the explanatory power of aggregated earnings. N.Z. accounting standard setters replaced SSAP7 with FRS7 in 1994, contending that the discretion available to managers reduced the informativeness of earnings. Not only do the results not support that contention, but earnings informativeness has fallen since FRS7 came into effect, suggesting that standard setters should revisit that decision. The results also have implications for the content and form of the N.Z. Stock Exchange (NZSE) preliminary announcement. "Unusual earnings" reported to the NZSE by companies are shown to be differentially informative to investors, yet the NZSE does not always identify these components when the preliminary announcement is summarised and disseminated to market participants. To summarise, the effective codification of earnings brought about by FRS7 has reduced the informativeness of earnings – locking differences between components into total earnings. The N.Z. results beg the question as to whether similar economic events are locked into the COMPUSTAT summary earnings variables for U.S. data.
APA, Harvard, Vancouver, ISO, and other styles
11

Cadavid, Cadavid Juan Manuel. "Discrete-Event Simulation: Development of a simulation project for Cell 14 at Volvo CE Components." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-6162.

Full text
Abstract:

In line with the company-wide CS09 project being carried out at Volvo CE Components, Cell 14 will undergo changes to the distribution of machines and parts routing in order to meet the established lean manufacturing goals. These changes are of course dependent on future production volumes, as well as on lot sizing and material handling considerations.

In this context, particular emphasis is given to awareness of the performance measures that support decision making in these production development projects. By using simulation as a confirmation tool, it is possible to re-assess these measures by testing the impact of changes in complex situations, in line with lean manufacturing principles.

The aim of the project is to develop a discrete event simulation model following the methodology proposed by Banks et al. (1999). A model of Cell 14 will be built using the Tecnomatix Plant Simulation® software used by the company, and the results from the simulation study will be analyzed.

APA, Harvard, Vancouver, ISO, and other styles
12

Bhardwaj, Divya Anshu. "Inverse Discrete Cosine Transform by Bit Parallel Implementation and Power Comparision." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2447.

Full text
Abstract:

The goal of this project was to implement and compare the Inverse Discrete Cosine Transform using three methods, i.e. bit parallel, digit serial and bit serial. This report describes a one-dimensional Discrete Cosine Transform implemented by the bit-parallel method in a 0.35 µm technology. When implementing the design, several considerations, such as word length, were taken into account. The code was written in VHDL and some of the calculations were done in MATLAB. The VHDL code was then synthesized using Synopsys Design Analyzer; power was calculated and the results were compared.
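For context on the transform being implemented in this thesis, a one-dimensional DCT and its inverse can be expressed in a few lines; the sketch below uses floating-point SciPy routines rather than the fixed-point, bit-parallel hardware arithmetic studied in the work.

```python
import numpy as np
from scipy.fft import dct, idct  # type-II DCT and its inverse

# Illustrative 8-point block, as used in many image/video codecs.
x = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)

X = dct(x, type=2, norm="ortho")       # forward 1-D DCT
x_rec = idct(X, type=2, norm="ortho")  # inverse DCT recovers the input

print(np.allclose(x, x_rec))  # True: the transform pair is lossless in floating point
```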

APA, Harvard, Vancouver, ISO, and other styles
13

Mosallam, Ahmed. "Remaining useful life estimation of critical components based on Bayesian Approaches." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2069/document.

Full text
Abstract:
Constructing prognostics models relies upon understanding the degradation process of the monitored critical components to correctly estimate the remaining useful life (RUL). Traditionally, a degradation process is represented in the form of physical or expert models. Such models require extensive experimentation and verification that are not always feasible in practice. Another approach, which builds up knowledge about the system degradation over time from component sensor data, is known as data-driven. Data-driven models require that sufficient historical data have been collected. In this work, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method builds on finding variables that contain information about the degradation behavior using an unsupervised variable selection method. Different health indicators (HI), which represent the degradation as a function of time, are constructed from the selected variables and saved in the offline database as reference models. In the online phase, the method estimates the degradation state using a discrete Bayesian filter. The method finally finds the offline health indicator most similar to the online one, using a k-nearest neighbors (k-NN) classifier and Gaussian process regression (GPR), and uses it as a RUL estimator. The method is verified using PRONOSTIA bearing data as well as battery and turbofan engine degradation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL.
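To make the online matching step concrete, the sketch below selects the offline health indicator most similar to the current one with a simple nearest-neighbour rule and reads the remaining life off that reference curve; the data, the distance measure and the omission of the Bayesian filtering and GPR stages are all simplifications relative to the thesis.

```python
import numpy as np

# Offline library: health-indicator (HI) trajectories recorded until failure,
# one per training component (synthetic, illustrative values).
offline_his = {
    "bearing_A": np.linspace(1.0, 0.0, 200),  # degrades over 200 time steps
    "bearing_B": np.linspace(1.0, 0.0, 150),
}

def estimate_rul(online_hi: np.ndarray):
    """1-nearest-neighbour RUL estimate from the most similar offline HI."""
    best_dist, best_rul = np.inf, None
    n = len(online_hi)
    for ref in offline_his.values():
        if len(ref) < n:
            continue
        dist = np.linalg.norm(ref[:n] - online_hi)  # similarity over the observed prefix
        if dist < best_dist:
            best_dist = dist
            best_rul = len(ref) - n                 # steps left on the reference curve
    return best_rul

print(estimate_rul(np.linspace(1.0, 0.5, 100)))  # ~100 steps left if bearing_A matches
```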
APA, Harvard, Vancouver, ISO, and other styles
14

Soares, António José Rodrigues. "Modelação e simulação do aprovisionamento de refinarias de petróleo bruto." Master's thesis, Instituto Superior de Economia e Gestão, 1995. http://hdl.handle.net/10400.5/12336.

Full text
Abstract:
Master's degree in Mathematics Applied to Economics and Management
This work presents a study of the crude-oil supply problem of refineries. Two subsystems are modelled, describing, respectively, the unloading operations of crude-oil tankers and the filling of the storage tanks, and the supply operations of the refineries. The order-placement process is also modelled. The resulting model was simulated using the general-purpose simulation language SLAM II, which allows the inclusion of continuous-simulation components essential to modelling some aspects of the problem. The simulation model developed makes it possible to draw conclusions about the management of the logistical resources required by the operations described in the two subsystems, and also to test scenarios for alternative supply policies with respect to aspects such as purchase locations, the type of ship used for transport and the type of crude to acquire. Finally, an application of the model to the calculation of chartering costs when the type of ship used to transport the crude oil is varied is presented, together with the computational results obtained.
APA, Harvard, Vancouver, ISO, and other styles
15

Braff, Emily. "A Comparison of a Matrix Programming and Standard Discrete Trial Training Format to Teach Two-Component Tacts." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4445.

Full text
Abstract:
Teaching using matrix programming has been shown to result in recombinative generalization. However, this procedure has not been compared to more standard discrete trial training formats such as DTT. This study compared acquisition and recombinative generalization of two-component tacts using each procedure. Matrix training was found to be more efficient than the DTT format. Half the amount of teaching was required to teach roughly the same number of targets using matrix training as compared to DTT.
APA, Harvard, Vancouver, ISO, and other styles
16

Amrani, Naoufal, Joan Serra-Sagrista, Miguel Hernandez-Cabronero, and Michael Marcellin. "Regression Wavelet Analysis for Progressive-Lossy-to-Lossless Coding of Remote-Sensing Data." IEEE, 2016. http://hdl.handle.net/10150/623190.

Full text
Abstract:
Regression Wavelet Analysis (RWA) is a novel wavelet-based scheme for coding hyperspectral images that employs multiple regression analysis to exploit the relationships among spectral wavelet-transformed components. The scheme is based on a pyramidal prediction, using different regression models, to increase the statistical independence in the wavelet domain. For lossless coding, RWA has proven to be superior to other spectral transforms such as PCA and to the best and most recent coding standard in remote sensing, CCSDS-123.0. In this paper we show that RWA also allows progressive lossy-to-lossless (PLL) coding and that it attains a rate-distortion performance superior to those obtained with state-of-the-art schemes. To take into account the predictive significance of the spectral components, we propose a Prediction Weighting scheme for JPEG2000 that captures the contribution of each transformed component to the prediction process.
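The core operation in RWA is predicting one group of transformed components from another by multiple regression, so that only small residuals need to be coded; a generic least-squares version of that step is sketched below, without the pyramidal structure or the JPEG2000 weighting proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative stand-ins for wavelet-transformed spectral components:
# rows = pixels, columns = components of a "parent" and a "child" scale.
parent = rng.normal(size=(1000, 4))
child = parent @ rng.normal(size=(4, 2)) + 0.01 * rng.normal(size=(1000, 2))

# Multiple linear regression: predict the child components from the parents.
coeffs, *_ = np.linalg.lstsq(parent, child, rcond=None)
residual = child - parent @ coeffs  # only the residual needs to be coded

print(residual.std(), child.std())  # the residual is far smaller, hence compressible
```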
APA, Harvard, Vancouver, ISO, and other styles
17

Rabak, Cesar Scarpini. "Otimização do processo de inserção automática de componentes eletrônicos empregando a técnica de times assíncronos." Universidade de São Paulo, 1999. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-24062005-190210/.

Full text
Abstract:
Component inserting machines are employed in the modern electronics industry for the automatic assembly of printed circuit boards. Due to the fierce competition, there is a need to exploit every opportunity to reduce costs and increase productivity in the use of this equipment. In this work we propose an optimization procedure for the insertion process of the Panasonic AVK inserting machine, implemented in a system based on asynchronous teams (A-Teams). Tests were conducted using both printed circuit boards from an industry in the sector and synthetic problems to evaluate the performance of the system.
APA, Harvard, Vancouver, ISO, and other styles
18

Mahadevan, Anandi. "Real Time Ballistocardiogram Artifact Removal in EEG-fMRI Using Dilated Discrete Hermite Transform." University of Akron / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=akron1226235813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Gueye, Soguy Mak-Karé. "Coordination modulaire de gestionnaires autonomes par contrôle discret." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM097/document.

Full text
Abstract:
Computing systems have become more and more distributed and heterogeneous, making their manual administration difficult and error-prone. The Autonomic Computing approach has been proposed to overcome this issue by automating the administration of computing systems with the help of control loops called autonomic managers. Many research works have investigated the automation of the administration functions of computing systems, and today many autonomic managers are available. However, the existing autonomic managers are mostly specialized in the management of a few administration concerns. This makes the coexistence of multiple autonomic managers necessary for achieving global system management. The coexistence of several managers makes it possible to address multiple concerns, yet requires coordination mechanisms to avoid incoherent management decisions. We investigate the use of control techniques for the design of coordination controllers, for which we exercise synchronous programming, which provides formal semantics, and discrete controller synthesis to automate the construction of the controller. We follow a component-based approach and explore modular discrete control, allowing us to break down the combinatorial complexity inherent to the state-space exploration technique. This improves the scalability of the approach and allows constructing a hierarchical control. It also allows re-using complex managers in different contexts without modifying their control specifications. We build a component-based coordination of managers, with introspection, adaptivity and reconfiguration. This thesis details our methodology and presents case studies. We evaluate and demonstrate the benefits of our approach by coordinating autonomic managers which address the management of availability, and the management of performance and resource optimization.
APA, Harvard, Vancouver, ISO, and other styles
20

Kulac, Oray. "A comparative analysis of active and passive sensors in anti-air warfare area defense using discrete event simulation components." Thesis, Monterey, California: Naval Postgraduate School, 1999. http://hdl.handle.net/10945/13620.

Full text
Abstract:
Anti-air warfare (AAW) has been a top priority for the world's navies in developing tactics and choosing the most effective ship defense systems. Analyses of such extremely complex system behaviors require the utilization of innovative tools that are flexible, scalable and reusable. This thesis develops a model as an analysis tool to measure the effectiveness of radar and IR sensors in AAW area defense. The model is designed to support reuse and to provide easy model configuration, flexibility and scale changes. A component-based simulation approach was adopted for this model using the Java™ programming language to provide the necessary scalability and flexibility. The MODKIT approach was used as the architecture for component designs, and SIMKIT was used for discrete event simulation purposes. In addition, a small combat component library was constructed for future research. To demonstrate the analysis capability of the model, a comparative analysis was conducted for radar and IR sensors in AAW area defense. The results of the simulation runs indicate that the model provides a good capability for aiding decision making, including effectiveness analysis, parameter sensitivity analysis, and exploratory analysis.
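For readers unfamiliar with the discrete event simulation style such components follow, the essential mechanism is a time-ordered event queue; the minimal loop below is a generic illustration and is not the MODKIT/SIMKIT architecture itself.

```python
import heapq

def run_simulation(initial_events, end_time):
    """Minimal discrete-event loop: pop the earliest event, execute it,
    and let handlers schedule follow-up events on the same queue."""
    queue = list(initial_events)         # items are (time, name, handler)
    heapq.heapify(queue)
    while queue:
        time, name, handler = heapq.heappop(queue)
        if time > end_time:
            break
        for new_event in handler(time):  # handlers return newly scheduled events
            heapq.heappush(queue, new_event)
        print(f"t={time:6.2f}  {name}")

# Example: a sensor that "detects" a target every 5 time units.
def detect(t):
    return [(t + 5.0, "detect", detect)]

run_simulation([(0.0, "detect", detect)], end_time=20.0)
```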
APA, Harvard, Vancouver, ISO, and other styles
21

McAdams, Ian. "DEVELOPMENT OF A DISCRETE COMPONENT PLATFORM TOWARDS LOW-POWER, WIRELESS, CONDUCTIVITY-CORRECTED, CONDUCTANCE-BASED BLADDER VOLUME ESTIMATION IN FELINES." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1560442028426129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Silva, Murilo da. "Implementação de um localizador de faltas híbrido para linhas de transmissão com três terminais baseado na transformada wavelet." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-11042008-110740/.

Full text
Abstract:
This work presents the study and development of a hybrid algorithm for fault detection, classification and location in three-terminal lines based on the wavelet transform (WT), used here in two versions: the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT). The algorithm is called hybrid because it combines two fault location methodologies: one based on fundamental components and the other based on traveling waves. The proposed methodology works either with synchronized data from the three terminals or with local data only. The hybrid fault locator automatically chooses which location technique to use in order to reach a reliable and accurate fault location. In this manner, one technique can compensate for the difficulties of the other, aiming to reach an optimized fault location. The proposed hybrid fault locator was evaluated using simulated fault signals obtained with the Alternative Transients Program (ATP). In the tests, several parameters that could influence the performance of the hybrid algorithm were varied, such as the fault inception angle, fault resistance and fault type. The results obtained with the proposed methodology are very encouraging and point to a very promising application.
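As background for the traveling-wave half of the hybrid scheme, the classic two-terminal location formula uses the difference between wavefront arrival times at the line ends; the numbers below are purely illustrative, and in the thesis the arrival instants are what the wavelet stage detects.

```python
# Hedged sketch: classic two-terminal travelling-wave fault location.
# The line length and propagation speed are illustrative assumptions.
LINE_LENGTH_KM = 200.0
V_KM_PER_US = 0.29979 * 0.98  # assumed propagation speed, ~98% of the speed of light

def fault_distance(t_a_us: float, t_b_us: float) -> float:
    """Distance from terminal A, from wavefront arrival times (us) at A and B."""
    return (LINE_LENGTH_KM + V_KM_PER_US * (t_a_us - t_b_us)) / 2.0

# Fault assumed 60 km from A: arrivals at ~204 us (A) and ~477 us (B) after the fault.
print(fault_distance(204.2, 476.5))  # ~60 km
```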
APA, Harvard, Vancouver, ISO, and other styles
23

El, Helou Rafic Gerges. "Multiscale Computational Framework for Analysis and Design of Ultra-High Performance Concrete Structural Components and Systems." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73381.

Full text
Abstract:
This research develops and validates computational tools for the design and analysis of structural components and systems constructed with Ultra-High Performance Concrete (UHPC). The modeling strategy utilizes the Lattice Discrete Particle Model (LDPM) to represent UHPC material and structural member response, and extends a structural-level triaxial continuum constitutive law to account for the addition of discrete fibers. The approach is robust, general, and could be utilized by other researchers to expand the computational capability and simulate the behavior of different composite materials. The work described herein identifies the model material parameters by conducting a complete material characterization for UHPC, with and without fiber reinforcement, describing its behavior in unconfined compression, uniaxial tension, and fracture toughness. It characterizes the effect of fiber orientations, fiber-matrix interaction, and resolves the issue of multi-axial stress states on fiber pullout. The capabilities of the computational models are demonstrated by comparing the material test data that were not used in the parameter identification phase to numerical simulations to validate the models' predictive capabilities. These models offer a mechanics-based shortcut to UHPC analysis that can strategically support ongoing development of material and structural design codes and standards.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Breeden, Ashley Nicole. "An Evaluation of Behavioral Skills Training with the Addition of a Fluency Component." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3015.

Full text
Abstract:
Behavioral Skills Training (BST) typically consists of an initial informational component presented to the learners vocally, through a handout, a presentation, or both. Results from the active student responding literature indicate that these methods are the least effective means of conveying important information to learners. This study sought to utilize an alternative instructional component, fluency training, and to evaluate whether any effects are observed on implementation of the behavior chain of Discrete Trial Training (DTT). Teachers had previous training and experience in implementing DTT prior to this study; however, all teachers implemented the strategies with low integrity. Teachers were trained to fluent levels on verbally stating the component steps of DTT and were then observed during probe sessions to evaluate the percentage of steps implemented correctly. The probes indicate an initial improvement, but decreases over time that are consistent with results of other passive in-service trainings. Teachers then took part in a single session of modeling, role-play, and feedback. Results suggest that while fluency training had an impact on participants' verbal performance on discrete trial information, and affected overt performance during subsequent probes, the effects were small and transient. Performance improved only after training on the components of BST and additional training had been completed in situ.
APA, Harvard, Vancouver, ISO, and other styles
25

Ghiglione, Viviana [Verfasser], Peter [Akademischer Betreuer] Gritzmann, Peter [Gutachter] Gritzmann, and Paolo [Gutachter] Dulio. "Switching Components in Discrete Tomography: Characterization, Constructions, and Number-Theoretical Aspects / Viviana Ghiglione ; Gutachter: Peter Gritzmann, Paolo Dulio ; Betreuer: Peter Gritzmann." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/118325928X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Guimarães, Thayso Silva. "Reconhecimento de face utilizando transformada discreta do cosseno bidimensional, análise de componentes principais bidimensional e mapas auto-organizáveis concorrentes." Universidade Federal de Uberlândia, 2010. https://repositorio.ufu.br/handle/123456789/14430.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The identification of a person by their face is one of the most effective non-intrusive methods in biometrics; however, it is also one of the greatest challenges for researchers in the area, involving research in psychophysics, neuroscience, engineering, pattern recognition, image analysis and processing, and computer vision applied to face recognition by humans and by machines. The algorithm proposed in this dissertation for face recognition was developed in three stages. In the first stage, feature matrices of the faces are derived using the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Two-Dimensional Principal Component Analysis (2D-PCA). The training of the Concurrent Self-Organizing Map (CSOM) is performed in the second stage using the feature matrices of the faces. Finally, in the third stage, the feature matrix of the query image is obtained and classified using the CSOM network of the second stage. To assess the performance of the face recognition algorithm proposed in this work, tests were carried out using three image databases well known in the image processing field: ORL, YaleA and Face94.
Master of Science
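To illustrate the kind of feature matrix the first stage of this algorithm produces, the sketch below keeps a small block of low-frequency 2D-DCT coefficients of an image; the block size and the synthetic image are arbitrary choices, and the 2D-PCA and CSOM stages are not reproduced.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(image: np.ndarray, keep: int = 8) -> np.ndarray:
    """Return the keep x keep block of low-frequency 2-D DCT coefficients.

    Low-frequency coefficients concentrate most of the face's energy and are
    a common compact feature matrix for recognition front-ends.
    """
    coeffs = dctn(image, type=2, norm="ortho")
    return coeffs[:keep, :keep]

# Synthetic 64x64 "face" stand-in; a real system would load a grayscale photo.
rng = np.random.default_rng(1)
image = rng.random((64, 64))
features = dct_features(image)
print(features.shape)  # (8, 8) feature matrix passed on to 2D-PCA / CSOM
```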
APA, Harvard, Vancouver, ISO, and other styles
27

Oliveira, Mario Orlando. "Proteção diferencial adaptativa de transformadores de potência baseada na análise de componentes wavelets." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/87355.

Full text
Abstract:
This work is based on the development and improvement of a methodology for the differential protection of power transformers. The proposed methodology evaluates transient events that hinder the correct operation of differential relays applied to transformer protection. The study establishes contributions to the state of the art related to the analysis of differential currents generated by internal and external faults and by transient disturbances. The conception of the proposed methodology was based on the variation of the spectral energy generated by each event, calculated from the detail coefficients of the Discrete Wavelet Transform. The proposed methodology was developed in the MATLAB® environment and tested through several simulations performed with the ATP/EMTP software (Alternative Transients Program / Electromagnetic Transients Program). The results of the research show the applicability of the protection algorithms, even in adverse conditions such as the saturation of current transformers.
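The discriminating quantity in this methodology is the spectral energy of the detail coefficients of the differential current; a minimal version of that computation using the PyWavelets package is sketched below, with an arbitrary wavelet family, decomposition level and test signal.

```python
import numpy as np
import pywt  # PyWavelets

def detail_energy(signal: np.ndarray, wavelet: str = "db4", level: int = 3):
    """Energy of each detail band of a multilevel DWT of the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # coeffs = [approximation, detail_level, ..., detail_1]
    return [float(np.sum(d ** 2)) for d in coeffs[1:]]

# Illustrative differential current: fundamental plus a short high-frequency
# burst, a crude stand-in for the transients the relay must discriminate.
fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
i_diff = np.sin(2 * np.pi * 60 * t)
i_diff[200:220] += 0.5 * np.sin(2 * np.pi * 2_000 * t[200:220])

print(detail_energy(i_diff))  # the burst inflates the energy of the finer detail bands
```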
APA, Harvard, Vancouver, ISO, and other styles
28

Tahlyan, Divyakant. "Performance Evaluation of Choice Set Generation Algorithms for Modeling Truck Route Choice: Insights from Large Streams of Truck-GPS Data." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7649.

Full text
Abstract:
This thesis evaluates truck route choice set generation algorithms and derives guidance on using the algorithms for effective generation of choice sets for modeling truck route choice. Specifically, route choice sets generated by a breadth-first-search link elimination (BFS-LE) algorithm are evaluated against observed truck routes derived from large streams of GPS traces of a sizeable truck fleet in the Tampa Bay region of Florida. A systematic evaluation approach is presented to arrive at an appropriate combination of spatial aggregation and minimum number of trips to be observed between each origin-destination (OD) location for evaluating algorithm-generated choice sets. The evaluation is based both on the ability to generate relevant routes that are typically considered by the travelers and on the generation of irrelevant (or extraneous) routes that are seldom chosen. Based on this evaluation, the thesis offers guidance on effectively using the BFS-LE approach to maximize the generation of relevant routes. It is found that carefully chosen spatial aggregation can reduce the need to generate a large number of routes for each trip. Further, estimation of route choice models and their subsequent application on validation datasets revealed that the benefits of spatial aggregation might be harnessed better if irrelevant routes are eliminated from the choice sets. Lastly, a comparison of the route attributes of the relevant and irrelevant routes sheds light on the presence of systematic differences in the characteristics of these routes.
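The link-elimination idea behind BFS-LE can be illustrated in a few lines: repeatedly remove the links of the current shortest path and re-solve, collecting the distinct routes found; the sketch below uses networkx on a toy graph and omits the breadth-first bookkeeping and the spatial aggregation discussed in the abstract.

```python
import networkx as nx

def link_elimination_routes(G, origin, destination, max_routes=5):
    """Generate a route choice set by repeated shortest-path link elimination."""
    routes = []
    work = G.copy()
    while len(routes) < max_routes:
        try:
            path = nx.shortest_path(work, origin, destination, weight="length")
        except nx.NetworkXNoPath:
            break
        if path not in routes:
            routes.append(path)
        # Eliminate the links of this path so the next search finds an alternative.
        work.remove_edges_from(zip(path[:-1], path[1:]))
    return routes

# Toy network: two parallel corridors between A and D.
G = nx.Graph()
G.add_weighted_edges_from(
    [("A", "B", 1), ("B", "D", 1), ("A", "C", 2), ("C", "D", 2)], weight="length"
)
print(link_elimination_routes(G, "A", "D"))
```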
APA, Harvard, Vancouver, ISO, and other styles
29

Pinheiro, Giselia Andrea Lopes. "Programação de ganho e deslocamento de nível cc para condicionamento de sinais de medição: Implementação com componentes discretos usando microcontrolador." Universidade Federal do Maranhão, 2004. http://tedebc.ufma.br:8080/jspui/handle/tede/362.

Full text
Abstract:
Programmable analog, digital and mixed-signal circuits can be used in many different applications. In instrumentation, in order to measure several quantities using different sensors, the conditioning circuit must be programmable to provide different gain and dc level shift values, so that the maximum A/D converter input span is used without causing saturation. A procedure for defining and applying the gain and dc level shift values that guarantees the full measurement range, with loss of resolution within acceptable limits, taking into consideration practical implementation aspects such as passive component values, is presented in this work. An architecture for implementing this circuit that supports both differential and single-ended modes of operation is proposed.
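The gain and dc level shift selection referred to in this abstract amounts to mapping the sensor's output span onto the converter's input span; a simple continuous-valued version of that calculation is sketched below, whereas a real design would round to the gains achievable with available passive components, which is where the resolution loss discussed above appears.

```python
def gain_and_offset(x_min, x_max, adc_min=0.0, adc_max=3.3):
    """Gain and dc level shift mapping [x_min, x_max] onto the ADC input span."""
    gain = (adc_max - adc_min) / (x_max - x_min)
    offset = adc_min - gain * x_min  # y = gain * x + offset
    return gain, offset

# Example: a sensor swinging between -0.2 V and +0.8 V into a 0-3.3 V converter.
g, off = gain_and_offset(-0.2, 0.8)
print(g, off)                         # gain = 3.3, offset = 0.66
print(g * -0.2 + off, g * 0.8 + off)  # maps to 0.0 V and 3.3 V as intended
```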
APA, Harvard, Vancouver, ISO, and other styles
30

Bartés, i. Serrallonga Manel. "Contribució a l'etapa de filtratge en l'estudi d'imatges de ressonància magnètica funcional. Aplicació a l'anàlisi d'una tasca d'atenció sostinguda." Doctoral thesis, Universitat de Vic - Universitat Central de Catalunya, 2014. http://hdl.handle.net/10803/285811.

Full text
Abstract:
The aim of this thesis is to contribute to finding methods for noise removal in functional magnetic resonance imaging that differ from the traditional ones and allow more information to be extracted during the analysis process. The first chapter introduces functional magnetic resonance imaging. The second chapter briefly explains the problems caused by high levels of noise in the data. The third chapter reviews the contributions made during this work. The fourth chapter briefly presents the research stays carried out. The fifth chapter focuses on solving the problem of the presence of noise in magnetic resonance images and proposes the use of different filtering techniques as a solution. Chapter six focuses on the results obtained with both artificial and experimental data and discusses their implications. Finally, the seventh chapter presents the conclusions.
APA, Harvard, Vancouver, ISO, and other styles
31

Li, You. "Multispektrální zpracování obrazu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-449408.

Full text
Abstract:
With the rapid development of multispectral imaging technology in recent decades, the images acquired by imaging systems no longer contain only the RGB colour bands of everyday photography, but also multispectral bands with high spatial resolution in the multispectral image data. As a result, the images carry rich information about the target regions of interest. Image fusion is an important branch of image processing in which several images of the same area, taken at the same altitude, are merged into a single image, improving the correlation between the spectral information of the multispectral images so that no image information is lost. This thesis describes the design and implementation of a multispectral imaging system, the preprocessing of multispectral images, multispectral image fusion, and principal component analysis. Finally, an evaluation of the whole system is presented.
APA, Harvard, Vancouver, ISO, and other styles
32

Tôrres, Filipe Emídio. "Avaliação de representações transformadas para compressão de sinais de eletroencefalografia, com base em análise de componentes principais, decomposições wavelet, transformada discreta de cossenos e compressive sensing." reponame:Repositório Institucional da UnB, 2018. http://repositorio.unb.br/handle/10482/32583.

Full text
Abstract:
Master's dissertation—Universidade de Brasília, Faculdade UnB Gama, Graduate Program in Biomedical Engineering, 2018.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
Electroencephalography (EEG) signals can be used for clinical applications such as sleep level analysis, diagnosis and monitoring of epilepsy, monitoring and rehabilitation. This type of signal is also used in the context of the Brain Computer Interface (BCI), and its use is increasing in many applications of this type, such as wheelchair, computer and automobile control. There are, however, problems commonly encountered, for example, in the acquisition of this signal. Often, tens to hundreds of electrodes are needed, and contact failures may occur, requiring periodic changes or renewal of the conductive gel. Other difficulties relate to the storage and transmission of this data in mobile devices with restricted energy consumption. Therefore, there are several signal processing techniques that can reduce the number of sensors required and also save storage and transmission costs. The purpose of this research is to implement and evaluate Compressive Sensing (CS) and four other techniques applied to the compression of EEG signals, in order to compare them in terms of the level of sparsification and the quality of signals reconstructed from the same number of coefficients. The techniques used are CS, Principal Component Analysis (PCA), Independent Component Analysis (ICA), 30 families of wavelets implemented on the basis of decomposition filter banks, and the discrete cosine transform (DCT). CS is the most recently developed of these techniques and presents possible advantages in the acquisition phase in relation to the others, and this work evaluates its viability. Two real-signal databases, a polysomnography study called the Sleep Heart Health Study and a study of children at the Massachusetts Institute of Technology (MIT), both publicly available, are considered for the evaluation. The study is based on transformation, quantization, coding and their inverse processes for signal reconstruction. From the results, comparisons are made between the reconstructed signals using the different representations chosen. For comparison, quantitative measurements of signal-to-noise ratio (SNR), compression factor (CF), a type of residual percentage difference (PRD1), and time measurements are used. It was observed that the algorithms can reconstruct the signals with less than 1/3 of the original coefficients, depending on the technique used. In general, DCT and PCA outperform the others on the metrics used. However, it is worth mentioning that CS allows a lower acquisition cost, possibly requiring simpler hardware. In fact, all the CS-based acquisition could be done with measurements obtained using only sums of the electrode signals, without losses in relation to measurement matrices that also involve multiplications. Assuming, for example, a reconstruction from 50% of the number of signal coefficients in the MIT database, the DCT achieved an SNR of 27.8 dB between the original signal and the reconstruction. PCA reached 24.0 dB and the best wavelets were in the 19 dB range, while CS achieved 8.3 dB and ICA only 1.1 dB. For this same database, with 50% CF, PRD1 was 27.8% for DCT, 24.0% for PCA, 17.2% for the biorthogonal 2.2 wavelet, 8.3% for CS-10 and 1.1% for ICA. Therefore, the study and use of CS is justified by the lower complexity of its acquisition phase in relation to other techniques, in addition to its better results compared with some of them.
In the next step of the research, the intention is to evaluate multichannel compression, to verify the performance of each technique when exploiting the redundancy between channels, as well as tools that can help the performance of CS, such as a priori information and pre-filtering of the signals.
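As an editorial illustration of the transform–threshold–reconstruct pipeline evaluated above, the following sketch compresses a synthetic single-channel signal with the DCT and reports an SNR and a PRD-style error; it assumes NumPy and SciPy and is not the code used in the dissertation.

```python
# Illustrative sketch (not the thesis code): DCT-based compression of one channel,
# keeping a fraction of the largest coefficients, then measuring SNR and PRD.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1024))      # stand-in for one EEG channel

keep = 0.5                                    # keep 50% of the coefficients
c = dct(x, norm='ortho')
k = int(keep * c.size)
small = np.argsort(np.abs(c))[:-k]            # indices of the smallest coefficients
c_kept = c.copy()
c_kept[small] = 0.0                           # discard them (no quantisation here)
x_rec = idct(c_kept, norm='ortho')

err = x - x_rec
snr = 10 * np.log10(np.sum(x**2) / np.sum(err**2))
prd = 100 * np.sqrt(np.sum(err**2) / np.sum(x**2))
print(f"SNR = {snr:.1f} dB, PRD = {prd:.1f} %")
```

A full pipeline along the lines of the abstract would add quantisation and entropy coding after the thresholding step.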
APA, Harvard, Vancouver, ISO, and other styles
33

Memedi, Mevludin. "Mobile systems for monitoring Parkinson's disease." Licentiate thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-20552.

Full text
Abstract:
This thesis presents the development and evaluation of IT-based methods and systems for supporting assessment of symptoms and enabling remote monitoring of Parkinson's disease (PD) patients. PD is a common neurological disorder associated with impaired body movements. Its clinical management regarding treatment outcomes and follow-up of patients is complex. In order to reveal the full extent of a patient's condition, there is a need for repeated and time-stamped assessments related to both the patient's perception of common symptoms and motor function. In this thesis, data from a mobile device test battery, collected during a three-year clinical study, was used for the development and evaluation of methods. The data was gathered from a series of tests, consisting of self-assessments and motor tests (tapping and spiral drawing). These tests were carried out repeatedly in a telemedicine setting during week-long test periods. One objective was to develop a computer method that would process traced spiral drawings and generate a score representing PD-related drawing impairments. The data processing part consisted of using the discrete wavelet transform and principal component analysis. When this computer method was evaluated against human clinical ratings, the results showed that it could perform quantitative assessments of drawing impairment in spirals comparatively well. As a part of this objective, a review of systems and methods for detecting handwriting and drawing impairment using touch screens was performed. The review showed that measures concerning forces, accelerations, and radial displacements were the most important ones in detecting fine motor movement anomalies. Another objective of this thesis work was to design and evaluate an information system for delivering assessment support information to the treating clinical staff for monitoring PD symptoms in their patients. The system consisted of a patient node for data collection based on the mobile device test battery, a service node for data storage and processing, and a web application for data presentation. A system module was designed for compiling the test battery time series into summary scores on a test period level. The web application allowed adequate graphic feedback of the summary scores to the treating clinical staff. The evaluation results for this integrated system indicate that it can be used as a tool for frequent PD symptom assessments in home environments.
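To make the spiral-processing idea concrete, here is a minimal sketch of wavelet-energy features followed by a one-component PCA score; the data, wavelet choice and feature definitions are assumptions for illustration, not the system described above.

```python
# Rough sketch of wavelet features of a traced spiral followed by PCA,
# using hypothetical data; not the thesis' actual implementation.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_features(radius, wavelet="db4", level=4):
    """Per-level energies of the detail coefficients of the radial drift."""
    coeffs = pywt.wavedec(radius - np.mean(radius), wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs[1:]])   # skip the approximation

# hypothetical radial profiles of traced spirals, one per test occasion
rng = np.random.default_rng(1)
profiles = [np.linspace(0, 1, 512) + 0.05 * rng.standard_normal(512)
            for _ in range(20)]
X = np.vstack([wavelet_features(p) for p in profiles])

score = PCA(n_components=1).fit_transform(X)[:, 0]        # one impairment-like score
print(score.round(2))
```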
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Li Ge. "Particle breakage mechanics in milling operation." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28950.

Full text
Abstract:
Milling is a common unit operation in industry for the purpose of intentional size reduction. Considerable amount of energy is consumed during a grinding process and much of the energy is dissipated as heat and sound, which often makes grinding into an energy-intensive and highly inefficient operation. Despite many attempts to interpret particle breakage during a milling process, the grindability of a material in a milling operation remains aloof and the mechanisms of particle breakage are still poorly understood. Hence the optimisation and refinement in the design and operation of milling are in great need of an improved scientific understanding of the complex failure mechanisms. This thesis aims to provide an in-depth understanding of particle breakage associated with stressing events that occur during milling. A hybrid of experimental, theoretical and numerical methods has been adopted to elucidate the particle breakage mechanics. This study covers from single particle damage at micro-scale to bulk comminution during the whole milling process. The mechanical properties of two selected materials, i.e. alumina and zeolite were measured by indentation techniques. The breakage test of zeolite granules subjected to impact loading was carried out and it was found that tangential component velocity plays an increasingly important role in particle breakage with increasing impact velocity. Besides, single particle breakage via in-situ loading was conducted under X-ray microcomputed tomography (μCT) to study the microstructure of selected particles, visualize the progressive failure process and evaluate the progressive failure using the technique of digital image correlation (DIC). A new particle breakage model was proposed deploying a mechanical approach assuming that the subsurface lateral crack accounts for chipping mechanism. Considering the limitation of existing models in predicting breakage under oblique impact and the significance of tangential component velocity identified from experiment, the effect of impact angle is considered in the developed breakage model, which enables the contribution of the normal and tangential velocity component to be rationalized. The assessment of breakage models including chipping and fragmentation under oblique impact suggests that the equivalent normal velocity proposed in the new model is able to give close prediction with experimental results sourced from the public literature. Milling experiments were performed using the UPZ100 impact pin mill (courtesy by Hosokawa Micron Ltd. UK) to measure the comminution characteristics of the test solids. Several parameters were used to evaluate the milling performance including product size distribution, relative size span, grinding energy and size reduction ratio etc. The collective data from impact pin mill provides the basis for the validation of numerical simulation results. The Discrete Element Method (DEM) is first used to model single particle breakage subject to normal impact loading using a bonded contact model. A validation of the bonded contact model was conducted where the disparity with the experimental results is discussed. A parametric study of the most significant parameters e.g. bond Young’s modulus, the mean tensile bond strength, the coefficient of variation of the strength and particle & particle restitution coefficient in the DEM contact model was carried out to gain a further understanding of the effect of input parameters on the single particle breakage behavior. 
The upscaling from laboratory scale (single particle impact test) to industrial process scale (impact pin mill) is achieved using Population Balance Modelling (PBM). Two important functions in PBM, the selection function and breakage function are discussed based on the single particle impact from both experimental and numerical methods. An example of predicting product size reduction via PBM was given and compared to the milling results from impact pin mill. Finally, the DEM simulation of particle dynamics with emphasis on the impact energy distribution was presented and discussed, which sheds further insights into the coupling of PBM and DEM.
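The population balance upscaling mentioned above can be illustrated with a toy discretised step in which a selection function removes particles from each size class and a breakage matrix redistributes the fragments; all rates and fractions below are invented, not calibrated to the thesis.

```python
# Toy discretised population balance step: S is a selection rate per size class,
# B[i, j] the fraction of fragments from a breaking class-j particle landing in class i.
import numpy as np

n = np.array([0., 0., 0., 100.])        # particle counts in 4 size classes (fine -> coarse)
S = np.array([0.0, 0.05, 0.10, 0.20])   # selection rate per class [1/s]
B = np.array([[0., 1.0, 0.2, 0.1],
              [0., 0. , 0.8, 0.3],
              [0., 0. , 0. , 0.6],
              [0., 0. , 0. , 0. ]])

dt = 1.0
for _ in range(60):                     # one minute of milling, 1 s steps
    broken = S * n * dt                 # material leaving each class by breakage
    n = n - broken + B @ broken         # redistribute fragments to finer classes
print(n.round(1))
```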
APA, Harvard, Vancouver, ISO, and other styles
35

Hitz, Adrien. "Modelling of extremes." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:ad32f298-b140-4aae-b50e-931259714085.

Full text
Abstract:
This work focuses on statistical methods to understand how frequently rare events occur and what the magnitude of extreme values such as large losses is. It lies in a field called extreme value analysis whose scope is to provide support for scientific decision making when extreme observations are of particular importance such as in environmental applications, insurance and finance. In the univariate case, I propose new techniques to model tails of discrete distributions and illustrate them in an application on word frequency and multiple birth data. Suitably rescaled, the limiting tails of some discrete distributions are shown to converge to a discrete generalized Pareto distribution and generalized Zipf distribution respectively. In the multivariate high-dimensional case, I suggest modeling tail dependence between random variables by a graph such that its nodes correspond to the variables and shocks propagate through the edges. Relying on the ideas of graphical models, I prove that if the variables satisfy a new notion called asymptotic conditional independence, then the density of the joint distribution can be simplified and expressed in terms of lower dimensional functions. This generalizes the Hammersley- Clifford theorem and enables us to infer tail distributions from observations in reduced dimension. As an illustration, extreme river flows are modeled by a tree graphical model whose structure appears to recover almost exactly the actual river network. A fundamental concept when studying limiting tail distributions is regular variation. I propose a new notion in the multivariate case called one-component regular variation, of which Karamata's and the representation theorem, two important results in the univariate case, are generalizations. Eventually, I turn my attention to website visit data and fit a censored copula Gaussian graphical model allowing the visualization of users' behavior by a graph.
APA, Harvard, Vancouver, ISO, and other styles
36

Zarjam, Pega. "EEG Data acquisition and automatic seizure detection using wavelet transforms in the newborn EEG." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15795/.

Full text
Abstract:
This thesis deals with the problem of newborn seizure detection from electroencephalogram (EEG) signals. The ultimate goal is to design an automated seizure detection system to assist the medical personnel in timely seizure detection. Seizure detection is vital as neurological diseases or dysfunctions in newborn infants are often first manifested by seizure, and prolonged seizures can result in impaired neuro-development or even fatality. The EEG has proved superior to clinical examination of newborns in early detection and prognostication of brain dysfunctions. However, long-term newborn EEG signal acquisition is considerably more difficult than that of adults and children. This is because the number of electrodes attached to the skin is limited by the size of the head, the newborns' EEGs vary from day to day, and the newborns do not tolerate the recording situation well. Also, the movement of the newborn can create artifacts in the recording and as a result strongly affect the electrical seizure recognition. Most of the existing methods for neonates are either time or frequency based and, therefore, do not consider the non-stationary nature of the EEG signal. Thus, notwithstanding the plethora of existing methods, this thesis applies the discrete wavelet transform (DWT) to account for the non-stationarity of the EEG signals. First, two methods for seizure detection in neonates are proposed. The detection schemes are based on observing the changing behaviour of a number of statistical quantities of the wavelet coefficients (WC) of the EEG signal at different scales. In the first method, the variance and mean of the WC are considered as a feature set to classify the EEG data into seizure and non-seizure. The test results give an average seizure detection rate (SDR) of 97.4%. In the second method, the number of zero-crossings and the average distance between adjacent extrema of the WC of certain scales are extracted to form a feature set. The test obtains an average SDR of 95.2%. The proposed feature sets are both simple to implement and have high detection rates and low false alarm rates. Then, in order to reduce the complexity of the proposed schemes, two optimising methods are used to reduce the number of selected features. First, the mutual information feature selection (MIFS) algorithm is applied to select the optimum feature subset. The results show that an optimal subset of 9 features provides an SDR of 94%. Compared to that of the full feature set, it is clear that the optimal feature set can significantly reduce the system complexity. The drawback of the MIFS algorithm is that it ignores the interaction between features. To overcome this drawback, an alternative algorithm, the mutual information evaluation function (MIEF), is then used. The MIEF evaluates a set of candidate features extracted from the WC to select an informative feature subset. This function is based on the measurement of the information gain and takes into consideration the interaction between features. The performance of the proposed features is evaluated and compared to that of the features obtained using the MIFS algorithm. The MIEF algorithm selected the optimal 10 features, resulting in an average SDR of 96.3%. It is also shown that an average SDR of 93.5% can be obtained with only 4 features when the MIEF algorithm is used.
In comparison with results of the first two methods, it is shown that the optimal feature subsets improve the system performance and significantly reduce the system complexity for implementation purpose.
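A rough sketch of the first feature set (per-scale mean and variance of the wavelet coefficients) on a simulated EEG segment is given below; it assumes the PyWavelets package and stands in for, rather than reproduces, the detector described in the thesis.

```python
# Sketch of per-scale DWT statistics for a simulated EEG segment; the rhythmic
# 3 Hz component is a crude stand-in for seizure-like activity.
import numpy as np
import pywt

def dwt_stats(segment, wavelet="db4", level=5):
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(c), np.var(c)]       # mean and variance per scale
    return np.array(feats)

fs = 256
rng = np.random.default_rng(2)
background = rng.standard_normal(10 * fs)                     # non-seizure-like noise
t = np.arange(10 * fs) / fs
seizure_like = background + 3 * np.sin(2 * np.pi * 3 * t)     # rhythmic 3 Hz activity

for name, seg in [("background", background), ("seizure-like", seizure_like)]:
    f = dwt_stats(seg)
    print(name, "variance features:", f[1::2].round(2))
```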
APA, Harvard, Vancouver, ISO, and other styles
37

McCool, Christopher Steven. "Hybrid 2D and 3D face verification." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16436/.

Full text
Abstract:
Face verification is a challenging pattern recognition problem. The face is a biometric that, we as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared as the distances describe at best the covariance of the data, or the second order statistics (for instance Mahalanobis based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable and so this research examines two methods for overcoming this data limitation: 1. the use of holistic difference vectors of the face, and 2. by dividing the 3D face into Free-Parts. The permutations of the holistic difference vectors is formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from both these techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques classifier score fusion is then examined. This thesis also examines methods for performing classifier fusion score fusion. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data for instance the 2D face data or by capturing the face data with different sensors (multimodal fusion) for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts) and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms a consistent framework for fusion was developed. 
The consistent fusion framework, developed from the multi-algorithm and multimodal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
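The following few lines illustrate the weighted-sum flavour of classifier score fusion on invented 2D and 3D match scores; the normalisation, weights and threshold are placeholders, not the fusion rules derived in the thesis.

```python
# Minimal weighted-sum score fusion between a hypothetical 2D and 3D verifier.
import numpy as np

scores_2d = np.array([0.62, 0.15, 0.80, 0.33])   # match scores from the 2D system
scores_3d = np.array([0.70, 0.22, 0.55, 0.41])   # match scores from the 3D system

def zscore(s):
    return (s - s.mean()) / s.std()               # simple score normalisation

w_2d, w_3d = 0.4, 0.6                             # fusion weights (tuned on development data)
fused = w_2d * zscore(scores_2d) + w_3d * zscore(scores_3d)

threshold = 0.0
decisions = fused > threshold                     # accept / reject per comparison
print(fused.round(2), decisions)
```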
APA, Harvard, Vancouver, ISO, and other styles
38

Chrysostomou, Charalambos. "Characterisation and classification of protein sequences by using enhanced amino acid indices and signal processing-based methods." Thesis, De Montfort University, 2013. http://hdl.handle.net/2086/9895.

Full text
Abstract:
Protein sequencing has produced overwhelming amount of protein sequences, especially in the last decade. Nevertheless, the majority of the proteins' functional and structural classes are still unknown, and experimental methods currently used to determine these properties are very expensive, laborious and time consuming. Therefore, automated computational methods are urgently required to accurately and reliably predict functional and structural classes of the proteins. Several bioinformatics methods have been developed to determine such properties of the proteins directly from their sequence information. Such methods that involve signal processing methods have recently become popular in the bioinformatics area and been investigated for the analysis of DNA and protein sequences and shown to be useful and generally help better characterise the sequences. However, there are various technical issues that need to be addressed in order to overcome problems associated with the signal processing methods for the analysis of the proteins sequences. Amino acid indices that are used to transform the protein sequences into signals have various applications and can represent diverse features of the protein sequences and amino acids. As the majority of indices have similar features, this project proposes a new set of computationally derived indices that better represent the original group of indices. A study is also carried out that resulted in finding a unique and universal set of best discriminating amino acid indices for the characterisation of allergenic proteins. This analysis extracts features directly from the protein sequences by using Discrete Fourier Transform (DFT) to build a classification model based on Support Vector Machines (SVM) for the allergenic proteins. The proposed predictive model yields a higher and more reliable accuracy than those of the existing methods. A new method is proposed for performing a multiple sequence alignment. For this method, DFT-based method is used to construct a new distance matrix in combination with multiple amino acid indices that were used to encode protein sequences into numerical sequences. Additionally, a new type of substitution matrix is proposed where the physicochemical similarities between any given amino acids is calculated. These similarities were calculated based on the 25 amino acids indices selected, where each one represents a unique biological protein feature. The proposed multiple sequence alignment method yields a better and more reliable alignment than the existing methods. In order to evaluate complex information that is generated as a result of DFT, Complex Informational Spectrum Analysis (CISA) is developed and presented. As the results show, when protein classes present similarities or differences according to the Common Frequency Peak (CFP) in specific amino acid indices, then it is probable that these classes are related to the protein feature that the specific amino acid represents. By using only the absolute spectrum in the analysis of protein sequences using the informational spectrum analysis is proven to be insufficient, as biologically related features can appear individually either in the real or the imaginary spectrum. This is successfully demonstrated over the analysis of influenza neuraminidase protein sequences. 
Upon identification of a new protein, it is important to single out amino acid responsible for the structural and functional classification of the protein, as well as the amino acids contributing to the protein's specific biological characterisation. In this work, a novel approach is presented to identify and quantify the relationship between individual amino acids and the protein. This is successfully demonstrated over the analysis of influenza neuraminidase protein sequences. Characterisation and identification problem of the Influenza A virus protein sequences is tackled through a Subgroup Discovery (SD) algorithm, which can provide ancillary knowledge to the experts. The main objective of the case study was to derive interpretable knowledge for the influenza A virus problem and to consequently better describe the relationships between subtypes of this virus. Finally, by using DFT-based sequence-driven features a Support Vector Machine (SVM)-based classification model was built and tested, that yields higher predictive accuracy than that of SD. The methods developed and presented in this study yield promising results and can be easily applied to proteomic fields.
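As a small illustration of the DFT-based encoding described above, the sketch below maps a protein sequence to a numerical signal using a single hydrophobicity-style amino acid index and inspects its power spectrum; the index values and sequence are illustrative only, and the thesis combines many indices with an SVM on top.

```python
# Encode a protein sequence with one amino acid index and take its DFT power spectrum.
import numpy as np

index = {a: v for a, v in zip("ARNDCQEGHILKMFPSTWYV",
        [1.8, -4.5, -3.5, -3.5, 2.5, -3.5, -3.5, -0.4, -3.2, 4.5,
         3.8, -3.9, 1.9, 2.8, -1.6, -0.8, -0.7, -0.9, -1.3, 4.2])}

def power_spectrum(sequence):
    x = np.array([index[a] for a in sequence], dtype=float)
    x -= x.mean()                          # remove the DC component
    X = np.fft.rfft(x)
    return np.abs(X) ** 2

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"  # example sequence
ps = power_spectrum(seq)
print("dominant frequency bin:", int(np.argmax(ps[1:]) + 1))
```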
APA, Harvard, Vancouver, ISO, and other styles
39

Beye, Mamadou Lamine. "Etude et contribution à l’optimisation de la commande des HEMTs GaN." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI102.

Full text
Abstract:
This thesis is set in a sustainable development context in which the energy challenge consists in designing more widely distributed power converters with good power density and high efficiency. New power semiconductor devices, namely wide band gap semiconductors (GaN, SiC), are used in designing these converters. The high-frequency control of these converters makes the system more sensitive to parasitic elements, which disrupt the switching behavior of the transistors and generate additional losses. In this context, this work was carried out in a cotutelle partnership between the Ampère Laboratory in Villeurbanne and the LN2 laboratory at the University of Sherbrooke, the aim being to contribute to optimizing the switching conditions of GaN HEMTs. The first work axis consists in managing the voltage and current switching speeds through gate control strategies in order to improve the conducted EMI. Most of the proposed control circuits are developed first in open loop and then in closed loop in order to compensate the effects of non-linearities (with respect to temperature, load current and operating voltage). The control systems can be implemented first with available discrete components, and then through monolithic GaN integration, which is considered in order to bring more speed and efficiency; monolithic integration would also solve the problem of parasitic inductances. To facilitate the design of the integrated circuits and control systems, a behavioral model of the GaN HEMT serves as a modeling tool. The second axis of the work consists in experimentally validating well-adapted control systems for the gate of the power transistor in order to master its transient behavior, in particular to allow a satisfactory management of losses during dead time in a half-bridge converter. At the end of this work, the control systems developed in open loop made it possible to slow the switching speeds by at least 30%, while causing an increase in switching losses of up to 50% in some cases. Due to the fast switching speed of GaN HEMTs and the limitations of discrete components on the market, the reduction of switching speeds obtained with the closed loop (less than 20%) is less attractive than that of the open loop. Using a monolithic circuit can be an alternative to increase the closed-loop reduction of switching speeds; SPICE simulations of the monolithic circuit are the basis of this hypothesis. Concerning the second axis, the application of multilevel gate voltage control to the transistors of the half bridge made it possible to reduce the reverse conduction losses and the losses due to cross-talk phenomena by at least 30%.
APA, Harvard, Vancouver, ISO, and other styles
40

Bakewell, Katie. "Self-Assembly of DNA Graphs and Postman Tours." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/857.

Full text
Abstract:
DNA graph structures can self-assemble from branched junction molecules to yield solutions to computational problems. Self-assembly of graphs has previously been shown to give polynomial time solutions to hard computational problems such as 3-SAT and k-colorability problems. Jonoska et al. have proposed studying self-assembly of graphs topologically, considering the boundary components of their thickened graphs, which allows for reading the solutions to computational problems through reporter strands. We discuss weighting algorithms and consider applications of self-assembly of graphs and the boundary components of their thickened graphs to problems involving minimal weight Eulerian walks such as the Chinese Postman Problem and the Windy Postman Problem.
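For reference, a conventional (non-DNA) solution of the Chinese Postman Problem on a small weighted graph can be sketched as follows, assuming the networkx package; the graph and weights are arbitrary and the code is unrelated to the self-assembly construction itself.

```python
# Classical Chinese Postman sketch: pair up odd-degree vertices by minimum-weight
# matching on shortest-path distances, duplicate those paths, then take an Eulerian circuit.
import itertools
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2), ("b", "c", 1), ("c", "d", 2),
                           ("d", "a", 1), ("a", "c", 3)])

odd = [v for v, deg in G.degree() if deg % 2 == 1]
K = nx.Graph()                                   # complete graph on odd-degree vertices
for u, v in itertools.combinations(odd, 2):
    K.add_edge(u, v, weight=nx.shortest_path_length(G, u, v, weight="weight"))
matching = nx.min_weight_matching(K)

M = nx.MultiGraph(G)                             # duplicate edges along matched shortest paths
for u, v in matching:
    path = nx.shortest_path(G, u, v, weight="weight")
    for s, t in zip(path, path[1:]):
        M.add_edge(s, t, weight=G[s][t]["weight"])

tour = list(nx.eulerian_circuit(M, source="a"))
cost = sum(d["weight"] for _, _, d in M.edges(data=True))
print(tour, cost)
```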
APA, Harvard, Vancouver, ISO, and other styles
41

Arad, Cosmin Ionel. "Programming Model and Protocols for Reconfigurable Distributed Systems." Doctoral thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122311.

Full text
Abstract:
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for largescale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics.


APA, Harvard, Vancouver, ISO, and other styles
42

Arad, Cosmin. "Programming Model and Protocols for Reconfigurable Distributed Systems." Doctoral thesis, SICS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-24202.

Full text
Abstract:
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics.
APA, Harvard, Vancouver, ISO, and other styles
43

Kämmerer, Lutz. "High Dimensional Fast Fourier Transform Based on Rank-1 Lattice Sampling." Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-157673.

Full text
Abstract:
We consider multivariate trigonometric polynomials with frequencies supported on a fixed but arbitrary frequency index set I, which is a finite set of integer vectors of length d. Naturally, one is interested in spatial discretizations in the d-dimensional torus such that (i) the sampling values of the trigonometric polynomial at the nodes of this spatial discretization uniquely determine the trigonometric polynomial, (ii) the corresponding discrete Fourier transform is fast realizable, and (iii) the corresponding fast Fourier transform is stable. An algorithm that computes the discrete Fourier transform with a computational complexity that is bounded from above by terms that are linear in the maximum of the number of input and output data, up to some logarithmic factors, is called a fast Fourier transform. We call the fast Fourier transform stable if the Fourier matrix of the discrete Fourier transform has a condition number near one and the fast algorithm does not corrupt this theoretical stability. We suggest using rank-1 lattices and a generalization thereof as spatial discretizations in order to sample multivariate trigonometric polynomials, and we develop construction methods in order to determine reconstructing sampling sets, i.e., sets of sampling nodes that allow for the unique, fast, and stable reconstruction of trigonometric polynomials. The methods for determining reconstructing rank-1 lattices are component-by-component constructions, similar to the seminal methods developed in the field of numerical integration. In this thesis we identify a component-by-component construction of reconstructing rank-1 lattices that allows for an estimate of the number of sampling nodes M, $|I| \le M \le \max\left(\frac{2}{3}|I|^2,\ \max\{3\|\mathbf{k}\|_\infty \colon \mathbf{k}\in I\}\right)$, that is sufficient in order to uniquely reconstruct each multivariate trigonometric polynomial with frequencies supported on the frequency index set I. We observe that the bounds on the number M depend only on the number of frequency indices contained in I and the expansion of I, but not on the spatial dimension d. Hence, rank-1 lattices are suitable spatial discretizations in arbitrarily high dimensional problems. Furthermore, we consider a generalization of the concept of rank-1 lattices, which we call generated sets. We use a quite different approach in order to determine suitable reconstructing generated sets; the corresponding construction method is based on a continuous optimization method. Besides the theoretical considerations, we focus on the practicability of the presented algorithms and illustrate the theoretical findings by means of several examples. In addition, we investigate the approximation properties of the considered sampling schemes. We apply the results to the most important structures of frequency indices in higher dimensions, so-called hyperbolic crosses, and demonstrate the approximation properties by means of several examples that include the solution of Poisson's equation as one representative of partial differential equations.
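The reconstruction property of rank-1 lattices can be checked numerically with a short sketch: if the map k ↦ k·z mod M is injective on the frequency set I, a single length-M FFT of the lattice samples returns all Fourier coefficients. The index set, generating vector and lattice size below are toy choices, not taken from the thesis.

```python
# Toy check of rank-1 lattice sampling and reconstruction via one 1D FFT.
import numpy as np

d, M = 3, 31
z = np.array([1, 5, 12])                                    # generating vector
I = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 1], [-1, 1, 0]]) # frequency index set
c = np.array([1.0, 0.5 - 0.25j, -0.3j, 0.2])                # true Fourier coefficients

addr = (I @ z) % M                                          # aliased addresses k.z mod M
assert len(set(addr)) == len(I), "z does not separate the frequencies of I"

j = np.arange(M)
nodes = (j[:, None] * z[None, :] / M) % 1.0                 # lattice nodes in [0,1)^d
samples = np.array([np.sum(c * np.exp(2j * np.pi * (I @ x))) for x in nodes])

chat = np.fft.fft(samples) / M                              # one FFT of length M
print(np.allclose(chat[addr], c))                           # True: coefficients recovered
```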
APA, Harvard, Vancouver, ISO, and other styles
44

Kämmerer, Lutz. "High Dimensional Fast Fourier Transform Based on Rank-1 Lattice Sampling." Doctoral thesis, Universitätsverlag der Technischen Universität Chemnitz, 2014. https://monarch.qucosa.de/id/qucosa%3A20167.

Full text
Abstract:
We consider multivariate trigonometric polynomials with frequencies supported on a fixed but arbitrary frequency index set I, which is a finite set of integer vectors of length d. Naturally, one is interested in spatial discretizations in the d-dimensional torus such that (i) the sampling values of the trigonometric polynomial at the nodes of this spatial discretization uniquely determine the trigonometric polynomial, (ii) the corresponding discrete Fourier transform is fast realizable, and (iii) the corresponding fast Fourier transform is stable. An algorithm that computes the discrete Fourier transform with a computational complexity that is bounded from above by terms that are linear in the maximum of the number of input and output data, up to some logarithmic factors, is called a fast Fourier transform. We call the fast Fourier transform stable if the Fourier matrix of the discrete Fourier transform has a condition number near one and the fast algorithm does not corrupt this theoretical stability. We suggest using rank-1 lattices and a generalization thereof as spatial discretizations in order to sample multivariate trigonometric polynomials, and we develop construction methods in order to determine reconstructing sampling sets, i.e., sets of sampling nodes that allow for the unique, fast, and stable reconstruction of trigonometric polynomials. The methods for determining reconstructing rank-1 lattices are component-by-component constructions, similar to the seminal methods developed in the field of numerical integration. In this thesis we identify a component-by-component construction of reconstructing rank-1 lattices that allows for an estimate of the number of sampling nodes M, $|I| \le M \le \max\left(\frac{2}{3}|I|^2,\ \max\{3\|\mathbf{k}\|_\infty \colon \mathbf{k}\in I\}\right)$, that is sufficient in order to uniquely reconstruct each multivariate trigonometric polynomial with frequencies supported on the frequency index set I. We observe that the bounds on the number M depend only on the number of frequency indices contained in I and the expansion of I, but not on the spatial dimension d. Hence, rank-1 lattices are suitable spatial discretizations in arbitrarily high dimensional problems. Furthermore, we consider a generalization of the concept of rank-1 lattices, which we call generated sets. We use a quite different approach in order to determine suitable reconstructing generated sets; the corresponding construction method is based on a continuous optimization method. Besides the theoretical considerations, we focus on the practicability of the presented algorithms and illustrate the theoretical findings by means of several examples. In addition, we investigate the approximation properties of the considered sampling schemes. We apply the results to the most important structures of frequency indices in higher dimensions, so-called hyperbolic crosses, and demonstrate the approximation properties by means of several examples that include the solution of Poisson's equation as one representative of partial differential equations.
APA, Harvard, Vancouver, ISO, and other styles
45

HUANG, JUN-ZHONG, and 黃俊中. "High speed optical receiver using discrete components." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/88172126067732066317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Che, Xuan. "Spatial graphical models with discrete and continuous components." Thesis, 2012. http://hdl.handle.net/1957/33644.

Full text
Abstract:
Graphical models use Markov properties to establish associations among dependent variables. To estimate spatial correlation and other parameters in graphical models, the conditional independences and joint probability distribution of the graph need to be specified. We can rely on Gaussian multivariate models to derive the joint distribution when all the nodes of the graph are assumed to be normally distributed. However, when some of the nodes are discrete, the Gaussian model no longer affords an appropriate joint distribution function. We develop methods specifying the joint distribution of a chain graph with both discrete and continuous components, with spatial dependencies assumed among all variables on the graph. We propose a new group of chain graphs known as the generalized tree networks. Constructing the chain graph as a generalized tree network, we partition its joint distributions according to the maximal cliques. Copula models help us to model correlation among discrete variables in the cliques. We examine the method by analyzing datasets with simulated Gaussian and Bernoulli Markov random fields, as well as with a real dataset involving household income and election results. Estimates from the graphical models are compared with those from spatial random effects models and multivariate regression models.
APA, Harvard, Vancouver, ISO, and other styles
47

Chou, Cheng-hsien, and 周政憲. "Integer modulation index CPM without discrete power component." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/81377811160634969336.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
100
Continuous phase modulation (CPM) signals have the characteristics of constant envelope, bandwidth efficiency, power efficiency and low side-lobes. Nevertheless, CPM with integer modulation indices usually does not have these properties. In recent years, Huang proposed decomposing CPM signals with integer modulation indices into a set of pulse amplitude modulation (PAM) waveforms. The PAM waveforms can be classified into two parts: data-dependent PAM pulses and data-independent ones. The data-independent PAM pulse corresponds to the discrete power spectrum, which is a waste of power. Therefore, most CPM schemes with integer modulation indices are not useful and are not discussed by researchers. In fact, only binary CPFSK with unity modulation index has been employed, in the pager system. In this thesis, we study a new modulator obtained by removing the data-independent PAM waveform, and its Euclidean distance and spectrum are calculated. Since this new modulation no longer has the constant envelope property, a limiter is constructed to recover the constant envelope. It is found that the novel modulator has a better distance, yet with bandwidth expansion, compared to the original CPM signals.
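The role of the data-independent component can be illustrated numerically: a binary CPFSK signal with an integer modulation index shows pronounced discrete spectral lines, which are absent for a non-integer index such as h = 0.5. The sketch below, with arbitrary parameters, is only a qualitative check, not the modulator proposed in the thesis.

```python
# Qualitative check: integer-h binary CPFSK exhibits strong spectral lines.
import numpy as np
from scipy.signal import welch

def cpfsk(bits, h, sps=16):
    symbols = 2 * bits - 1                          # map {0,1} to ±1
    freq = np.repeat(symbols, sps) * h / (2 * sps)  # instantaneous frequency (cycles/sample)
    phase = 2 * np.pi * np.cumsum(freq)
    return np.exp(1j * phase)                       # complex baseband, constant envelope

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 4000)
for h in (1.0, 0.5):
    s = cpfsk(bits, h)
    f, p = welch(s, nperseg=2048, return_onesided=False)
    print(f"h = {h}: peak-to-median PSD ratio = {np.max(p) / np.median(p):.1f}")
```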
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Pao-Te, and 林寶德. "Thermal Conductivity Vacuum Gauge Constructed by Using Discrete Components." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/43913127861945660491.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology
Institute of Electronic Engineering
100
This study explored an intelligent vacuum gauge with the properties of low cost, high precision and low power consumption. Because the heating element and temperature sensor of the vacuum gauge are made from SMD (surface-mounted device) twin transistors, the gauge offers the advantages of low cost and high environmental tolerance. A dual-side environmental temperature sensing transistor is used to detect the ambient temperature and compensate for temperature variation, which increases the precision of the vacuum gauge. A 16-bit single-chip microprocessor serves as the core for control, calculation, power management, and conditioning and optimization of the analog sensing signal. Thus, a vacuum gauge with high precision and low power consumption can be fabricated.
APA, Harvard, Vancouver, ISO, and other styles
49

Hsu, Tsung-Yao, and 許宗堯. "Development of Object-Oriented Simulation Components for Discrete Event Simulation-A Study on Supermarket." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/29188167839957536928.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Management, Information Management Program
87
Adopting an object-oriented approach to programming simulation models has many advantages. One of them is that it enhances the reuse of model code and reduces coding time. The object-oriented programming languages most frequently used to develop simulation models are C++, Java, Ada, Objective-C, Modula-3, Smalltalk, Object Pascal, etc. The traditional approach to programming simulation models is usually based on simulation libraries, and that kind of approach is inferior to component-based programming. Delphi provides a good environment for developing components. Visual simulation components developed in Delphi can save a great deal of programming time and let us construct simulation models easily and rapidly. Our study develops a set of simulation components, including essential simulation components and supermarket components. By using these components, users can quickly build a simple supermarket simulation model. Users can also modify or extend the functions of the components to satisfy their needs, and combining them with other Delphi components makes the functions of the simulation application more complete. We also demonstrate a complete simulation application which provides some practical functions.
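As an illustration of the event-driven logic such components encapsulate, here is a minimal discrete-event sketch of a single supermarket checkout written in plain Python (the thesis components themselves are written in Delphi); the arrival and service rates are invented.

```python
# Minimal discrete-event sketch: one checkout, Poisson arrivals, exponential service.
import heapq, random

random.seed(4)
ARRIVAL_RATE, SERVICE_RATE, HORIZON = 1 / 40.0, 1 / 30.0, 8 * 3600   # per-second rates, 8 h day

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
queue, busy_until, served, waits = [], 0.0, 0, []

while events:
    t, kind = heapq.heappop(events)
    if t > HORIZON:
        break
    if kind == "arrival":
        queue.append(t)                                      # customer joins the queue
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy_until <= t:                                  # checkout idle: start service now
            heapq.heappush(events, (t, "start"))
    elif kind == "start" and queue:
        arrived = queue.pop(0)
        waits.append(t - arrived)
        busy_until = t + random.expovariate(SERVICE_RATE)
        served += 1
        heapq.heappush(events, (busy_until, "start"))        # serve the next customer after this one

print(f"served {served} customers, mean wait {sum(waits)/len(waits):.1f} s")
```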
APA, Harvard, Vancouver, ISO, and other styles
50

Almeida, Manuel José da Silva. "Modelos de simulação de processos por junção de componentes." Master's thesis, 2015. http://hdl.handle.net/1822/40303.

Full text
Abstract:
Integrated Master's dissertation in Engineering and Management of Information Systems
Organizations, faced with a complex and competitive environment on a global scale, need to adapt quickly to the changes around them. Business Process Management (BPM) can be the answer, as it helps organizations respond promptly and appropriately to the pressures they are subjected to. However, business processes are complex systems that involve activities, people and technology with great interdependency, complexity and variability, which makes it difficult to predict the performance and behaviour of these systems. Moreover, changes mean risks, owing to the impact they can have on the process and on the organization's components, which is why so many reengineering efforts or process redesigns end up failing when put into practice. The simulation of business processes, framed in a BPM approach, contributes to the assessment of future scenarios and new options without incurring the costs and risks of their implementation. Process simulation helps to predict the potential impacts of changes to current business processes and to compare implementation alternatives. Accordingly, the simulation results depend on the quality of the process models, the accuracy of the input data and the components used. In this project we focus our attention on the development of simulation models. When the components are easy to use and pre-tested, simulation has greater potential to predict the impact of changes on current operations and to provide guidance on the best way forward. Imprecise components can distort the simulation results. This work aims to build a repository of previously developed and tested simulation components, enabling their reuse in an agile and reliable way.
APA, Harvard, Vancouver, ISO, and other styles