To see the other types of publications on this topic, follow the link: Sequential identification.

Dissertations / Theses on the topic 'Sequential identification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 29 dissertations / theses for your research on the topic 'Sequential identification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Kuznetsova, Yulia. "Analysis and Evaluation of Sequential Redundancy Identification Algorithms." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-51105.

Full text
Abstract:
The goal of this thesis is to analyse different methods for identifying redundant faults in synchronous sequential circuits, as part of reducing the complexity of ATPG algorithms and minimizing test sets. It starts with an overview of the various faults that occur in digital circuits of different types and moves on to the common testing methods used for fault detection. Since it is not possible to perform an exhaustive search to detect every possible fault in a given circuit, due to time and power consumption issues, there is a clear need to minimize the set of tests that detects the existing faults. This is why discovering untestable and redundant faults is so important in testing. An overview of both classical and novel methods for detecting untestable and redundant faults is presented, followed by an analysis of the results and the benefits each of these methods promises.
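As a rough, hedged illustration of what an "untestable (redundant) fault" means in practice, the Python sketch below exhaustively simulates a tiny hypothetical combinational netlist with and without an injected stuck-at fault; a fault that no input vector can expose is reported as untestable. This is only a toy for the combinational case, not the sequential algorithms analysed in the thesis, and all net names are invented.

```python
from itertools import product

def evaluate(inputs, fault=None):
    """Evaluate a tiny combinational netlist for one input vector.
    `fault` is an optional (net_name, stuck_value) pair applied right after
    the net is computed, so the faulty value propagates downstream."""
    a, b, c = inputs
    nets = {}

    def drive(name, value):
        if fault is not None and fault[0] == name:
            value = fault[1]          # stuck-at fault overrides the good value
        nets[name] = value
        return value

    n1 = drive('n1', a & b)
    n2 = drive('n2', a & (1 - b))     # n2 intentionally does not reach the output
    n3 = drive('n3', n1 | c)
    y  = drive('y',  n3 | n1)
    return y

def is_untestable(fault):
    """A stuck-at fault is untestable if no input vector distinguishes the
    faulty circuit from the fault-free one at the primary output."""
    return all(evaluate(v) == evaluate(v, fault) for v in product((0, 1), repeat=3))

if __name__ == "__main__":
    for net in ('n1', 'n2', 'n3', 'y'):
        for stuck in (0, 1):
            flag = "untestable (redundant)" if is_untestable((net, stuck)) else "testable"
            print(f"{net} stuck-at-{stuck}: {flag}")
```

In this toy, only the faults on the dead-end net n2 come out as untestable, which is exactly the kind of fault an ATPG flow would like to discard before test generation.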
2

Flowe, Heather D. "The effect of lineup member similarity on recognition accuracy in simultaneous and sequential lineups." Diss., Connected to a 24 p. preview or request complete full text in PDF format. Access restricted to UC IP addresses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3189995.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2005.
Title from first page of PDF file (viewed March 1, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 113-116).
3

Topp, Lisa Dawn. "An evaluation of eyewitness decision making strategies for simultaneous and sequential lineups." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
4

Annis, Jeffrey Scott. "A Model of Positive Sequential Dependencies in Judgments of Frequency." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4626.

Full text
Abstract:
Positive sequential dependencies occur when the response on the current trial n is positively correlated with the response on trial n-1. This was recently observed in a Judgment of Frequency (JOF) task (Malmberg & Annis, 2011). A model of positive sequential dependencies was developed in the REM framework (Shiffrin & Steyvers, 1997) by assuming that features representing the current test item in a retrieval cue carry over from the previous retrieval cue. To assess the model, we sought a set of data that allows us to distinguish between frequency similarity and item similarity. Therefore, we chose to use a JOF task in which we manipulated the item similarity of the stimuli by presenting either landscape photos (high similarity) or photos of everyday objects such as shoes, cars, etc. (low similarity). Similarity was modeled by assuming either that the item representations share a proportion of features or that the exemplars from different stimulus classes vary in distinctiveness or diagnosticity. The model fits indicated that the best way to model similarity was to assume that items share a proportion of features.
5

YU, CHENGGANG. "A SUB-GROUPING METHODOLOGY AND NON-PARAMETRIC SEQUENTIAL RATIO TEST FOR SIGNAL VALIDATION." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1022258703.

Full text
6

Batra, Sushil; Baker, Erich J.; Lee, Myeongwoo. "Identification of phenotypes in Caenorhabditis elegans on the basis of sequence similarity." Waco, Tex. : Baylor University, 2009. http://hdl.handle.net/2104/5325.

Full text
7

Baghaee, Sajjad. "Identification And Localization On A Wireless Magnetic Sensor Network." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614447/index.pdf.

Full text
Abstract:
This study focused on using magnetic sensors for the localization and identification of targets with a wireless sensor network (WSN). A wireless sensor network with MICAz motes was set up utilizing a centralized tree-based system. The MTS310, which is equipped with a 2-axis magnetic sensor, was used as the sensor board on the MICAz motes. The use of magnetic sensors in wireless sensor networks is a topic that has gained limited attention in comparison to that of other sensors. Research has generally focused on the detection of large ferromagnetic targets (e.g., cars and airplanes). Moreover, the changes in the magnetic field intensity measured by the sensor have been used to obtain simple information, such as the target direction or whether or not the target has passed a certain point. This work aims at understanding the sensing limitations of magnetic sensors by considering small-scale targets moving within a 30 cm radius. Four heavy iron bars were used as test targets in this study. Target detection, identification and sequential localization were accomplished using the Minimum Euclidean Distance (MED) method, and the results show the accuracy of this method for the task. Different forms of sensor sensing region discretization were considered. Target identification was performed on the boundaries of the sensing regions. Different gateways were selected as entrance points for identification, and their results were compared with each other. An online ILS system was implemented and continuous movements of the ferromagnetic objects were monitored. The undesirable factors that affect the measurements are discussed, and techniques to reduce or eliminate faulty measurements are presented. A magnetic sensor orientation detector and a set/reset strap were designed and fabricated. The Orthogonal Matching Pursuit (OMP) algorithm is proposed for the multiple-sensor, multiple-target case in ILS systems as future work. This study can then be used to design energy-efficient, intelligent magnetic sensor networks.
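To make the Minimum Euclidean Distance (MED) idea concrete, here is a minimal Python sketch, assuming made-up 2-axis magnetometer readings and target labels: each known target gets a reference signature (the mean of its calibration readings) and a new reading is assigned to the nearest signature. It is not the thesis's implementation.

```python
import numpy as np

def fit_centroids(readings, labels):
    """Average the 2-axis magnetic readings of each known target to get one
    reference signature (centroid) per target class."""
    readings = np.asarray(readings, dtype=float)
    labels = np.asarray(labels)
    return {lab: readings[labels == lab].mean(axis=0) for lab in np.unique(labels)}

def med_classify(reading, centroids):
    """Minimum Euclidean Distance rule: pick the target whose reference
    signature is closest to the observed (Bx, By) reading."""
    reading = np.asarray(reading, dtype=float)
    return min(centroids, key=lambda lab: np.linalg.norm(reading - centroids[lab]))

if __name__ == "__main__":
    # Hypothetical calibration readings (Bx, By) for two iron-bar targets and the empty field.
    train = [(120, 30), (118, 33), (60, 90), (64, 86), (10, 15), (12, 13)]
    labels = ["bar_A", "bar_A", "bar_B", "bar_B", "none", "none"]
    centroids = fit_centroids(train, labels)
    print(med_classify((115, 35), centroids))   # expected: bar_A
```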
8

Meyer, Marcus. "Parameter identification problems for elastic large deformations - Part I: model and solution of the inverse problem." Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200901869.

Full text
Abstract:
In this paper we discuss the identification of parameter functions in material models for elastic large deformations. A model of the forward problem is given, where the displacement of a deformed material is found as the solution of a nonlinear PDE. Here, the crucial point is the definition of the 2nd Piola-Kirchhoff stress tensor by using several material laws including a number of material parameters. In the main part of the paper we consider the identification of such parameters from measured displacements, where the inverse problem is given as an optimal control problem. We introduce a solution of the identification problem with Lagrange and SQP methods. The presented algorithm is applied to linear elastic material with large deformations.
9

Lindsten, Fredrik. "Particle filters and Markov chains for learning of dynamical systems." Doctoral thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97692.

Full text
Abstract:
Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods. Particular emphasis is placed on the combination of SMC and MCMC in so-called particle MCMC algorithms. These algorithms rely on SMC for generating samples from the often highly autocorrelated state trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentist parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models. Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than what is provided otherwise. For the filtering problem, this can be done by using the well-known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so-called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies.
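For readers unfamiliar with the SMC building block that PGAS and the RBPF build on, the following Python sketch implements a plain bootstrap particle filter on the standard nonlinear benchmark state-space model; it is a generic illustration, not the PGAS sampler or any code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, t):          # state transition mean of the classic benchmark model
    return 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * t)

def h(x):             # observation mean
    return x ** 2 / 20.0

def simulate(T, q=1.0, r=0.1):
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        x[t] = f(x[t - 1], t) + np.sqrt(q) * rng.standard_normal()
        y[t] = h(x[t]) + np.sqrt(r) * rng.standard_normal()
    return x, y

def bootstrap_pf(y, N=500, q=1.0, r=0.1):
    """Bootstrap particle filter: propagate particles through the dynamics,
    weight them by the observation likelihood, and resample every step."""
    T = len(y)
    particles = np.zeros(N)
    est = np.zeros(T)
    for t in range(1, T):
        particles = f(particles, t) + np.sqrt(q) * rng.standard_normal(N)
        logw = -0.5 * (y[t] - h(particles)) ** 2 / r      # Gaussian log-likelihood (up to a constant)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est[t] = np.sum(w * particles)                    # filtered mean E[x_t | y_1:t]
        idx = rng.choice(N, size=N, p=w)                  # multinomial resampling
        particles = particles[idx]
    return est

if __name__ == "__main__":
    x, y = simulate(100)
    est = bootstrap_pf(y)
    print("RMSE of filtered mean:", np.sqrt(np.mean((x[1:] - est[1:]) ** 2)))
```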
10

Ru, Jifeng. "Adaptive estimation and detection techniques with applications." ScholarWorks@UNO, 2005. http://louisdl.louislibraries.org/u?/NOD,285.

Full text
Abstract:
Thesis (Ph. D.)--University of New Orleans, 2005.
Title from electronic submission form. "A dissertation ... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Engineering and Applied Science"--Dissertation t.p. Vita. Includes bibliographical references.
11

Raillon, Loic. "Experimental identification of physical thermal models for demand response and performance evaluation." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI039.

Full text
Abstract:
The European Union strategy for achieving the climate targets is to progressively increase the share of renewable energy in the energy mix and to use energy more efficiently from production to final consumption. It requires measuring the energy performance of buildings and associated systems, independently of weather conditions and user behavior, to provide efficient and adapted retrofitting solutions. It also requires knowing the energy demand in order to anticipate energy production and storage (demand response). The estimation of building energy demand and the estimation of the energy performance of buildings have a common scientific challenge: the experimental identification of a physical model of the building's intrinsic behavior. Grey-box models, determined from first principles, and black-box models, determined heuristically, can describe the same physical process. Relations between the physical and mathematical parameters exist if the black-box structure is chosen such that it matches the physical one. To find the best model representation, we propose to use Monte Carlo simulations for analyzing the propagation of errors in the different model transformations, and factor prioritization for ranking the parameters according to their influence. The obtained results show that identifying the parameters on the state-space representation is a better choice. Nonetheless, the physical information determined from the estimated parameters is reliable only if the model structure is invertible and the data are informative enough. We show how an identifiable model structure can be chosen, especially thanks to the profile likelihood. Experimental identification consists of three phases: model selection, identification and validation. These three phases are detailed on a real house experiment using frequentist and Bayesian frameworks. More specifically, we propose an efficient Bayesian calibration to estimate the parameter posterior distributions, which allows simulations that take all the uncertainties into account and is therefore suitable for model predictive control. We have also studied the capabilities of sequential Monte Carlo methods for estimating the states and parameters simultaneously. An adaptation of the recursive prediction error method into a sequential Monte Carlo framework is proposed and compared to a method from the literature. Sequential methods can be used to provide a first model fit and insights on the selected model structure while the data are collected. Afterwards, the first model fit can be refined if necessary by using iterative methods on the batch of data.
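As a loose illustration of grey-box identification of a building's thermal behaviour (not the models or estimators of the thesis), the sketch below fits the two parameters of a hypothetical first-order RC model to synthetic indoor-temperature data with a least-squares criterion; all values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

dt = 3600.0  # one-hour sampling [s]

def simulate_rc(params, T0, T_out, Q):
    """First-order grey-box model: C*dT/dt = (T_out - T)/R + Q (explicit Euler)."""
    R, C = params
    T = np.empty(len(T_out))
    T[0] = T0
    for k in range(len(T_out) - 1):
        T[k + 1] = T[k] + dt / C * ((T_out[k] - T[k]) / R + Q[k])
    return T

def residuals(params, T_meas, T_out, Q):
    return simulate_rc(params, T_meas[0], T_out, Q) - T_meas

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 200
    T_out = 5 + 5 * np.sin(2 * np.pi * np.arange(n) / 24)   # outdoor temperature [C]
    Q = 1000.0 * (rng.random(n) > 0.5)                      # heating power [W], on/off
    true = (0.01, 1.0e7)                                    # assumed "true" R [K/W], C [J/K]
    T_meas = simulate_rc(true, 20.0, T_out, Q) + 0.05 * rng.standard_normal(n)

    fit = least_squares(residuals, x0=(0.05, 5.0e6), args=(T_meas, T_out, Q),
                        bounds=([5e-3, 1e6], [1.0, 1e9]), x_scale='jac')
    print("estimated R, C:", fit.x)
```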
12

Jeník, Ivan. "Identifikace parametrů elasto-plastických modelů materiálu z experimentálních dat." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231979.

Full text
Abstract:
This master's thesis deals with the identification of the material flow curve from the record of a tensile test on a smooth cylindrical specimen. First, the necessary theoretical background is presented: basic terms of the incremental theory of plasticity, the tensile test procedure and the processing of its outputs. Furthermore, the possibilities for mathematically expressing the elastic-plastic constitutive law of the material, and thus the material flow curve itself, are described. The mechanism of ductile damage of the material is briefly explained as well. An overview of recent methods for flow curve identification is given, focused on cases where the stress distribution in the specimen is not uniaxial. These are either some kind of analytic correction of the basic formulas derived for the uniaxial stress state, or the application of mathematical optimization techniques combined with numerical simulation of the tensile test. A less common neural network method is also mentioned. For 8 given materials, flow curve identification was performed using different methods, namely analytic correction, optimization, sequential identification and a neural network. The algorithms of the last two methods were modified. Based on an assessment of the obtained results, recommendations are given on the field of application and the parameter settings of the individual algorithms. It turned out that an effective way to accurate and credible results is to combine different methods during the flow curve identification procedure.
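A minimal example of the optimization-based flow curve identification idea, under the simplifying assumptions of a uniaxial stress state and a Hollomon hardening law (the thesis treats the harder, non-uniaxial case with FE simulation and other methods); the data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def hollomon(plastic_strain, K, n):
    """Hollomon hardening law: true stress = K * (plastic strain)^n."""
    return K * plastic_strain ** n

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    eps = np.linspace(0.002, 0.2, 40)                 # plastic strain from a tensile test
    # Synthetic "measured" true stress [MPa] with noise, assuming K=600, n=0.2.
    sigma = hollomon(eps, 600.0, 0.2) + 5 * rng.standard_normal(eps.size)

    (K_hat, n_hat), _ = curve_fit(hollomon, eps, sigma, p0=(500.0, 0.1))
    print(f"identified flow curve: sigma = {K_hat:.1f} * eps^{n_hat:.3f}")
```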
13

Goliot, Alain. "Contribution à l'identification et à la conception des systèmes de commande de processus discontinus : modélisation à partie des réseaux de Pétri." Nancy 1, 1987. http://www.theses.fr/1987NAN10029.

Full text
14

Bonis, Ioannis. "Optimisation and control methodologies for large-scale and multi-scale systems." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/optimisation-and-control-methodologies-for-largescale-and-multiscale-systems(6c4a4f13-ebae-4d9d-95b7-cca754968d47).html.

Full text
Abstract:
Distributed parameter systems (DPS) comprise an important class of engineering systems ranging from "traditional" such as tubular reactors, to cutting edge processes such as nano-scale coatings. DPS have been studied extensively and significant advances have been noted, enabling their accurate simulation. To this end a variety of tools have been developed. However, extending these advances for systems design is not a trivial task. Rigorous design and operation policies entail systematic procedures for optimisation and control. These tasks are "upper-level" and utilize existing models and simulators. The higher the accuracy of the underlying models, the more the design procedure benefits. However, employing such models in the context of conventional algorithms may lead to inefficient formulations. The optimisation and control of DPS is a challenging task. These systems are typically discretised over a computational mesh, leading to large-scale problems. Handling the resulting large-scale systems may prove to be an intimidating task and requires special methodologies. Furthermore, it is often the case that the underlying physical phenomena span various temporal and spatial scales, thus complicating the analysis. Stiffness may also potentially be exhibited in the (nonlinear) models of such phenomena. The objective of this work is to design reliable and practical procedures for the optimisation and control of DPS. It has been observed in many systems of engineering interest that although they are described by infinite-dimensional Partial Differential Equations (PDEs) resulting in large discretisation problems, their behaviour has a finite number of significant components, as a result of their dissipative nature. This property has been exploited in various systematic model reduction techniques. Of key importance in this work is the identification of a low-dimensional dominant subspace for the system. This subspace is heuristically found to correspond to part of the eigenspectrum of the system and can therefore be identified efficiently using iterative matrix-free techniques. In this light, only low-dimensional Jacobians and Hessian matrices are involved in the formulation of the proposed algorithms, which are projections of the original matrices onto appropriate low-dimensional subspaces, computed efficiently with directional perturbations. The optimisation algorithm presented employs a 2-step projection scheme, firstly onto the dominant subspace of the system (corresponding to the right-most eigenvalues of the linearised system) and secondly onto the subspace of decision variables. This algorithm is inspired by reduced Hessian Sequential Quadratic Programming methods and therefore locates a local optimum of the nonlinear programming problem given by solving a sequence of reduced quadratic programming (QP) subproblems. This optimisation algorithm is appropriate for systems with a relatively small number of decision variables. Inequality constraints can be accommodated following a penalty-based strategy which aggregates all constraints using an appropriate function, or by employing a partial reduction technique in which only equality constraints are considered for the reduction and the inequalities are linearised and passed on to the QP subproblem.
The control algorithm presented is based on the online adaptive construction of low-order linear models used in the context of a linear Model Predictive Control (MPC) algorithm, in which the discrete-time state-space model is recomputed at every sampling time in a receding horizon fashion. Successive linearisation around the current state on the closed-loop trajectory is combined with model reduction, resulting in an efficient procedure for the computation of reduced linearised models, projected onto the dominant subspace of the system. In this case, this subspace corresponds to the eigenvalues of largest magnitude of the discretised dynamical system. Control actions are computed from low-order QP problems solved efficiently online. The optimisation and control algorithms presented may employ input/output simulators (such as commercial packages) extending their use to upper-level tasks. They are also suitable for systems governed by microscopic rules, the equations of which do not exist in closed form. Illustrative case studies are presented, based on tubular reactor models, which exhibit rich parametric behaviour.
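The following Python sketch illustrates, on an invented 1D reaction-diffusion residual, the matrix-free identification of a low-dimensional dominant subspace mentioned above: Jacobian-vector products are approximated by directional perturbations and the right-most eigenvalues are obtained with an iterative (ARPACK) solver. It is an illustration of the general technique, not the thesis's algorithms.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 50
h = 1.0 / (n + 1)
damkohler = 2.0

def residual(u):
    """Steady 1D reaction-diffusion residual F(u) = u'' + Da*u*(1-u), Dirichlet BCs u=0."""
    up = np.concatenate(([0.0], u, [0.0]))
    return (up[2:] - 2 * up[1:-1] + up[:-2]) / h ** 2 + damkohler * u * (1 - u)

u0 = np.zeros(n)                      # linearisation point (a steady state of the toy model)
F0 = residual(u0)

def jac_vec(v):
    """Directional perturbation: J(u0) @ v ~ (F(u0 + eps*v) - F(u0)) / eps."""
    eps = 1e-7 * max(1.0, np.linalg.norm(v))
    return (residual(u0 + eps * v) - F0) / eps

J = LinearOperator((n, n), matvec=jac_vec, dtype=float)

# Right-most eigenvalues of the linearised system span the dominant (slow) subspace.
vals = eigs(J, k=4, which='LR', tol=1e-8, maxiter=5000, return_eigenvectors=False)
print("right-most eigenvalues:", np.sort(vals.real)[::-1])
```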
15

Boudjenouia, Fouad. "Restauration d’images avec critères orientés qualité." Thesis, Orléans, 2017. http://www.theses.fr/2017ORLE2031/document.

Full text
Abstract:
This thesis concerns the blind restoration of images (formulated as an ill-posed and ill-conditioned inverse problem), considering in particular SIMO systems. First, a blind identification technique for such a system, in which the order of the channel is unknown (overestimated), is introduced. We first introduce a simplified, reduced-cost version (SCR) of the cross-relation (CR) method. Then a robust version, R-SCR, based on the search for a sparse solution minimizing the CR cost function, is proposed. Image restoration is then achieved by a new approach, inspired by 1D signal decoding techniques and extended here to the case of 2D images, based on an efficient tree search (the Stack algorithm). Several improvements to the Stack method are introduced in order to reduce its complexity and improve the restoration quality when the images are heavily noisy, using a regularization technique and an all-at-once optimization approach based on gradient descent, which refines the estimated image and improves convergence towards the optimal solution. Then, image quality measures are used as cost functions (integrated into the global criterion) in order to study their potential for improving restoration performance. In the context where the image of interest is corrupted by other interfering images, its restoration requires the use of blind source separation techniques. For this purpose, a comparative study of several separation techniques based on second-order decorrelation and sparsity is carried out.
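For context, the classical cross-relation (CR) idea that the SCR and R-SCR variants build on can be sketched in a few lines: for a noiseless two-channel SIMO system, conv(x1, h2) = conv(x2, h1), so the stacked convolution matrices have the channel pair in their null space. The sketch below uses invented random channels and assumes the channel length is known; it recovers the channels up to a scale factor and is not the thesis's reduced-cost or robust versions.

```python
import numpy as np
from scipy.linalg import convolution_matrix

rng = np.random.default_rng(3)

# Two-channel SIMO system driven by the same unknown source.
L = 4                                  # true channel length (assumed known here)
h1 = rng.standard_normal(L)
h2 = rng.standard_normal(L)
s = rng.standard_normal(300)           # unknown source signal
x1 = np.convolve(s, h1)                # noiseless channel outputs
x2 = np.convolve(s, h2)

# Cross relation: conv(x1, h2) - conv(x2, h1) = 0 for the true channels,
# i.e. [C(x1)  -C(x2)] @ [h2; h1] = 0 with C(.) a convolution matrix.
A = np.hstack([convolution_matrix(x1, L, mode='full'),
               -convolution_matrix(x2, L, mode='full')])
_, _, Vt = np.linalg.svd(A, full_matrices=False)
est = Vt[-1]                           # right singular vector of the smallest singular value

# The channels are only identified up to a common scale factor.
truth = np.concatenate([h2, h1])
scale = truth @ est / (est @ est)
print("max channel error:", np.max(np.abs(truth - scale * est)))
```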
16

Benammar, Riyadh. "Détection non-supervisée de motifs dans les partitions musicales manuscrites." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI112.

Full text
Abstract:
This thesis is part of the field of data mining applied to ancient handwritten music scores and aims at finding frequent melodic or rhythmic motifs, defined as repetitive note sequences with characteristic properties. There are a large number of possible variations of motifs: transpositions, inversions and so-called "mirror" motifs. These motifs allow musicologists to carry out an in-depth analysis of the works of a composer or a musical style. In the context of exploring large corpora where scores are only digitized and not transcribed, an automated search for motifs that satisfy targeted constraints becomes an essential tool for their study. To achieve the objective of detecting frequent motifs without prior knowledge, we started from images of digitized scores. After pre-processing steps on the image, we exploited and adapted a model for detecting and recognizing musical primitives (note heads, stems, etc.) from the family of Region-Proposal CNN (RPN) convolutional neural networks. We then developed a primitive encoding method to generate a sequence of notes without the complex task of fully transcribing the manuscript work. This sequence was then analyzed using the CSMA (Constraint String Mining Algorithm) approach, designed to detect the frequent motifs present in one or more sequences while taking into account constraints on their frequency and length, as well as the size and number of gaps allowed within the motifs. The handling of gaps was then studied as a way to work around recognition errors produced by the RPN network, thus avoiding the implementation of a post-correction system for transcription errors. The work was finally validated by the study of musical motifs for composer identification and classification applications.
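As a simplified, hedged stand-in for the motif mining step (not the CSMA algorithm, which also handles gaps and multiple constraints), the sketch below counts frequent fixed-length melodic motifs in a hypothetical pitch sequence in a transposition-invariant way by working on pitch intervals.

```python
from collections import Counter

def frequent_motifs(pitches, length=4, min_count=2):
    """Count melodic motifs of a fixed length in a transposition-invariant way
    by working on successive pitch intervals instead of absolute pitches.
    (Exact contiguous matching only; no gaps, unlike the CSMA approach.)"""
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    ngrams = [tuple(intervals[i:i + length]) for i in range(len(intervals) - length + 1)]
    counts = Counter(ngrams)
    return {m: c for m, c in counts.items() if c >= min_count}

if __name__ == "__main__":
    # Hypothetical MIDI pitch sequence containing a repeated (transposed) motif.
    melody = [60, 62, 64, 65, 67, 60, 55, 57, 59, 60, 62, 55, 72, 71, 69]
    for motif, count in frequent_motifs(melody, length=4).items():
        print(f"interval motif {motif} occurs {count} times")
```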
17

Taminato, Filipe Otsuka. "Aperfeiçoamento do algoritmo algébrico sequencial para a identificação de variações abruptas de impedância acústica via otimização." Universidade do Estado do Rio de Janeiro, 2014. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6834.

Full text
Abstract:
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro
In this work, a technique based on the propagation of acoustic waves and the Luus-Jaakola (LJ) stochastic optimization method are used to solve the inverse problem of damage identification in bars. The sequential algebraic algorithm (SAA) and the improved sequential algebraic algorithm (ISAA), which model the direct problem of acoustic wave propagation in a bar, are presented. The ISAA consists of modifications introduced into the SAA; its use solves, with advantages, the identification problem when the impedance variations are abrupt. Identification results are obtained with SAA-LJ and ISAA-LJ for five damage scenarios, three of them with smooth generalized acoustic impedance profiles and the other two with abrupt ones. In addition, to simulate signals obtained experimentally, various noise levels were introduced. The results show that the use of ISAA-LJ in solving damage identification problems in bars is quite promising, outperforming SAA-LJ especially when the impedance profiles are abrupt.
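Since the Luus-Jaakola (LJ) method is central here, a generic LJ random search is sketched below on a made-up two-parameter misfit function standing in for the discrepancy between measured and simulated wave responses; the coupling with the SAA/ISAA forward models is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def luus_jaakola(objective, lower, upper, n_outer=50, n_inner=100, shrink=0.95):
    """Luus-Jaakola random search: sample candidates uniformly in a box around
    the incumbent and contract the box after each outer iteration."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x_best = rng.uniform(lower, upper)
    f_best = objective(x_best)
    radius = (upper - lower) / 2.0
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = np.clip(x_best + rng.uniform(-radius, radius), lower, upper)
            f = objective(cand)
            if f < f_best:
                x_best, f_best = cand, f
        radius *= shrink                      # contract the search region
    return x_best, f_best

if __name__ == "__main__":
    target = np.array([0.7, 0.3])             # hypothetical damage position and severity
    # Stand-in for the misfit between measured and simulated wave responses.
    objective = lambda p: np.sum((p - target) ** 2)
    x, f = luus_jaakola(objective, lower=[0.0, 0.0], upper=[1.0, 1.0])
    print("identified parameters:", x, "misfit:", f)
```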
18

El, Maadani Khalid. "Identification de systèmes séquentiels structurés : Application à la validation du test." Toulouse, INSA, 1993. http://www.theses.fr/1993ISAT0003.

Full text
Abstract:
The work presented in this thesis concerns the evaluation and validation of tests for deterministic sequential systems at the behavioural level. The adopted evaluation criterion is identification. Identifying a sequential system is generally an automaton inference problem in the fields of sequential system synthesis, regular inference and sequential learning, and a sequence generation problem in the field of test generation. Evaluating a test sequence by identification is treated here as an inference problem that consists in determining the set of machines that accept this sequence but are incompatible with the model from which it was derived. The number of distinct machines obtained gives a relative measure of the coverage of the sequence with respect to the model. Two different approaches to the evaluation and validation of tests for sequential systems are proposed. The first, called black box, corresponds to the case where the analysed system is described by a functional model (finite-state machine). The second, called grey box, addresses systems described by a structural-functional model (a set of interconnected machines); it exploits the structural knowledge of the system and reduces the overall algorithmic complexity of the process. This approach consists in evaluating the sequence successively with respect to each machine of the system, which is assumed unknown within a known environment. The controllability and observability constraints induced by the environment on the external behaviour of the considered machine must then be taken into account. A software prototype named IDA was developed in Prolog on a Sun4 workstation to validate these approaches.
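The elementary building block of this kind of evaluation is the check of whether a candidate machine accepts (reproduces) an observed input/output test sequence; a minimal Python sketch with an invented two-state Mealy machine is given below. The enumeration of incompatible machines that accept the sequence, which gives the coverage measure, is not reproduced here.

```python
def accepts(machine, initial_state, test_sequence):
    """Check whether a deterministic Mealy machine reproduces an input/output
    test sequence. `machine` maps (state, input) -> (next_state, output)."""
    state = initial_state
    for symbol_in, symbol_out in test_sequence:
        if (state, symbol_in) not in machine:
            return False
        state, produced = machine[(state, symbol_in)]
        if produced != symbol_out:
            return False
    return True

if __name__ == "__main__":
    # Hypothetical 2-state reference model of a toggle circuit.
    model = {
        ('s0', 't'): ('s1', 1),
        ('s1', 't'): ('s0', 0),
        ('s0', 'r'): ('s0', 0),
        ('s1', 'r'): ('s0', 0),
    }
    test = [('t', 1), ('t', 0), ('r', 0), ('t', 1)]
    print("test sequence accepted by the model:", accepts(model, 's0', test))
    # A coverage measure in the spirit of the thesis would count how many
    # *other* machines also accept this sequence yet differ from the model.
```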
19

Chauveau, Aurélie. "Identification des mutations à visée diagnostique et pronostique dans les néoplasies myéloprolifératives et impact sur l'épissage alternatif Sequential analysis of 18 genes in polycythemia vera and essential thrombocythemia reveals an association between mutational status and clinical outcome, in Genes chromosomes & cancer 56(5), May 2017 Benefits and pitfalls of pegylated interferon-α2a therapy in patients with myeloproliferative neoplasm-associated myelofibrosis: a French Intergroup of Myeloproliferative neoplasms (FIM) study, in Haematologica 103, March 2018." Thesis, Brest, 2019. http://www.theses.fr/2019BRES0042.

Full text
Abstract:
Polycythemia vera (PV), essential thrombocythemia (ET) and primary myelofibrosis (PMF) form a group of Philadelphia-negative (non BCR-ABL1) myeloproliferative neoplasms (MPN). These diseases share, in varying proportions, a common mutation, JAK2 V617F. The mutated JAK2 protein has constitutive tyrosine kinase activity, implicated in the pathophysiology of MPN. This mutation alone does not explain the phenotypic heterogeneity within MPN. High-throughput sequencing techniques have helped in understanding the pathophysiology. This work aimed to identify additional mutations, in relation to the risk of disease progression, in two patient cohorts followed over the long term: the first consisted of patients in the chronic phase (JAK2 V617F ET and PV), the second of patients with myelofibrosis treated with interferon. Like other recent studies, we have shown that the number of mutations and the presence of additional mutations are associated with disease progression and even with response to treatment. Some of the identified mutations could influence splicing. The second part of this work therefore studied the putative impact of the JAK2 V617F mutation on alternative splicing (AS), and analyzed global AS profiles in ET. JAK2 exon 14 skipping has been described in MPN patients with or without the JAK2 V617F mutation. This mutation is predicted to alter the binding site of the splice-regulating protein SRSF6. We observed that exon 14 skipping is an uncommon event in patients, modulated in part by SR protein expression. In addition, our transcriptome-wide analysis showed great heterogeneity between patients with respect to both gene expression and splicing, which prevented us from identifying any characteristic profile. These results underscore the importance of identifying additional mutations at diagnosis and during follow-up. We were also able to uncover some alternative transcripts associated with the presence of these mutations. The functional role of these variants remains to be defined.
20

Papež, Milan. "Monte Carlo identifikační strategie pro stavové modely." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400416.

Full text
Abstract:
State-space models are exceptionally useful in many engineering and scientific fields. Their attractiveness stems mainly from the fact that they provide a general tool for describing a wide range of real-world dynamical systems. However, due to their generality, the associated parameter and state inference tasks are intractable in most practical situations. This dissertation considers two particularly important classes of nonlinear and non-Gaussian state-space models: conditionally conjugate state-space models and Markov-switching nonlinear models. The main feature of these models is that, despite their intractability, they contain a tractable substructure. The intractable part requires the use of approximation techniques, and Monte Carlo computational methods represent a theoretically and practically well-established tool for addressing this problem. The advantage of these models lies in the fact that the tractable part can be exploited to increase the efficiency of Monte Carlo methods by resorting to Rao-Blackwellization. Specifically, this doctoral thesis proposes two Rao-Blackwellized particle filters for identifying either static or time-varying parameters in conditionally conjugate state-space models. In addition, the thesis adopts the recent particle Markov chain Monte Carlo methodology to design Rao-Blackwellized particle Gibbs kernels for state smoothing in Markov-switching nonlinear models. These kernels are then used for maximum likelihood parameter inference in the considered models. The resulting experiments demonstrate that the proposed algorithms outperform related techniques in terms of estimation accuracy and computational time.
21

Mony, Hari 1977. "Sequential redundancy identification using transformation-based verification." Thesis, 2008. http://hdl.handle.net/2152/3890.

Full text
Abstract:
The design of complex digital hardware is challenging and error-prone. With short design cycles and increasing complexity of designs, functional verification has become the most expensive and time-consuming aspect of the digital design process. Sequential equivalence checking (SEC) has been proposed as a verification framework to perform a true sequential check of input/output equivalence between two designs. SEC provides several benefits that can enable a faster and more efficient way to design and verify large and complex digital hardware. It can be used to prove that micro-architectural optimizations needed for design closure preserve design functionality, and thus avoid the costly and incomplete functional verification regression traditionally used for such purposes. Moreover, SEC can be used to validate sequential synthesis transformations and thereby enable design and verification at a higher-level of abstraction. Use of sequential synthesis leads to shorter design cycles and can result in a significant improvement in design quality. In this dissertation, we study the problem of sequential redundancy identification to enable robust sequential equivalence checking solutions. In particular, we focus on the use of a transformation-based verification framework to synergistically leverage various transformations to simplify and decompose large problems which arise during sequential redundancy identification to enable an efficient and highly scalable SEC solution. We make five main contributions in this dissertation. First, we introduce a novel sequential redundancy identification framework that dramatically increases the scalability of SEC. Second, we propose the use of a flexible and synergistic set of transformation and verification algorithms for sequential redundancy identification. This more general approach enables greater speed and scalability and identifies a significantly greater degree of redundancy than previous approaches. Third, we introduce the theory and practice of transformation-based verification in the presence of constraints. Constraints are pervasively used in verification testbenches to specify environmental assumptions to prevent illegal input scenarios. Fourth, we develop the theoretical framework with corresponding efficient implementation for optimal sequential redundancy identification in the presence of constraints. Fifth, we address the scalability of transformation-based verification by proposing two new structural abstraction techniques. We also study the synergies between various transformation algorithms and propose new strategies for using these transformations to enable scalable sequential redundancy identification.
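As a rough illustration of how redundancy *candidates* are typically found before a prover confirms them (a generic technique, not the transformation-based framework of this dissertation), the sketch below simulates an invented netlist on random vectors and groups signals with identical simulation signatures.

```python
import random

random.seed(0)

# Hypothetical gate-level netlist: name -> (operation, fan-in names).
NETLIST = {
    'g1': ('AND', ('a', 'b')),
    'g2': ('NOT', ('a',)),
    'g3': ('OR',  ('g1', 'c')),
    'g4': ('AND', ('b', 'a')),          # functionally identical to g1
    'g5': ('OR',  ('g4', 'c')),         # functionally identical to g3
}
INPUTS = ('a', 'b', 'c')
OPS = {'AND': lambda x: all(x), 'OR': lambda x: any(x), 'NOT': lambda x: not x[0]}

def simulate(assignment):
    """Evaluate every net for one input assignment (nets listed in topological order)."""
    values = dict(assignment)
    for net, (op, fanin) in NETLIST.items():
        values[net] = int(OPS[op]([values[f] for f in fanin]))
    return values

def redundancy_candidates(n_vectors=64):
    """Group signals whose simulation signatures agree on every random vector.
    These are only *candidates*: a SAT/induction engine must still prove them."""
    signatures = {net: [] for net in list(INPUTS) + list(NETLIST)}
    for _ in range(n_vectors):
        assignment = {i: random.randint(0, 1) for i in INPUTS}
        for net, val in simulate(assignment).items():
            signatures[net].append(val)
    classes = {}
    for net, sig in signatures.items():
        classes.setdefault(tuple(sig), []).append(net)
    return [nets for nets in classes.values() if len(nets) > 1]

if __name__ == "__main__":
    print("candidate equivalence classes:", redundancy_candidates())
```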
22

Wu, Guo Wei (伍國維). "Structural System Identification by Sequential Quadratic Programming." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/34201510423579724266.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Mechanical Engineering
85
The purpose of this study is to propose a general method to systematically correlate finite element analysis data and modal test data. Mathematically, the structural system identification problem is identical to the optimum design problem in which the difference between analysis data and test data is used as the objective function. When all constraints are satisfied and the objective function becomes minimal, we obtain a new finite element analysis model that is similar to the test model. Sequential Quadratic Programming (SQP) is adopted to solve this problem. The differences between analysis and test data in natural frequency, mode shape and design parameters are considered in the objective function. The natural frequencies, mode shapes, design parameters and structural mass are also controlled through constraints to limit their allowances. With the help of SQP and the improved move limit, the design analyst can easily obtain a more accurate finite element model, and the saving in computer time is significant. Sensitivity analysis is adopted to establish the reanalysis model and determine the search direction; a better search direction helps to solve the optimization problem. A few numerical examples are solved to demonstrate the capability of the above method.
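A hedged, minimal analogue of the SQP-based model updating described above: SciPy's SLSQP tunes the two spring stiffnesses of a hypothetical 2-DOF spring-mass model, within +/-50% bounds playing the role of the design-parameter constraints, so that the analytical natural frequencies match invented "test" frequencies. It is not the method or code of the thesis.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

M = np.diag([1.0, 1.5])                       # mass matrix [kg], assumed known

def natural_frequencies(k):
    """Undamped natural frequencies [Hz] of a 2-DOF chain with springs k1, k2."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    eigvals = eigh(K, M, eigvals_only=True)    # generalized eigenproblem K v = w^2 M v
    return np.sqrt(eigvals) / (2 * np.pi)

f_test = np.array([0.919, 3.183])              # hypothetical "measured" modal-test frequencies [Hz]

def objective(k):
    """Squared mismatch between analytical and test natural frequencies."""
    return np.sum((natural_frequencies(k) - f_test) ** 2)

# Keep the updated stiffnesses within +/-50 % of the initial FE values,
# playing the role of the design-parameter constraints in the thesis.
k0 = np.array([150.0, 250.0])
bounds = [(0.5 * ki, 1.5 * ki) for ki in k0]

result = minimize(objective, k0, method='SLSQP', bounds=bounds)
print("updated stiffnesses [N/m]:", result.x)
print("frequencies after updating [Hz]:", natural_frequencies(result.x))
```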
23

Chao, Cheng-Tsung (趙承宗). "Efficient Cheater Identification in Sequential Secret Sharing Schemes." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/48323577780858317542.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
86
A secret sharing scheme is a method of hiding a secret by a dealer among several shadows such that some participants can reconstruct the secret. In this thesis we propose two efficient secret sharing schemes that can identify cheaters faster than previous works. Our schemes are based on the sequential model and greatly reduce the amount of data kept by the participants. In our schemes, the dealer only has to send O(n) data instead of O((n^2)m) for the previous work in the literature, where n is the number of participants and m is the number of rounds of the sequential model. Moreover, only one shadow is kept by each participant in our schemes without the need of any checking parameters, while O(m) shadows and O(nm) checking parameters are needed for each participant in the previous schemes.
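For background, a bare-bones Shamir (t, n) secret sharing sketch is given below to illustrate the dealer/shadow/reconstruction vocabulary; the sequential model and the efficient cheater-identification machinery proposed in the thesis are not reproduced, and the final lines only show that an altered shadow silently corrupts the reconstruction, which is the problem cheater identification addresses.

```python
import random

P = 2 ** 127 - 1                    # a large prime modulus (Mersenne prime)
random.seed(5)

def deal_shadows(secret, t, n):
    """Dealer: hide `secret` in a random degree-(t-1) polynomial and hand out
    n shadows (i, f(i)); any t of them reconstruct the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shadows):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shadows):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shadows):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shadows = deal_shadows(secret=123456789, t=3, n=5)
    print(reconstruct(shadows[:3]))            # any 3 shadows recover 123456789
    bad = [shadows[0], (shadows[1][0], shadows[1][1] ^ 1), shadows[2]]
    print(reconstruct(bad) == 123456789)       # False: a cheater went undetected
```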
24

Su, Wan-Chih (蘇莞之). "Optimization of Glycoprotein Digestion and Sequential Enrichment for Identification of Glycoproteome." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/67259658316761563688.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Chemistry
101
The utility of mass spectrometry (MS) for the identification of post-translational modifications of proteins has boosted glycoproteomics research in recent years. However, site-specific delineation of glycan occupancy and structure is still a challenging task due to the diversity of glycan structures and the low abundance and low ionization response of glycopeptides in tryptic peptide mixtures. Several procedures in the shotgun-based glycoproteomic analysis platform, such as glycoprotein digestion and enrichment methods, are crucial for subsequent mass spectrometry-based glycoproteomic identification. In this study, we attempted to optimize digestion and enrichment methods to improve glycoprotein identification using six standard glycoproteins, namely horseradish peroxidase (HRP), bovine fetuin-B (Fet), chicken ovalbumin (OVA), bovine asialofetuin (FetA), bovine lactotransferrin (LF) and human transferrin (TF), as well as HeLa cell membrane proteins. In the first part of the thesis, we used the six single standard glycoproteins to evaluate glycoprotein denaturing methods and pre-digestion conditions. Four protein denaturing methods, namely heating (95°C), alcohol (50% 2,2,2-trifluoroethanol, TFE), a neutral chaotrope (6 M urea) and salt (6 M guanidine-HCl) disruption, were applied to the six standard glycoproteins. Among the four methods, denaturation with 50% TFE provided the most identified glycopeptides (1.2 to 1.4-fold more) and the highest glycopeptide signals (1.5 to 32.4-fold higher) in the mass spectra. Of the pre-digestion conditions tested, various concentrations of ([DTT]:[IAM]:[2nd DTT], in mM) in the ratios 5:25:25 (condition A), 5:45:45 (condition B), 10:25:25 (condition C) and 10:45:45 (condition D) were compared on the six standard glycoproteins. Condition A was shown to be the most appropriate pre-digestion condition for glycopeptide identification. Hence, for the standard glycoprotein mixtures, the 50% TFE denaturing method followed by a pre-digestion condition consisting of 5 mM DTT, 25 mM IAM and 25 mM 2nd DTT was employed before glycopeptide enrichment. In the second part of the thesis, we aimed to develop a sequential glycopeptide enrichment method by combining non-specific HILIC (hydrophilic interaction liquid chromatography) purification with sialic-acid-targeted TiO2 StageTips. For the six standard glycoprotein mixture, sequential TiO2-HILIC enrichment yielded more glycopeptides (47) than the single HILIC (39 glycopeptides) and TiO2 (20 glycopeptides) enrichment methods. Finally, the efficacy of the sequential enrichment approaches was evaluated using HeLa cell membrane proteins. Sequential HILIC-TiO2 (373.3 ± 37.9) and TiO2-HILIC (398.7 ± 19.5) both enriched more glycopeptides than single HILIC (302.7 ± 41.5) and TiO2 (309.7 ± 29.2) in HeLa membrane proteins. With TiO2-HILIC, the coverage of the N-glycoproteome analysis was increased 1.2 to 1.4-fold, thereby increasing the efficiency of enriching intact glycopeptides from cells. Thus, in both standard glycoprotein mixtures and HeLa cell membrane proteins, glycoproteome identification could be expanded by our sequential glycopeptide enrichment approach.
In conclusion, this study provides an integrated protocol, with optimized denaturation and pre-digestion conditions and a sequential glycopeptide enrichment method, to increase glycopeptide identification coverage while preserving intact glycan and peptide structures, which are valuable information for glycoproteome research. In the future, it is expected that this rational pipeline can be utilized for in-depth analysis of the glycoproteome.
25

Κοψαυτόπουλος, Φώτης. "Advanced functional and sequential statistical time series methods for damage diagnosis in mechanical structures." Thesis, 2012. http://hdl.handle.net/10889/5828.

Full text
Abstract:
The past 30 years have witnessed major developments in vibration based damage detection and identification, also collectively referred to as damage diagnosis. Moreover, the past 10 years have seen a rapid increase in the amount of research related to Structural Health Monitoring (SHM) as quantified by the significant escalation in papers published on this subject. Thus, the increased interest in this engineering field and its associated potential constitute the main motive for this thesis. The goal of the thesis is the development and introduction of novel advanced functional and sequential statistical time series methods for vibration based damage diagnosis and SHM. After the introduction of the first chapter, Chapter II provides an experimental assessment and comparison of vibration based statistical time series methods for Structural Health Monitoring (SHM) via their application on a lightweight aluminum truss structure and a laboratory scale aircraft skeleton structure. A concise overview of the main non-parametric and parametric methods is presented, including response-only and excitation-response schemes. Damage detection and identification are based on univariate (scalar) versions of the methods, while both scalar (univariate) and vector (multivariate) schemes are considered. The methods' effectiveness for both damage detection and identification is assessed via various test cases corresponding to different damage scenarios, multiple experiments and various sensor locations on the considered structures. The results of the chapter confirm the high potential and effectiveness of vibration based statistical time series methods for SHM. Chapter III investigates the identification of stochastic systems under multiple operating conditions via Vector-dependent Functionally Pooled (VFP) models. In many applications a system operates under a variety of operating conditions (for instance operating temperature, humidity, damage location, damage magnitude and so on) which affect its dynamics, with each condition kept constant for a single commission cycle. Typical examples include mechanical structures operating under different environmental conditions, aircrafts under different flight conditions (altitude, velocity etc.), structures under different structural health states (various damage locations and magnitudes). In this way, damage location and magnitude may be considered as parameters that affect the operating conditions and as a result the structural dynamics. This chapter's work is based on the novel Functional Pooling (FP) framework, which has been recently introduced by the Stochastic Mechanical Systems \& Automation (SMSA) group of the Mechanical Engineering and Aeronautics Department of University of Patras. The main characteristic of Functionally Pooled (FP) models is that their model parameters and innovations sequence depend functionally on the operating parameters, and are projected on appropriate functional subspaces spanned by mutually independent basis functions. Thus, the fourth chapter of the thesis addresses the problem of identifying a globally valid and parsimonious stochastic system model based on input-output data records obtained under a sample of operating conditions characterized by more than one parameters. Hence, models that include a vector characterization of the operating condition are postulated. 
The problem is tackled within the novel FP framework that postulates proper global discrete-time linear time series models of the ARX and ARMAX types, data pooling techniques, and statistical parameter estimation. Corresponding Vector-dependent Functionally Pooled (VFP) ARX and ARMAX models are postulated, and proper estimators of the Least Squares (LS), Maximum Likelihood (ML), and Prediction Error (PE) types are developed. Model structure estimation is achieved via customary criteria (Bayesian Information Criterion) and a novel Genetic Algorithm (GA) based procedure. The strong consistency of the VFP-ARX least squares and maximum likelihood estimators is established, while the effectiveness of the complete estimation and identification method is demonstrated via two Monte Carlo studies. Based on the postulated VFP parametrization a vibration based statistical time series method that is capable of effective damage detection, precise localization, and magnitude estimation within a unified stochastic framework is introduced in Chapter IV. The method constitutes an important generalization of the recently introduced Functional Model Based Method (FMBM) in that it allows, for the first time in the statistical time series methods context, for complete and precise damage localization on continuous structural topologies. More precisely, the proposed method can accurately localize damage anywhere on properly defined continuous topologies on the structure, instead of pre-defined specific locations. Estimator uncertainties are taken into account, and uncertainty ellipsoids are provided for the damage location and magnitude. To achieve its goal, the method is based on the extended class of Vector-dependent Functionally Pooled (VFP) models, which are characterized by parameters that depend on both damage magnitude and location, as well as on proper statistical estimation and decision making schemes. The method is validated and its effectiveness is experimentally assessed via its application to damage detection, precise localization, and magnitude estimation on a prototype GARTEUR-type laboratory scale aircraft skeleton structure. The damage scenarios considered consist of varying size small masses attached to various continuous topologies on the structure. The method is shown to achieve effective damage detection, precise localization, and magnitude estimation based on even a single pair of measured excitation-response signals. Chapter V presents the introduction and experimental assessment of a sequential statistical time series method for vibration based SHM capable of achieving effective, robust and early damage detection, identification and quantification under uncertainties. The method is based on a combination of binary and multihypothesis versions of the statistically optimal Sequential Probability Ratio Test (SPRT), which employs the residual sequences obtained through a stochastic time series model of the healthy structure. In this work the full list of properties and capabilities of the SPRT are for the first time presented and explored in the context of vibration based damage detection, identification and quantification. The method is shown to achieve effective and robust damage detection, identification and quantification based on predetermined statistical hypothesis sampling plans, which are both analytically and experimentally compared and assessed. 
The method's performance is determined a priori via the analytical expressions of the Operating Characteristic (OC) and Average Sample Number (ASN) functions in combination with baseline data records, while on average it requires a minimum number of samples to reach a decision compared to the corresponding most powerful Fixed Sample Size (FSS) tests. The effectiveness of the proposed method is validated and experimentally assessed via its application to a lightweight aluminum truss structure, while the results obtained for three distinct vibration measurement positions demonstrate the method's ability to operate based on even a single pair of measured excitation-response signals. Finally, Chapter VI contains the concluding remarks and future perspectives of the thesis.
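For reference, the a priori design of such sequential sampling plans typically rests on Wald's classical approximations for the SPRT thresholds and for the OC and ASN functions; the expressions below are standard textbook results quoted for orientation, not formulas reproduced from the thesis:

\[
A \approx \frac{1-\beta}{\alpha}, \qquad B \approx \frac{\beta}{1-\alpha},
\]
\[
\mathrm{OC}(\theta) \approx \frac{A^{h(\theta)}-1}{A^{h(\theta)}-B^{h(\theta)}}, \qquad
\mathrm{ASN}(\theta)=\mathbb{E}_\theta[N] \approx \frac{\mathrm{OC}(\theta)\,\ln B + \bigl(1-\mathrm{OC}(\theta)\bigr)\ln A}{\mathbb{E}_\theta[z]},
\]

where \(\alpha\) and \(\beta\) are the design error probabilities, \(z\) is the per-sample log-likelihood-ratio increment, and \(h(\theta)\) is the non-zero root of \(\mathbb{E}_\theta[e^{hz}]=1\). Here \(\mathrm{OC}(\theta)\) is the probability of accepting the null (healthy) hypothesis when \(\theta\) is the true parameter, so both curves can be evaluated before any testing takes place.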
Over the past 30 years significant development has taken place in the field of damage detection and identification, which is also collectively referred to as damage diagnosis. In addition, significant progress has been made during the last decade in the area of structural health (integrity) monitoring. The goal of this thesis is the development of advanced functional and sequential time series methods for vibration based damage diagnosis and structural health monitoring. First, an experimental assessment and critical comparison of the most important statistical time series methods is carried out on the basis of their application to prototype laboratory structures. Non-parametric and parametric methods based on vibration excitation and response signals of the structures are applied. Subsequently, stochastic functional models are developed for the stochastic identification of structures under multiple operating conditions. These models are used to represent structures in various damage states (damage location and magnitude), so that a global model can be obtained over all operating conditions. They form the basis on which a functional method is developed that is capable of treating the problems of damage detection, localization and magnitude estimation in structures in a complete and unified manner. The experimental assessment of the method is carried out through multiple experiments on a laboratory aircraft skeleton structure. In the final chapter of the thesis a novel sequential statistical method for structural health monitoring is proposed. The method proves effective under operational uncertainties, as it employs sequential statistical tests of multiple hypotheses. The assessment of the method is based on multiple laboratory experiments, while the method is shown to be capable of operating with a single pair of vibration excitation-response signals.
APA, Harvard, Vancouver, ISO, and other styles
26

Gulule, Ellasy Priscilla. "Comparative analysis of ordinary kriging and sequential Gaussian simulation for recoverable reserve estimation at Kayelekera Mine." Thesis, 2016. http://hdl.handle.net/10539/21049.

Full text
Abstract:
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 2016
It is of great importance to minimize the misclassification of ore and waste during grade control in a mine operation. This research report compares two recoverable reserve estimation techniques for ore classification at Kayelekera Uranium Mine. The research was performed on two data sets taken from the pit, with different grade distributions. The two techniques evaluated were Sequential Gaussian Simulation (SGS) and Ordinary Kriging (OK). The estimates from these techniques were compared to investigate which method gives the more accurate estimates. Based on the results from the profit-and-loss analysis and the grade-tonnage curves, the difference between the techniques is very small. It was concluded that the similarity in the estimates arose because the Sequential Gaussian Simulation estimates were obtained as the average of 100 simulations, which turned out to be similar to the Ordinary Kriging estimates. Additionally, the similarity in the estimates was due to the closely spaced intervals of the blast hole/sample data used. Whilst OK generally produced acceptable results like SGS, the local variability of grades was not adequately reproduced by the technique. Consequently, if variability is not much of a concern, for example if large blocks were to be mined, then either technique can be used and will yield similar results.
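As a rough illustration of why averaged simulations and kriging can agree, the sketch below computes an ordinary kriging estimate at one location and compares it with the mean of many draws from the kriged local distribution, a crude stand-in for an E-type estimate. The exponential variogram, the sample coordinates and the grades are invented for illustration and are not Kayelekera data.

```python
import numpy as np

# Minimal sketch contrasting an Ordinary Kriging (OK) estimate with the average
# ("E-type") of simulated values at one block location.  Variogram model,
# sample locations and grades are illustrative assumptions only.

def cov(h, sill=1.0, a=50.0):
    """Covariance from an exponential variogram model: C(h) = sill * exp(-3h/a)."""
    return sill * np.exp(-3.0 * h / a)

def ordinary_kriging(xy, z, x0):
    """OK estimate and variance at location x0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # sample-sample distances
    d0 = np.linalg.norm(xy - x0, axis=-1)                          # sample-target distances
    K = np.ones((n + 1, n + 1))                                    # bordered kriging system
    K[:n, :n] = cov(d)                                             # (last row/col: Lagrange)
    K[n, n] = 0.0
    k = np.append(cov(d0), 1.0)
    w = np.linalg.solve(K, k)
    est = w[:n] @ z
    var = cov(0.0) - w @ k                                         # OK variance (incl. multiplier)
    return est, var

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(30, 2))             # hypothetical blast-hole locations (m)
z = rng.lognormal(mean=-1.0, sigma=0.5, size=30)   # hypothetical grades
x0 = np.array([50.0, 50.0])

ok_est, ok_var = ordinary_kriging(xy, z, x0)

# Crude stand-in for SGS: draw many values from the kriged local (here Gaussian)
# distribution and average them; this E-type mean sits close to the kriged mean.
realizations = rng.normal(ok_est, np.sqrt(max(ok_var, 0.0)), size=100)
print(round(ok_est, 3), round(realizations.mean(), 3))
```

Averaging realizations smooths away the very local variability that individual simulations are designed to reproduce, which mirrors the report's observation that the two approaches converge once the simulations are averaged.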
M T 2016
APA, Harvard, Vancouver, ISO, and other styles
27

Chung, Hsin-Line, and 鍾欣霖. "Building Integrated and Hybrid Prediction Systems for Computational Identification of Protein-Protein Interaction Hot Spot Residues by Using Motif Recognition, Sequential and Spatial Properties." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/833svr.

Full text
Abstract:
Master's degree
National Central University (國立中央大學)
Department of Computer Science and Information Engineering (資訊工程學系)
Academic year 103 (ROC calendar)
In a protein–protein interface, a small subset of residues, called the "hot spot", contributes the majority of the binding free energy. Identifying and understanding hot spots and their mechanisms has significant implications for bioinformatics and practical applications. Recently, many different approaches have been used to predict hot spot residues. We present an effective hot spot residue prediction system, HotSpotFinder, which combines motif recognition with sequential and spatial features and integrates the feature set via a two-step feature selection method. The system's two predictors, HotSpotFinder-Integrated and HotSpotFinder-Hybrid, are used to predict PPI hot spot residues. A total of 38 optimal integrated features and a novel system design concept are provided; compared with other computational hot spot prediction models, HotSpotFinder offers significant performance improvements in terms of precision, MCC, F1 score and sensitivity, even on an independent dataset.
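Since the comparison hinges on precision, MCC, F1 score and sensitivity, the following short sketch shows how these standard measures are computed from a binary confusion matrix. The definitions are the usual textbook ones and the counts are made up; this is not code from HotSpotFinder.

```python
import math

# Standard binary-classification metrics used to compare hot spot predictors.
# The counts below are hypothetical examples.

def metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)                  # recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return {"precision": precision, "sensitivity": sensitivity, "F1": f1, "MCC": mcc}

print(metrics(tp=42, fp=13, tn=101, fn=18))       # hypothetical hot spot predictions
```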
APA, Harvard, Vancouver, ISO, and other styles
28

Ghosh, Shuvajyoti. "Novel Sub-Optimal And Particle Filtering Strategies For Identification Of Nonlinear Structural Dynamical Systems." Thesis, 2008. http://hdl.handle.net/2005/622.

Full text
Abstract:
Development of dynamic state estimation techniques and their applications to problems of identification in structural engineering have been taken up. The thrust of the study has been the identification of structural systems that exhibit nonlinear behavior, mainly in the form of constitutive and geometric nonlinearities. Methods encompassing both linearization based strategies and those involving nonlinear filtering have been explored. The applications of derivative-free locally transversal linearization (LTL) and multi-step transversal linearization (MTrL) schemes for developing newer forms of the extended Kalman filter (EKF) algorithm have been explored. Apart from the inherent advantage of these methods in avoiding gradient calculations, the study also demonstrates their superior numerical accuracy and considerably lower sensitivity to the choice of step sizes. The range of numerical illustrations covers SDOF as well as MDOF oscillators with time-invariant parameters and those with discontinuous temporal variations. A new form of the sequential importance sampling (SIS) filter is developed which extends the scope of existing SIS filters to cover nonlinear measurement equations and more general forms of noise, involving multiplicative and/or Gaussian/non-Gaussian noises. The formulation of this method involves Ito-Taylor expansions of the nonlinear functions in the measurement equation and the development of the ideal importance sampling density (ispdf) while accounting for the non-Gaussian terms appearing in the governing equation. Numerical illustrations on parameter identification of a few nonlinear oscillators and a geometrically nonlinear Euler–Bernoulli beam reveal a remarkably improved performance of the proposed methods over one of the best known algorithms, i.e. the unscented particle filter. The study demonstrates the applicability of a diverse range of mathematical tools, including Magnus' functional expansions, the theory of SDEs, Ito-Taylor expansions, and the simulation and characterization of non-Gaussian random variables, to the problem of nonlinear structural system identification.
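To illustrate the class of filters involved, here is a minimal bootstrap-type sequential importance sampling (with resampling) sketch that identifies the stiffness of a cubic oscillator from a nonlinear, noisy measurement. The oscillator, noise levels and measurement map are assumed for illustration, and the plain bootstrap proposal used here is deliberately simpler than the Ito-Taylor based ideal importance density developed in the thesis.

```python
import numpy as np

# Generic bootstrap-type sequential importance sampling / resampling particle
# filter identifying the stiffness of a cubic (Duffing-like) oscillator from a
# nonlinear, noisy measurement.  All numerical choices are illustrative.

rng = np.random.default_rng(3)
dt, T = 0.01, 2000
c, k_true, k3 = 0.3, 4.0, 1.0                     # damping, linear and cubic stiffness

def step(x, v, k, force, q=0.0):
    """One Euler integration step of the oscillator; q is additive process noise."""
    a = -c * v - k * x - k3 * x**3 + force
    return x + dt * v, v + dt * a + q

# Simulate "measured" data from the true system.
f = rng.normal(0.0, 5.0, T)                       # broadband excitation
x = v = 0.0
y = np.empty(T)
for t in range(T):
    x, v = step(x, v, k_true, f[t])
    y[t] = x + 0.1 * x**3 + rng.normal(0.0, 0.05) # nonlinear measurement + noise

# Particle filter with the stiffness k augmented as a slowly varying state.
Np = 500
px, pv = np.zeros(Np), np.zeros(Np)
pk = rng.uniform(1.0, 8.0, Np)                    # prior particles for the stiffness
for t in range(T):
    px, pv = step(px, pv, pk, f[t], rng.normal(0.0, 0.05, Np))
    pk = pk + rng.normal(0.0, 0.01, Np)           # artificial parameter dynamics
    pred = px + 0.1 * px**3                       # predicted measurement per particle
    logw = -0.5 * ((y[t] - pred) / 0.05) ** 2     # Gaussian measurement log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(Np, size=Np, p=w)            # multinomial resampling every step
    px, pv, pk = px[idx], pv[idx], pk[idx]

print("true stiffness:", k_true, "  posterior mean estimate:", round(pk.mean(), 3))
```

Replacing the bootstrap proposal with an importance density informed by the current measurement, as the Ito-Taylor based construction aims to do, typically reduces weight degeneracy and improves accuracy for a given particle count.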
APA, Harvard, Vancouver, ISO, and other styles
29

Slabbert, Yolandi. "A strategic sequential, integrated, sustainable organisation-stakeholder relationship (SISOSR) model for building stakeholder partnerships : a corporate communication perspective." Thesis, 2012. http://hdl.handle.net/10500/8836.

Full text
Abstract:
A dominant focus on organisational stakeholders is currently evident in both the literature and practice, since it is argued that the success of organisations is predominantly dependent on stakeholders' perception of the organisation. This stakeholder emphasis is evident in the inclusion of a chapter on governing stakeholder relations in the King III report and the development of various stakeholder standards in South Africa, including corporate social investment, corporate governance, corporate citizenship, corporate sustainability and the triple bottom line. Despite the recognition of the importance and necessity of building and maintaining stakeholder relations in the literature, there is a dearth of research on how to actually build these relationships. The aim of this study was to address this shortcoming by proposing a generic, integrated approach to sustainable organisation-stakeholder relationship (OSR) building with strategic stakeholders, whereby strategic stakeholder identification, OSR development and OSR maintenance, which are often studied independently, would be integrated in order to constitute a new unified model. This model will promote a sustainable OSR-building process for organisation-stakeholder partnership (OSP) development. The following three building blocks for such a model were proposed: a strategic communication foundation that promotes the integration of specific corporate communication functions, practised from a two-way symmetrical communication perspective, as the basis for effective OSR building; a theoretical foundation, which is an integration of Freeman's stakeholder concept (1984) from a normative, relational viewpoint, Ferguson's relational paradigm for public relations (1984) and Ledingham's (2003) theory of relationship management, encapsulated by Grunig's (1984) excellence theory, of which the proposed OSR-building model would be a pragmatic representation; and a conceptualisation of the OSR-building model in which the actual phases of the OSR-building process are proposed to provide step-by-step guidance for OSR building. The model promotes a partnership approach with strategic stakeholders based on the proposition of an OSR development continuum, which implies that an OSR could grow in intensity over time from a foundational OSR, through a mutually beneficial OSR and a sustainable OSR, to ultimate organisation-stakeholder partnerships (OSPs). The model was built from a corporate communication perspective, and subsequently highlights the contribution of corporate communication in the organisation as an OSR-building function that ensures organisational effectiveness. The study provided an exploratory literature review to constitute a conceptual framework for OSR building, the principles of which were further explored and measured in leading listed South African organisations by means of a quantitative web-based survey and qualitative one-on-one interviews, in order to compose an OSR-building model that provides guidance on the OSR-building process on the basis of insights from theory and practice.
In line with the argument that the success of organisations depends mainly on the perceptions that stakeholders hold of them, a dominant focus is currently placed on organisational stakeholders in the literature and in practice. The focus on stakeholders is visible in the inclusion of a chapter on building stakeholder relations in the King III report, as well as in the development of various stakeholder standards in South Africa, which include corporate social responsibility, corporate citizenship, corporate sustainability and the triple bottom line. Despite the fact that the importance and necessity of building and maintaining stakeholder relationships is recognised in the literature, there is a shortage of research on how to build these relationships. The study attempts to address this shortcoming by means of a generic, integrated approach to sustainable organisation-stakeholder relationships (OSR) with strategic stakeholders, in which strategic stakeholder identification, OSR development and OSR maintenance, aspects that are often studied separately, are integrated into a new, unified model. This model proposes a sustainable OSR-building process for the development of organisation-stakeholder partnerships. Three building blocks are proposed for the model, namely: a strategic communication foundation that includes the integration of specific corporate communication functions from a two-way symmetrical communication perspective as the basis for effective OSR building; a theoretical foundation that integrates Freeman's (1984) stakeholder concept from a normative, relational standpoint, Ferguson's (1984) relational paradigm for public relations and Ledingham's (2003) relationship management theory, enveloped by Grunig's (1984) excellence theory, of which the proposed OSR model will be a practical representation; and a conceptualisation of OSR building that stipulates the phases of the OSR process in order to propose step-by-step guidelines for building OSR. A partnership approach with strategic stakeholders is proposed by the model, based on the proposition of an OSR development continuum, which implies that an OSR can grow in intensity over time from a basic OSR, through a mutually beneficial OSR and a sustainable OSR, to an ultimate organisation-stakeholder partnership. The model was built from a corporate communication standpoint, which consequently emphasises the contribution of corporate communication in the organisation as an OSR-building function to ensure organisational effectiveness. The study offers an exploratory literature review to establish a conceptual framework for OSR building, the principles of which were further explored and measured in listed South African organisations by means of a quantitative web-based survey and one-on-one interviews, in order to develop an OSR-building model that offers guidelines for the OSR-building process based on both theoretical and practical insights.
Communication Science
D. Litt. et Phil. (Communication)
APA, Harvard, Vancouver, ISO, and other styles
