Dissertations / Theses on the topic 'Metoda QR'

Consult the top 30 dissertations / theses for your research on the topic 'Metoda QR.'


1

Dorini, Fábio Antonio. "Método QR." Florianópolis, SC, 2000. http://repositorio.ufsc.br/xmlui/handle/123456789/78519.

Full text
Abstract:
Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro de Ciências Físicas e Matemáticas.
Studies the generalized Toda flows characterized by the differential equation
APA, Harvard, Vancouver, ISO, and other styles
2

Smith, David McCulloch. "Regression using QR decomposition methods." Thesis, University of Kent, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303532.

Full text
3

McDonald, Edward James. "An analysis of QR methods for computing Lyapunov exponents." Thesis, University of Strathclyde, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248328.

Full text
4

Ren, Minzhen. "Cordic-based Givens QR decomposition for MIMO detectors." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50256.

Full text
Abstract:
The objective of this thesis research is to realize a complex-valued QR decomposition (QRD) algorithm on FPGAs for MIMO communication systems. The challenge is to implement a QRD processor that efficiently utilizes hardware resources to meet the throughput requirements of MIMO systems. By studying the basic QRD algorithm using Givens rotations and the CORDIC algorithm, the thesis develops a master-slave structure that implements CORDIC-based Givens rotations more efficiently than traditional methods. Based on the master-slave structure, a processing-element array architecture is proposed to further improve result precision and to achieve near-theoretical latency through parallelized normalization and rotations. The proposed architecture also demonstrates flexible scalability through implementations for different sizes of QRDs. The QRD implementations can process 7.41, 1.90, and 0.209 million matrices per second for 2×2, 4×4, and 8×8 QRDs respectively. This study has built the foundation for developing QRD processors that can fulfill the high throughput requirements of MIMO systems.
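As a point of reference for the technique named in this abstract, a QR decomposition via Givens rotations can be sketched in software. This is a minimal floating-point NumPy illustration of the general algorithm, not the thesis's fixed-point CORDIC hardware design:

```python
import numpy as np

def givens_qr(A):
    """QR decomposition that zeroes each subdiagonal entry with a Givens rotation."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):                      # walk the columns
        for i in range(m - 1, j, -1):       # zero column j from the bottom up
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])            # 2x2 rotation on rows i-1, i
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T  # accumulate the orthogonal factor

    return Q, R

A = np.random.default_rng(0).standard_normal((4, 4))
Q, R = givens_qr(A)
assert np.allclose(Q @ R, A)                 # exact factorization
assert np.allclose(Q.T @ Q, np.eye(4))       # Q is orthogonal
assert np.allclose(np.tril(R, -1), 0.0)      # R is upper triangular
```

Each rotation touches only two rows, which is what makes the method amenable to the systolic processing-element arrays used in hardware implementations.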
5

Hassanein, Mohamed Sameh. "Secure digital documents using Steganography and QR Code." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/10619.

Full text
Abstract:
With the increasing use of the Internet, several problems have arisen regarding the processing of electronic documents, including content filtering and content retrieval/search. Moreover, document security has taken centre stage, including copyright protection, broadcast monitoring, etc. There is an acute need for an effective tool which can establish the identity, location, and creation time of a document, so that it can be determined whether or not its contents were tampered with after creation. Owing to the sensitivity of the large amounts of data processed on a daily basis, verifying the authenticity and integrity of a document is more important now than it ever was. Unsurprisingly, document authenticity verification has become a centre of attention in the research community. Consequently, this research is concerned with creating a tool which addresses this problem. It proposes the use of a Quick Response (QR) code as a message carrier for Text Key-print. Text Key-print is a novel method which employs the basic elements of the language (i.e. the characters of the alphabet) to establish the authenticity of electronic documents through the transformation of their physical structure into a logically structured relationship. The resulting dimensional matrix is then converted into a binary stream and encapsulated with a serial number or URL inside a QR code to form a digital fingerprint mark. For hiding the QR code, two image steganography techniques were developed, based on the spatial and transform domains respectively. In the spatial domain, three methods were proposed and implemented based on least-significant-bit insertion and the use of a pseudorandom number generator to scatter the message across a set of arbitrary pixels.
These methods utilise the three colour channels of the RGB model to embed one, two, or three bits per eight-bit channel, resulting in three different hiding capacities. The second technique is an adaptive approach in the transform domain, where a threshold value is calculated at a predefined location to determine the embedding strength. The quality of the generated stego images was evaluated using both objective (PSNR) and subjective (DSCQS) methods to ensure the reliability of the proposed methods. The experimental results revealed that PSNR is not a strong indicator of perceived stego-image quality, though it is not a poor indicator of actual stego-image quality either. Since the visual difference between the cover and the stego image must be absolutely imperceptible to the human visual system, it was logical to ask human observers with different qualifications and experience in the field of image processing to evaluate the perceived quality of the cover and the stego image. The subjective responses were analysed using statistical measurements to describe the distribution of the scores given by the assessors. Thus, the proposed scheme presents an alternative approach to protecting digital documents beyond the traditional techniques of digital signatures and watermarking.
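The spatial-domain idea described in this abstract (least-significant-bit insertion at pseudorandomly scattered pixel positions, with PSNR as the objective quality measure) can be sketched on a grayscale image. The thesis embeds in the three RGB channels; this single-channel version and all function names here are illustrative only:

```python
import numpy as np

def embed_lsb(cover, bits, seed):
    """Hide a bit string in the LSBs of pseudorandomly chosen pixels."""
    stego = cover.ravel().copy()
    positions = np.random.default_rng(seed).choice(stego.size, size=len(bits), replace=False)
    stego[positions] = (stego[positions] & 0xFE) | np.array(bits, dtype=np.uint8)
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits, seed):
    """Recover the bits by regenerating the same pixel positions from the seed."""
    positions = np.random.default_rng(seed).choice(stego.size, size=n_bits, replace=False)
    return [int(b) for b in stego.ravel()[positions] & 1]

def psnr(cover, stego):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0**2 / mse)

cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message, seed=42)
assert extract_lsb(stego, len(message), seed=42) == message
assert psnr(cover, stego) > 50  # LSB changes stay far below visibility
```

The PRNG seed acts as the shared secret: without it, the scattered embedding positions cannot be regenerated.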
6

Owen, Anne M. "Widescale analysis of transcriptomics data using cloud computing methods." Thesis, University of Essex, 2016. http://repository.essex.ac.uk/16125/.

Full text
Abstract:
This study explores the handling and analysis of big data in the field of bioinformatics. The focus has been on improving the analysis of public-domain data for Affymetrix GeneChips, a widely used technology for measuring gene expression. Methods to determine the bias in gene expression due to G-stacks, associated with runs of guanine in probes, have been explored via the use of a grid and various types of cloud computing, in search of the best way of storing and analyzing big data in bioinformatics; the experience gained in using a grid and different clouds is reported. In the case of Windows Azure, a public cloud has been employed in a new way to demonstrate the use of the R statistical language for research in bioinformatics. This work has studied the G-stack bias in a broad range of GeneChip data from public repositories. A wide-scale survey has been carried out to determine the extent of the G-stack bias in four different chips across three different species. The study commenced with the human GeneChip HG U133A. A second human GeneChip, HG U133 Plus2, was then examined, followed by a plant chip, Arabidopsis thaliana, and a bacterium chip, Pseudomonas aeruginosa. Comparisons have also been made between the widely recognised algorithms RMA and PLIER for the normalization stage of extracting gene expression from GeneChip data.
7

Krämer, Julia. "Development of novel methods for periplasmic release of biotherapeutic products." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7603/.

Full text
Abstract:
The production of biotherapeutics, including antibodies and antibody fragments, is a rapidly expanding market with an increasing number of products being approved for use. One of the major platforms for producing such therapeutics is Escherichia coli, which offers rapid production at low cost. The favoured location for targeting these biotherapeutics is the periplasm of E. coli, as this environment supports the formation of disulphide bonds and simplifies the purification process. A number of periplasmic release procedures are currently practised in industry, including osmotic shock; however, their limitations call for the development of an improved generic periplasmic release method. This project demonstrates how the polymer poly(styrene-co-maleic acid) (SMA) can be applied as a novel, alternative periplasmic release agent. The amphipathic polymer self-assembles into discs encapsulating membrane proteins, thereby destabilising the outer membrane and consequently releasing the periplasm. Data presented here show that SMA releases the model target proteins at a higher yield, and at equal or higher purity, than the conventional methods. Furthermore, the developed method was analysed and refined to be compatible with existing downstream processing, and the first steps towards its adaptation at industrial scale were taken.
8

Jia, Tianye. "Strategies and statistical methods for linkage disequilibrium-based mapping of complex traits." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3292/.

Full text
Abstract:
Many statistical methods are now available for genetic association analyses with data from various designs. However, these analyses often ignore the requirement that an analytical method must be appropriate for the experimental design from which the data are collected. In addition, an association study is a population-based analysis, and its inference is therefore highly vulnerable to many population-level confounding factors. This thesis starts with a comprehensive survey and comparison of the methods commonly used in the genetic association literature, in order to obtain insights into their statistical properties and problems. On the basis of this review, we calculated the optimal trend set for the Armitage trend test under different penetrance models with a high level of genetic heterogeneity. We introduced two new strategies to adjust for population stratification in association analyses. We proposed a maximum likelihood estimation method to adjust for biases in the statistical inference of linkage disequilibrium (LD) between pairs of polymorphic loci when using non-random samples. In the process, we derived a more sophisticated but robust likelihood-based statistical framework that properly accounts for the non-random nature of case and control samples. Finally, we developed a multi-point likelihood-based statistical approach for a genome-wide search for genetic variants that contribute to phenotypic variation in complex quantitative traits. We tested these methods through intensive simulation studies and demonstrated their application to large case-control SNP datasets for Parkinson's disease. Although we have mainly focused on SNP data scored with microarray techniques, the theory and methodology presented here provide a useful stepping stone towards the modelling and analysis of data depicting genome structure and function from next-generation sequencing techniques.
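For reference, the standard Cochran-Armitage trend test that the optimal-trend-set work in this abstract builds on can be sketched as follows. This is the generic textbook form for a 2×3 genotype table, not the thesis's own derivation:

```python
import numpy as np

def armitage_trend_z(cases, controls, weights=(0, 1, 2)):
    """Cochran-Armitage trend test Z statistic for a 2x3 genotype table.

    cases/controls: genotype counts (e.g. aa, aA, AA); weights encode the
    assumed dose-response trend (0, 1, 2 for an additive genetic model).
    """
    r = np.asarray(cases, dtype=float)      # case counts per genotype
    s = np.asarray(controls, dtype=float)   # control counts per genotype
    w = np.asarray(weights, dtype=float)
    n = r + s                               # column totals
    R, S, N = r.sum(), s.sum(), n.sum()
    T = np.sum(w * (S * r - R * s))                                  # trend statistic
    var_T = (R * S / N) * (np.sum(w**2 * n) - np.sum(w * n)**2 / N)  # its variance
    return T / np.sqrt(var_T)

# No trend: case/control ratio identical in every genotype class
assert abs(armitage_trend_z([10, 20, 30], [10, 20, 30])) < 1e-12
# Strong positive trend: cases concentrated in high-weight genotypes
assert armitage_trend_z([10, 20, 30], [30, 20, 10]) > 3.0
```

Changing the `weights` argument is exactly the knob the thesis optimises: different penetrance models call for different trend sets.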
9

Tancjurová, Jana. "Metody indikace chaosu v nelineárních dynamických systémech." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-400451.

Full text
Abstract:
The master's thesis deals mainly with continuous nonlinear dynamical systems that exhibit chaotic behaviour. The main goal is to create algorithms for chaos detection and to test them on known models. Most of the thesis is devoted to the estimation of Lyapunov exponents; it also covers the estimation of the fractal dimension of an attractor and summarizes the 0-1 test. The thesis includes three algorithms created in MATLAB: an algorithm for estimating the largest Lyapunov exponent and two algorithms for estimating the entire Lyapunov spectrum. These algorithms are then tested on five continuous dynamical systems. In particular, the estimation error, the speed of the algorithms, and the properties of Lyapunov exponents in different regions of system behaviour are investigated.
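The standard QR-based estimation of a Lyapunov spectrum works by repeatedly pushing an orthonormal tangent basis through the system's Jacobian and re-orthonormalising it with a QR factorisation, accumulating the logarithms of |R_ii|. A minimal Python sketch of that general technique, illustrated on the discrete Hénon map rather than the continuous MATLAB-based systems of the thesis:

```python
import numpy as np

def lyapunov_spectrum(f, jac, x0, n_steps, dim):
    """Estimate Lyapunov exponents of a discrete map by QR reorthonormalisation."""
    x = np.asarray(x0, dtype=float)
    Q = np.eye(dim)                         # orthonormal tangent basis
    sums = np.zeros(dim)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(jac(x) @ Q)     # push basis through Jacobian, re-factor
        sums += np.log(np.abs(np.diag(R)))  # growth rates live on R's diagonal
        x = f(x)                            # advance the orbit
    return sums / n_steps

# Hénon map: a standard chaotic benchmark with one positive exponent
a, b = 1.4, 0.3
f = lambda x: np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])
jac = lambda x: np.array([[-2.0 * a * x[0], 1.0], [b, 0.0]])

exps = lyapunov_spectrum(f, jac, [0.1, 0.1], 20000, 2)
assert exps[0] > 0 > exps[1]               # chaos: positive largest exponent
assert abs(exps.sum() - np.log(b)) < 1e-6  # exponents sum to log|det J| = log b
```

For the Hénon map the Jacobian determinant is constant (-b), so the exponents must sum to log b, which gives a handy correctness check on the implementation.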
10

Lopez, Florent. "Task-based multifrontal QR solver for heterogeneous architectures." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30303/document.

Full text
Abstract:
To face the advent of multicore processors and the ever increasing complexity of hardware architectures, programming models based on DAG parallelism regained popularity in the high performance, scientific computing community. Modern runtime systems offer a programming interface that complies with this paradigm and powerful engines for scheduling the tasks into which the application is decomposed. These tools have already proved their effectiveness on a number of dense linear algebra applications. In this study we investigate the design of task-based sparse direct solvers which constitute extremely irregular workloads, with tasks of different granularities and characteristics with variable memory consumption on top of runtime systems. In the context of the qr mumps solver, we prove the usability and effectiveness of our approach with the implementation of a sparse matrix multifrontal factorization based on a Sequential Task Flow parallel programming model. Using this programming model, we developed features such as the integration of dense 2D Communication Avoiding algorithms in the multifrontal method allowing for better scalability compared to the original approach used in qr mumps. In addition we introduced a memory-aware algorithm to control the memory behaviour of our solver and show, in the context of multicore architectures, an important reduction of the memory footprint for the multifrontal QR factorization with a small impact on performance. Following this approach, we move to heterogeneous architectures where task granularity and scheduling strategies are critical to achieve performance. We present, for the multifrontal method, a hierarchical strategy for data partitioning and a scheduling algorithm capable of handling the heterogeneity of resources. Finally we present a study on the reproducibility of executions and the use of alternative programming models for the implementation of the multifrontal method. 
All the experimental results presented in this study are evaluated with a detailed performance analysis measuring the impact of several identified effects on performance and scalability. Thanks to this original analysis, presented in the first part of this study, we are able to fully understand the results obtained with our solver.
11

Swift, Benjamin M. C. "Development of rapid phage based detection methods for mycobacteria." Thesis, University of Nottingham, 2014. http://eprints.nottingham.ac.uk/14225/.

Full text
Abstract:
MAP is the causative agent of Johne's disease, a wasting disease of ruminants and other animals. Culture of the organism can take months and, for some sheep strains of MAP, up to a year. It can take several years for an animal infected with MAP to show clinical symptoms of disease. During this subclinical stage of infection, MAP can be shed into the environment, contaminating the surroundings and infecting other animals. Johne's disease is also particularly difficult to diagnose during the subclinical stage of infection. Culture is very difficult and takes too long to be a viable diagnostic method. Microscopic methods can be used on histological samples to detect MAP; however, the common acid-fast stains are not specific for MAP, and other mycobacteria and acid-fast organisms can also be detected. Molecular methods, such as PCR, exist to rapidly detect the signature DNA sequences of these organisms, but they have the disadvantage of not being able to distinguish between live and dead organisms. Other immunological methods, such as ELISA tests, exist and are routinely used to diagnose Johne's disease, but their sensitivity is very poor, especially during the subclinical stage of disease. The aim of these studies was to develop novel rapid methods of detecting MAP as an alternative to the methods already available. Sample processing using magnetic separation was carried out to allow good capture of MAP cells and efficient phage infection. Using the phage assay, a specific, sensitive phage-based method was developed that could detect approximately 10 cells per ml of blood within 24 h in the laboratory, with a sensitive, specific plaque-PCR. This optimised detection method was then used to determine whether MAP cells could be detected in clinical blood samples of cattle suffering from Johne's disease.
The results suggest that animals experimentally and naturally infected with MAP harboured cells in their blood during subclinical and clinical stages of infection. A novel high-throughput method of detecting mycobacteria was also developed. Using phage D29 as a novel mycobacterial DNA extraction tool, viable MAP cells were detected within 8 h and the format of the assay means that it can be adapted to be used in a high-throughput capacity. Factors affecting phage infection and phage-host interactions were investigated to make sure the phage based methods of detection were as efficient as possible. It was found that periods of recovery were often necessary to not only make sure the phage were not inhibited but to also allow the host cells to be metabolically active as it was found that phage D29 can only infect mycobacteria cells that are metabolically active. A fluorescent fusion-peptide capable of specifically labelling MAP cells was also developed to be used as an alternative to acid-fast staining. Peptides that were found to specifically bind to MAP cells were fused with green fluorescent protein and cells mounted on slides were specifically labelled with the fluorescent fusion protein. This resulted in a good alternative to the generic acid-fast staining methods. The blood phage assay has shown that viable MAP cells can be found in the blood of animals suffering from Johne’s disease within 24 h and this can be confirmed using a MAP specific plaque-PCR protocol. A novel faster method to detect MAP was also developed, to cut down the time to detection of viable MAP cells to 8 h, which can be formatted to be used in a high-throughput capacity. The phage assay was used as a tool to determine different metabolic states of mycobacteria, and helped investigate optimal detection conditions when using the phage assay. Finally a novel fluorescent label was developed to detect MAP as an alternative to insensitive acid-fast staining. 
The development of these novel methods to rapidly, specifically and sensitively detect MAP will push further the understanding of Johne’s disease and help control it.
12

Braddick, Darren. "Quantitative assay methods and mathematical modelling of peptidoglycan transglycosylation." Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/57211/.

Full text
Abstract:
The proportion of antibiotic-resistant Gram-positive strains in the clinic and community continues to rise, despite the number of new antibiotics continuing to fall. At the intersection of this problem lies the established challenge of working with what has ultimately been both nature's and humanity's favoured and most successful antibiotic target: the biosynthesis of the bacterial cell wall. The challenge lies in the predominately membrane/lipid-linked habitat within which the enzymes and substrates of this complex biosynthetic pathway function. Membrane protein science remains non-trivial and often difficult, and as such remains underdeveloped despite its hugely important role in the medical and biological sciences. As a result, there is a paucity of understanding of this pathway, with limited methods for assaying the activity of the biosynthetic enzymes. These enzymes include the monofunctional transglycosylases, monofunctional transpeptidase penicillin-binding proteins (PBPs), and bifunctional PBPs capable of both transglycosylation and transpeptidation. A number of these enzymes were expressed and purified, with the intention of obtaining novel kinetic and catalytic characterisation of their activities. The more complex of these enzymes could not be proven active, so the comparatively simpler enzyme, an S. aureus monofunctional transglycosylase called MGT, was taken as a model and used to help design novel assay methods for its transglycosylase activity. The assays developed in this work gave access to novel time-course data and help demonstrate other interesting mechanistic/catalytic information about the MGT enzyme and about transglycosylation in general. Mathematical modelling was performed around the experimental work: novel models were designed to define the mechanism of the MGT and of generic transglycosylation, as this had not been done before.
The mathematical concepts of structural identifiability and structural indistinguishability were used to analyse these models. Experimental data were then used for model fitting, and information about the underlying unknown kinetic parameters was collected. Together, this provides a new framework for understanding the MGT and transglycosylation, which may be a small step towards answering the challenge now posed by widespread antibiotic resistance.
13

Brena, Maria Camilla. "Effect of different poultry production methods on Campylobacter incidence and transmission in the broiler meat food chain." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/18837/.

Full text
Abstract:
Campylobacter is the main cause of human bacterial gastroenteritis worldwide, and within the EU reported cases are rising each year. Epidemiological studies have identified chicken meat as one of the major sources of human infection. However, it is poorly understood whether differences in chicken rearing and production methods affect the contamination levels of Campylobacter on chicken meat and therefore the risk of entry into the food chain. To investigate the role of the production system, flocks from diverse commercial broiler production systems differing in welfare standards, bird type, and stocking density were investigated during the whole rearing period and at slaughter. Caecal samples were collected to estimate flock prevalence. To assess the level of carcass contamination during processing, neck skin samples were collected at different production stages. Breast meat samples were also investigated to estimate the risk that chicken meat poses to human health. The objective was to link the flock Campylobacter status to the risk of contamination on the consumer's plate. All samples were cultured for the presence of Campylobacter species. A quantitative method based on ISO 10272-2:2006 was used to determine the level of flock colonisation and of Campylobacter contamination on broiler carcasses and final products. Results show that birds reared indoors under higher welfare standards, at decreased stocking density and with a slower-growing breed (Hubbard JA57), had a reduced prevalence of Campylobacter compared to the standard fast-growing breed (Ross 308) grown at the same stocking density. The production system with the higher Campylobacter prevalence and the higher Campylobacter count in the caecal contents also showed a greater Campylobacter prevalence and higher counts on carcasses. The bacterial numbers on the final product appeared to be strongly associated with the intestinal colonisation of the slaughter batch.
Consequently, it is crucial to prevent flock colonisation during the rearing period, to ensure that negative flocks enter the processing plant. The significance of this point was also highlighted by the fact that production stages such as final washing and chilling have little impact on reducing contamination of the final product. The high level of contaminated carcasses showed clearly that chicken meat is putting UK consumers' health at risk. An increased incidence of welfare issues, such as pododermatitis and hock lesions, was observed in the production system with the higher level of colonisation, which brings to light a link between Campylobacter colonisation and welfare issues. Furthermore, this study showed that stressful events such as thinning and transport were followed by an increase in Campylobacter prevalence, highlighting the importance of interactions between animal health and welfare and Campylobacter spp. colonisation. Multi-Locus Sequence Typing (MLST) was used to determine how diverse and distinct the genetic Campylobacter population structure was among the different commercial production systems investigated. Results showed that all production systems could be potential sources of Campylobacter infection in humans, with common clonal complexes found. Changes in the prevalence of genotypes associated with the final product, compared to those found in birds arriving from farms, were observed. This may reflect the enhanced ability of certain genotypes to resist environmental stressors that occur during processing, such as carcass washing, chilling, chlorine dioxide treatment, and oxygen exposure. In this data set, isolates belonging to the ST-257 complex showed a higher tendency to survive in the slaughterhouse environment.
Internal contamination of the breast muscle was also reported in our study, posing a further public health threat, as bacteria contained within the muscle are better able to survive cooking. These studies have demonstrated that this pathogen was highly prevalent among the broiler population investigated. Given the prevalence of this pathogen in food and its impact on human health, it is necessary for government bodies, food producers, and retailers to raise consumers' awareness of the Campylobacter issue. In particular, consumers must be made aware of how to manage the risk appropriately during food preparation.
14

Aboklaish, Ali F. "The development of methods to investigate the mechanisms underlying serum resistance of Ureaplasma species." Thesis, Cardiff University, 2014. http://orca.cf.ac.uk/70797/.

Full text
Abstract:
The human Ureaplasma species are among the smallest and simplest self-replicating bacteria known to date. These microbes cause infection in humans, particularly in the upper genital tract during pregnancy, leading to several adverse outcomes including preterm birth, chorioamnionitis, and respiratory diseases of neonates. Little is known about the pathogenesis of Ureaplasma and the mechanisms by which they avoid recognition and killing by the complement system. In this thesis, some mechanisms underlying the serum resistance of Ureaplasma spp. were investigated. This goal was achieved by creating serum-resistant models of serum-sensitive laboratory Ureaplasma strains and by developing and using proteomic and molecular biology methods to study the role of potential factors which mediate serum resistance and play a role in the pathogenesis of Ureaplasma. My original contribution to knowledge in this work was the development of a transposon mutagenesis method that can now be used to study the virulence genes of Ureaplasma; this method will also allow genetic manipulation of Ureaplasma in future studies. Monitoring and investigating the induced serum-resistant strains using immunoblot analysis and proteomics revealed significant changes in two candidate proteins coincident with serum resistance. The first was the elongation factor Tu protein, which was found to be immunogenic and had altered pI isoforms. The observed change in this protein was consistent across all serum-resistant strains, suggesting a possible role in the mechanism of serum resistance, possibly as a mediator for binding complement regulators, such as factor H and C4BP, at the cell surface of Ureaplasma. The second candidate was a novel 41 kDa protein that was uniquely expressed in all induced serum-resistant strains. Its expression in all resistant strains strongly indicates its involvement in the mechanism(s) of serum resistance of Ureaplasma.
The possible gene encoding this 41 kDa protein has putatively been identified as UUR10_0137 in the genome of U. urealyticum serovar 10 (strain ATCC 33699) using the transposon mutagenesis method developed in this study. Although the function of the UUR10_0137 gene product is not known (it is annotated as a hypothetical protein), this protein has now been identified and is proposed to have a role in serum resistance of Ureaplasma. The UUR10_0137 gene product could function as a complement regulator or inhibitor that prevents activation of the complement system, protecting Ureaplasma from complement attack. The contribution of the multiple-banded antigen (MBA) was shown to be unimportant to serum resistance: antigenic variation in this major surface antigen of Ureaplasma alone did not play any role in mediating serum resistance. Confirmation of a gene that mediates complement resistance would dramatically increase our understanding of Ureaplasma pathogenicity and provide a target for future human studies of preterm birth and Ureaplasma infection.
APA, Harvard, Vancouver, ISO, and other styles
15

Saibel, Anna, Ylva Blomkvist, and Gabriel Kitzler. "Applicering av blockkedjeteknik på värdekedjor : Metod för ökad transparens i livsföretags förädlingsprocesser." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-238453.

Full text
Abstract:
This report examines the application of blockchain technology to Swedish food value chains. The purpose of the study is to investigate the technology's potential for storing information in order to simplify tracking between different stages. More specifically, the report aims to clarify which positive effects the technology could bring to businesses, which limitations an application could entail, and which type of blockchain would be suitable for the purpose. The study was carried out with the help of several interviewees to collect validating data. Previous studies show that the new technology can enable more efficient data management, facilitate communication between different stages of value chains, and increase traceability and transparency. Furthermore, the technology can increase food safety and reduce fraud, as accountability increases with a higher degree of traceability. As the study shows, however, there are challenges with launching and implementing the technology, where, among other things, the requirement for consensus among all parties involved in a value chain creates increased complexity. Today there are no requirements for information sharing through a shared database between organizations, which means that a standard may need to be drawn up in the future. The most suitable type of blockchain for the purpose is a permissioned blockchain with rules for who is allowed to read, add, and validate information in the database. A prototype of an application based on the technology was created during the study to illustrate how information sharing could take place between different organizations.
This report examines the application of a blockchain to different types of Swedish food value chains. The purpose of the study is to evaluate the capacity of the technique for storing information to simplify tracking between the different stages in the supply chain. More specifically, the purpose is to identify the positive effects that the technology could bring businesses, what challenges an implementation could give rise to, and finally what type of blockchain would suit the purpose. The study has been conducted with the help of interviewees in order to collect crucial data. Previous studies show that the new technology could lead to more efficient data management, facilitate communication between different stages of the value chains, and increase traceability and transparency. Furthermore, the technology can increase food security and reduce fraud, since higher traceability increases accountability. The study shows that there are indeed challenges with the launch and implementation of the technology, including the need for consensus among all parties involved in a value chain, which leads to increased complexity. Today there are no official requirements for information sharing through a shared database between organizations; in the future there might be a need for standardization. The results show that the most suitable type of blockchain for the purpose is one with permission and access control, with regulations regarding who is allowed to read, add, and validate information in the database. A prototype of an application based on the technology has been created to illustrate how information sharing could be done between different organizations.
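The permissioned read/add/validate model the abstract recommends can be sketched as a toy hash-chained ledger (an illustrative sketch only: the class, roles, and batch data below are invented for this example, and a real system would add consensus and digital signatures):

```python
import hashlib
import json

class PermissionedLedger:
    """Toy permissioned blockchain: only whitelisted writers may append,
    while anyone may read the chain and verify its integrity."""

    def __init__(self, writers):
        self.writers = set(writers)
        self.chain = [{"index": 0, "prev": "0" * 64,
                       "data": "genesis", "writer": None}]

    def _hash(self, block):
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, writer, data):
        if writer not in self.writers:
            raise PermissionError(f"{writer} is not allowed to write")
        self.chain.append({"index": len(self.chain),
                           "prev": self._hash(self.chain[-1]),
                           "data": data, "writer": writer})

    def verify(self):
        # Every block must reference the hash of its predecessor
        return all(blk["prev"] == self._hash(self.chain[i])
                   for i, blk in enumerate(self.chain[1:]))

ledger = PermissionedLedger(writers={"farm", "dairy", "retailer"})
ledger.append("farm", "milk batch 17 collected")
ledger.append("dairy", "batch 17 pasteurised")
ok_before = ledger.verify()
ledger.chain[1]["data"] = "tampered"   # any reader can now detect the change
ok_after = ledger.verify()
```

Tampering with any stored record breaks the hash chain downstream, which is what provides the traceability and accountability the report describes.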
APA, Harvard, Vancouver, ISO, and other styles
16

Krishnan, Anand. "Laser and optical based methods for detecting and characterising microorganisms." Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/436/.

Full text
Abstract:
This work investigated novel optical methods of characterizing the activity of microorganisms. Two different systems are studied in detail. The possibility of using line-scan speckle systems and imaging systems to understand microbial behaviour, growth, and motility was investigated. Conventionally, the growth and viability of microorganisms are determined by swabbing, plating, and incubation, typically at 37 °C for at least 24 hours. The proposed system allows real-time quantification of morphology and population changes of the microorganisms. An important aspect of the line-scan system is the dynamic biospeckle. Dynamic speckle can be obtained from the movement of particles suspended in liquids. The speckle patterns show fluctuations in space and time which may be correlated with the activity of the constituents in the suspension. Initially, the speckle parameters were standardized on non-motile and inert specimens such as polystyrene microspheres and suspensions of Staphylococcus aureus. The same optical systems and parameters were later tested on motile, active, live organisms of Escherichia coli. The experimental results presented describe the time history of the dynamic speckle pattern. A number of algorithms were used to analyse the intensity data. A 2D-FFT algorithm was used to evaluate the space- and time-varying autocorrelation. Analysis of the speckle data in the Fourier domain provided insight into the motility of the organisms in broth. The mathematical analysis also gave further insight into the culture-broth evaporation and its particle sedimentation characteristics at 37 °C. These features correlated with the periodic motions associated with the organism and may therefore provide a signature for the organism and a means of monitoring. These results aided the development of the imaging bacterial detection systems discussed in the second half of the work.
The second experimental system focuses on quantifying the morphology and population dynamics of Euglena gracilis under ambient conditions through image processing. Unlike many other cell systems, Euglena cells change from round to long to round shape, and these different cell shapes were analyzed over time. In the morphological studies of single Euglena cells, image processing tools and filtering techniques were used; different parameters were identified and their efficiency at determining cell shape compared. The best parameter for processing the images was found, along with its effectiveness in detecting even the interior motions of constituents within a dead cell. The efficiency of the measurement parameters in following sequences of shape changes of the Euglena cell was compared with visual assessment tests from 12 volunteers and with other simple measurement methods, including parameters relating to the cell's eccentricity and image processing in the space and frequency domains. One of the major advantages of this system is that living cells can be examined in their natural state without being killed, fixed, and stained. As a result, the dynamics of ongoing biological processes in live cells can be observed and recorded in high contrast and sharp clarity. The population statistics of Euglena gracilis were gathered in liquid culture. A custom-built microscopy system was employed, and the laser beam was coupled with a dark-field illumination system to enhance the contrast of the images. Different image filters were employed for extracting useful information on the population statistics. As with the shape study, different parameters were identified and the best parameter was selected. The population study of the Euglena cells provided a detection system that indicated the activity of the population.
APA, Harvard, Vancouver, ISO, and other styles
17

Gerrard, Zara Elizabeth. "The impact of a new method for the detection of Mycobacterium avium subspecies paratuberculosis on the control of Johne's disease in dairy cattle." Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/50334/.

Full text
Abstract:
Johne’s disease (JD) is a severe wasting disease of ruminants, characterised by chronic enteritis, reduction in milk yield, and severe weight loss despite a maintained appetite. The causative agent is Mycobacterium avium subspecies paratuberculosis (MAP), a slow-growing pathogen that can take up to 18 weeks to detect on solid culture. Control programmes rely on sensitive diagnostics to identify infected animals quickly so they can be either removed from the herd or managed differently to control the spread of disease. Unfortunately, the Gold Standard of detection is culture, which, due to decontamination procedures, has low sensitivity. Enzyme-linked immunosorbent assays (ELISA) are used more often than faecal culture within control programmes as they are cheaper and quicker than culture methods. However, they detect only the animal's immune response, rather than the causative agent. This can cause issues with diagnosis, as the immune response can be affected by other variables. Therefore, to control disease effectively, a new detection method needs to be developed. In this series of studies, phage-PCR was used within large-scale on-farm sampling to establish its performance against the Gold Standard (liquid culture with ESP-trek) and a MAP-specific antibody milk ELISA (ab-ELISA). Phage-PCR is thought to be more sensitive than other methods due to its low limit of detection; it is also rapid and relatively inexpensive. Results suggest that phage-PCR can detect more animals shedding MAP into their milk than other methods, or at least a different group of animals than the other methods. There was some evidence that animals that had an ab-ELISA-positive result in the last year were shedding less MAP into their milk, suggesting that the immune response helps to control the disease in the short term. However, this was not observed beyond one year. Phage-PCR had a better agreement with faecal culture than with milk culture or ab-ELISA, but this agreement was limited.
There was also evidence that early detection could be achieved, as some animals were identified as faecal shedders with phage-PCR before they had seroconverted and been detected with ab-ELISA. However, it must be noted that these animals may not be infected and may simply be passaging MAP from the contaminated environment through the GI tract. An investigation into the prevalence of MAP in pasteurised milk using phage-PCR was also carried out. There is thought to be an association between MAP and Crohn's Disease, with milk highlighted by some as a key transmission vector. There was an increase in the proportion of samples containing viable MAP when compared with other surveys in the literature. However, this was thought to be due to the lower limit of detection that phage-PCR provides, rather than an increase in prevalence. Phage-PCR can be used effectively for large-scale on-farm sampling to identify animals shedding MAP into the milk. However, some changes to the assay and sample processing will have to be made before it can be used within industry, as its current format is laborious and not suited to automation. Until then, it could be used as a tool to further research and understanding of JD in dairy cattle.
APA, Harvard, Vancouver, ISO, and other styles
18

Kienmayer, Mattis. "Att avsudda bilder." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-74835.

Full text
Abstract:
This thesis addresses the problem of blurred images and what can be done to recover the sought sharp image. The problem is tackled with linear-algebraic tools, using both regularization and iterative methods. As a result, DFPM (dynamical functional particle method) proves comparable to both the conjugate gradient method and the LSQR (least squares QR) algorithm, although with parameters other than the theoretically optimal ones. In addition to previously known methods, an alternative minimization problem is also presented and tested.
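The comparison described above can be reproduced on a 1-D toy deblurring problem (a hedged sketch with assumed parameters; SciPy's `lsqr` and `cg` stand in for the LSQR and conjugate-gradient methods compared in the thesis, and DFPM itself is not implemented here):

```python
import numpy as np
from scipy.sparse.linalg import lsqr, cg

# 1-D toy deblurring: A is a Gaussian blur operator, b a noisy blurred signal
n = 100
x_true = np.zeros(n)
x_true[40:60] = 1.0
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

# Naive inversion amplifies the noise because A is severely ill-conditioned
x_naive = np.linalg.solve(A, b)

# Tikhonov regularization min ||Ax-b||^2 + mu*||x||^2, solved two ways
mu = 1e-3
x_lsqr = lsqr(A, b, damp=np.sqrt(mu))[0]          # damped least squares (LSQR)
x_cg, _ = cg(A.T @ A + mu * np.eye(n), A.T @ b)   # CG on the normal equations

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_lsqr - x_true)
```

Both regularized solvers recover a usable reconstruction, while the unregularized solve is dominated by amplified noise; this is the effect that motivates regularization in the thesis.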
APA, Harvard, Vancouver, ISO, and other styles
19

Donfack, Simplice. "Methods and algorithms for solving linear systems of equations on massively parallel computers." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112042.

Full text
Abstract:
Multicore processors are nowadays considered the future of computing and will have a significant impact on scientific computing. This thesis presents a new approach for solving large sparse and dense linear systems, suited to execution on future petascale machines and in particular those with a large number of cores. Given the growing cost of communication compared to the time processors take to perform arithmetic operations, our approach adopts the principle of communication minimization, at the price of some redundant computations, and uses several adaptations to achieve better performance on multicore machines. We decompose the problem to be solved into several phases, which are then implemented separately. In the first part, we present an algorithm based on hypergraph partitioning that considerably reduces the fill-in incurred during the LU factorization of sparse unsymmetric matrices. In the second part, we present two communication-avoiding algorithms for the LU and QR factorizations that are adapted to multicore environments. The main contribution of this part is to reorganize the operations of the factorization so as to reduce bus contention while making optimal use of resources. We then extend this work to clusters of multicore processors. In the third part, we present a new scheduling and optimization approach. Data locality and load balancing represent a serious trade-off in the choice of scheduling methods. On NUMA machines, for example, where data locality is not an option, we observed that in the presence of system perturbations ("OS noise"), performance could quickly degrade and become difficult to predict.
To address this, we present an approach combining static and dynamic scheduling to schedule the tasks of our algorithms. Our results, obtained on several architectures, show that all our algorithms are efficient and lead to significant performance gains. We can achieve improvements of 30 to 110% over the corresponding routines in well-known numerical libraries.
Multicore processors are considered to be the future of computing, and they will have an important impact on scientific computing. In this thesis, we study methods and algorithms for efficiently solving sparse and dense large linear systems on future petascale machines, in particular those having a significant number of cores. Due to the increasing communication cost compared to the time the processors take to perform arithmetic operations, our approach embraces the communication-avoiding algorithm principle by doing some redundant computations, and uses several adaptations to achieve better performance on multicore machines. We decompose the problem to solve into several phases that are then designed or optimized separately. In the first part, we present an algorithm based on hypergraph partitioning which considerably reduces the fill-in incurred in the LU factorization of sparse unsymmetric matrices. In the second part, we present two communication-avoiding algorithms that are adapted to multicore environments. The main contribution of this part is to reorganize the computations so as to reduce bus contention and use resources efficiently. Then, we extend this work to clusters of multicore processors. In the third part, we present a new scheduling and optimization approach. Data locality and load balancing are a serious trade-off in the choice of the scheduling strategy. On NUMA machines, for example, where data locality is not an option, we have observed that in the presence of noise, performance could quickly deteriorate and become difficult to predict. To overcome this bottleneck, we present an approach that combines static and dynamic scheduling to schedule the tasks of our algorithms. Our results, obtained on several architectures, show that all our algorithms are efficient and lead to significant performance gains. We can achieve from 30 up to 110% improvement over the corresponding routines in well-known libraries.
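The communication-avoiding QR idea mentioned in the abstract can be sketched as a one-level TSQR (tall-skinny QR): each "processor" factors its own row block locally, and only the small n x n R factors are combined, so little data moves between cores (an illustrative sketch, not the thesis's implementation):

```python
import numpy as np

def tsqr(A, nblocks=4):
    """One reduction level of communication-avoiding tall-skinny QR:
    factor each row block locally, then QR-combine the stacked small
    R factors; only n x n blocks would travel between processors."""
    n = A.shape[1]
    blocks = np.array_split(A, nblocks, axis=0)
    local_qrs = [np.linalg.qr(blk) for blk in blocks]
    R_stack = np.vstack([R for _, R in local_qrs])
    Q2, R = np.linalg.qr(R_stack)
    # Recover the global Q by applying the combining factor block-wise
    Q = np.vstack([Qb @ Q2[i * n:(i + 1) * n]
                   for i, (Qb, _) in enumerate(local_qrs)])
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))
Q, R = tsqr(A)
```

The result matches a monolithic QR up to signs: Q has orthonormal columns, R is upper triangular, and Q @ R reproduces A.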
APA, Harvard, Vancouver, ISO, and other styles
20

Aksoy, Ceren. "Identification Of Serotype Specific Dna Marker For Salmonella Typhimurium By Rapd-pcr Method." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605441/index.pdf.

Full text
Abstract:
This study was performed to identify a specific DNA marker for serotype Salmonella Typhimurium, one of the most prevalent serotypes causing food poisoning all over the world, using the RAPD-PCR (Random Amplified Polymorphic DNA) method. Primer 3 (RAPD 9.1), 5′-CGT GCA CGC-3′, was used in RAPD-PCR with 35 different Salmonella isolates. 12 of them were serotype Salmonella Typhimurium and 23 belonged to six other serotypes. Accordingly, two different amplification products, sized 300 bp and 700 bp, were obtained from the 12 Salmonella Typhimurium isolates. On the other hand, the other 23 Salmonella isolates of six different serotypes gave only the 300 bp amplification band, while the 700 bp band was not observed with Primer 3 (RAPD 9.1). After the discovery of the 700 bp fragment, which was specific for S. Typhimurium, it was decided to sequence it. The 700 bp band was ligated into the vector pUC 19 for sequencing. The cloning gave positive results with the formation of blue and white colonies, but plasmid isolation from the white colonies containing the ligated vector was not achieved. Therefore, sequencing of the 700 bp fragment together with the plasmid DNA could not be completed; however, it will be sent to the USA for sequencing. According to these results, the 700 bp amplification product was found to be a specific polymorphic region for Salmonella Typhimurium after RAPD application on genomic DNA, and this band can be used as a specific marker for detection and identification of Salmonella Typhimurium.
APA, Harvard, Vancouver, ISO, and other styles
21

Skoglund, Ingegerd. "Algorithms for a Partially Regularized Least Squares Problem." Licentiate thesis, Linköping : Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

HUANG, YEN-MEI, and 黃燕美. "Numerical Study on Basic QR Method." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/7x2pvm.

Full text
Abstract:
Master's thesis
National Kaohsiung Normal University
Department of Mathematics
106
In this study, we explore whether the loss of orthogonality affects the eigenvalues computed by the basic QR method. We first introduce vector and matrix norms and the Bauer-Fike theorem. Then we explain the power method and three methods of QR decomposition: the Gram-Schmidt process, Givens rotations, and Householder transformations. Finally, we use the Hilbert matrix to observe the orthogonality obtained by each of the above methods, and to check whether the computed eigenvalues are affected by the loss of orthogonality.
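The experiment outlined above can be sketched in NumPy (a hedged illustration: classical Gram-Schmidt is compared against NumPy's Householder-based `qr` on a Hilbert matrix, and an unshifted "basic" QR iteration recovers the eigenvalues):

```python
import numpy as np

def classical_gram_schmidt(A):
    """QR via classical Gram-Schmidt, which is prone to losing orthogonality."""
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def basic_qr_iteration(A, iters=200):
    """Unshifted QR iteration A_{k+1} = R_k Q_k; the diagonal of the
    (nearly triangular) limit holds the eigenvalues."""
    Ak = A.astype(float).copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)          # Householder-based, stable
        Ak = R @ Q
    return np.sort(np.diag(Ak))[::-1]

n = 6
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# Loss of orthogonality on the ill-conditioned Hilbert matrix
Qc, _ = classical_gram_schmidt(H)
Qh, _ = np.linalg.qr(H)                   # Householder transformations
err_cgs = np.linalg.norm(Qc.T @ Qc - np.eye(n))
err_house = np.linalg.norm(Qh.T @ Qh - np.eye(n))

# Eigenvalues from the basic QR iteration vs. a library reference
eigs_qr = basic_qr_iteration(H)
eigs_ref = np.sort(np.linalg.eigvalsh(H))[::-1]
```

On the Hilbert matrix, classical Gram-Schmidt loses orthogonality by orders of magnitude more than the Householder factorization, which is the phenomenon the thesis studies.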
APA, Harvard, Vancouver, ISO, and other styles
23

Lan, Wen-Shao, and 藍文劭. "Secret QR Code Method based on Triple-Module Group." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/45687908859490235980.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Information Communication
105
QR codes are very convenient to use and can accommodate a large amount of data, so they are widely used in daily life. However, when the QR data is private, anyone can easily read the QR information; that is, the privacy of the QR data is not protected. In this article, we propose a secret hiding method that modifies the modules of the QR code based on triple-module groups. The new method effectively reduces the number of altered modules within each triple-module group. Compared to related works, we can hide a larger amount of data while preserving the readability of the QR code. The secret protection mechanism is built from data hiding and secret sharing. The masked code still shows the visual appearance of a QR code and its readability is not affected. Anyone without the secret key can read only the original QR data, but an authorized user with the secret key can decode the secret data from the masked QR code. Therefore, the proposed method protects the privacy and security of the QR data.
APA, Harvard, Vancouver, ISO, and other styles
24

YO, SIN-YAN, and 游昕彥. "The Generation Method of Dynamic Color Icon QR Code." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/76787840548853449913.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
105
QR Code is a popular 2-D bar code. A traditional QR Code is composed of black and white blocks, so its content is unknown before scanning. If a color icon, such as a trademark, can be embedded, usability improves. Researchers have proposed methods to improve the appearance of QR Codes, making the QR Code itself provide a visual message. Following previous works, in order to avoid damaging the coding structure, the visual effects must accommodate the integrity of the coding structure to maintain accuracy. Nowadays, with the wide use of electronic displays, QR Codes appear not only on paper but also on electronic billboards. As a result, a color icon QR Code can be shown dynamically as an animation, raising the economic value of QR Codes. Generating dynamic color QR Codes that keep the integrity of the coding structure is the goal of this thesis. In this thesis, halftone technology and the concept of the center module are applied to obtain the black-and-white block QR Code. Some generated blocks cover the embedded icon in the QR code, and the icon is uncovered using the Gauss-Jordan elimination method. The characteristics of QR readers are also considered to reduce the center module. Finally, a series of color icon QR Codes is used to produce the dynamic color icon QR Code. Our method produces a number of color icon QR Codes and dynamic color icon QR Codes. Through a user scanning experiment, we validated that the QR Codes generated by our method can be identified by scanning software commonly used in the market.
APA, Harvard, Vancouver, ISO, and other styles
25

Su, Yi-Chieh, and 蘇翊杰. "A Method of Visually Significant Trademark in QR Code Using the Halftone Techniques." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/b8qe39.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
104
With the rise of the Internet and smart phones, more and more media use QR Codes to convey information such as advertising or websites. We can conveniently use the camera on a mobile phone to scan a QR Code and capture the information; QR Codes have become a convenient and quick way to get information in daily life. However, a QR Code is composed of irregular black and white blocks, so the image is meaningless and hard to interpret directly. In order to increase the applicability of QR Codes, the most common approach is to overlay an image onto the QR Code. Because of the limits of the QR Code coding standard, such an image must be small and is often unclear. In our paper, we first generate a basic QR Code and adjust the black and white modules with Gauss-Jordan elimination. Then we use halftone techniques to strengthen the visual effect. Our proposed method can show a significant trademark in the QR Code. Consumers can identify the trademark immediately while scanning the QR Code, which also benefits the company by achieving the purpose of advertising.
APA, Harvard, Vancouver, ISO, and other styles
26

Chien-HanLee and 李建翰. "QR Code Beautification Based on Gauss-Jordan Elimination and a Novel Rendering Method." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/35651835599082999524.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering
102
QR code beautification is generally formulated as an optimization problem that minimizes visual perception distortion subject to an acceptable decoding rate. However, since the arithmetic operations of the RS code are defined over a finite field F, standard convex optimization approaches cannot be directly applied. Consequently, solving the optimization problem becomes a challenge, and related works usually take considerable time to generate an approximate result. In this work, we propose a two-stage approach to generate QR codes with high-quality visual content. In the first stage, a baseline QR code with poor visual quality is synthesized using a Gauss-Jordan elimination procedure. In the second stage, a rendering mechanism is designed to improve the visual quality while avoiding any effect on the decodability of the QR code. The experimental results show that the proposed method substantially enhances the appearance of the QR code and that the processing runs in near real-time.
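The Gauss-Jordan step of the first stage can be illustrated over GF(2) (a toy sketch: the paper works with the RS code over a larger finite field, but the mechanics of choosing free message bits so that selected codeword positions take prescribed "visual" values are analogous; the [7,4] Hamming code here is just a stand-in):

```python
import numpy as np

def gauss_jordan_gf2(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination.
    Returns one solution (free variables set to 0) or None if inconsistent."""
    A = A.copy().astype(np.uint8) % 2
    b = b.copy().astype(np.uint8) % 2
    rows, cols = A.shape
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if A[i, c]), None)
        if pr is None:
            continue
        A[[r, pr]] = A[[pr, r]]          # swap pivot row into place
        b[[r, pr]] = b[[pr, r]]
        for i in range(rows):
            if i != r and A[i, c]:       # XOR-eliminate column c elsewhere
                A[i] ^= A[r]
                b[i] ^= b[r]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    if any(b[i] and not A[i].any() for i in range(r, rows)):
        return None
    x = np.zeros(cols, dtype=np.uint8)
    for i, c in enumerate(pivots):
        x[c] = b[i]
    return x

# Toy stand-in for the RS structure: a [7,4] Hamming generator matrix
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
P = [4, 5, 6]                              # codeword positions we want to control
t = np.array([1, 0, 1], dtype=np.uint8)    # the "visual" pattern to force there

m = gauss_jordan_gf2(G[:, P].T, t)         # message bits realizing the pattern
codeword = (m @ G) % 2
```

Solving the small linear system picks message bits whose codeword carries the desired pattern at the chosen positions, while the rest of the codeword remains a valid code word, which is the essence of structure-preserving beautification.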
APA, Harvard, Vancouver, ISO, and other styles
27

Tsai, Heng-Chieh, and 蔡恒杰. "A Parallel Method for the Double-Shift QR Iteration in the Quadratic Eigenvalue Problem." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/01942968721684466061.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Applied Mathematics
90
In this paper, we study the numerical solution of the quadratic eigenvalue problem (λ²A + λB + C)x = 0, where A, B, and C are real n×n matrices and A is nonsingular. We transform the quadratic eigenvalue problem into an enlarged standard eigenvalue problem whose matrix is of order 2n. Then we apply the Householder method and the double-shift QR iteration to get the eigenvalues of the enlarged matrix. In order to obtain the eigenvalues faster, we propose a parallel method for the double-shift QR iteration. Our parallel method for the double-shift QR iteration decreases the data-transferring time to one-third.
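The transformation described above is the standard companion linearization (sketched here as an illustration, not the thesis's parallel code): since A is nonsingular, (λ²A + λB + C)x = 0 is equivalent to Lz = λz with z = [x; λx] and L of order 2n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonal shift keeps A nonsingular
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# Companion linearization L z = lam z with z = [x; lam*x]
I = np.eye(n)
Ainv = np.linalg.inv(A)
L = np.block([[np.zeros((n, n)), I],
              [-Ainv @ C, -Ainv @ B]])

lams, Z = np.linalg.eig(L)                         # 2n eigenvalues of order-2n matrix

# Each eigenpair of L recovers an eigenpair of the quadratic problem
max_residual = 0.0
for lam, z in zip(lams, Z.T):
    x = z[:n]                                      # x is the top block of z
    r = (lam ** 2 * A + lam * B + C) @ x
    max_residual = max(max_residual, np.linalg.norm(r) / np.linalg.norm(x))
```

The second block row of L encodes -A⁻¹Cx - A⁻¹By = λy, which, with y = λx, is exactly the quadratic problem; the residual check confirms every recovered pair satisfies it.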
APA, Harvard, Vancouver, ISO, and other styles
28

Yeh, Hsiao-Yu, and 葉小語. "Large-Scale MIMO Detector Based on Newton’s Method with Lattice Reduction and QR Decomposition." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/7ch2q7.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
107
Massive multiple-input multiple-output (MIMO) was proposed to achieve higher wireless data transmission rates without increasing channel bandwidth or transmit power. It plays an important role in prospective fifth-generation (5G) wireless communication. Meanwhile, the complexity of the MIMO detector increases significantly with the number of antennas, so the design of a high-performance, low-complexity MIMO detector is a big challenge in hardware. Numerous low-complexity MIMO detection algorithms have been proposed to solve this problem. However, many algorithms, such as the intra-iterative interference cancellation (IIC) detector, provide high throughput and lower complexity but only approach the performance of the minimum-mean-square-error (MMSE) detector. The triangular approximate semidefinite relaxation based detector (TASER) can approximate maximum-likelihood (ML) detection performance, but it supports only BPSK and QPSK modulation and its complexity is still very high. This study therefore proposes a lattice-reduction-aided (LRA) symbol-wise (SW) IIC detector which can support M-QAM modulation. While achieving near-MMSE performance, the proposed algorithm has the advantages of a higher convergence rate and lower computational complexity in the detector. The iteration number of the proposed algorithm is less than that of the IIC detector. In a 64-QAM 128 × 8 antenna MIMO system, the proposed detector reduces the computational complexity of the IIC detector by about 95.35%.
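The role of QR decomposition in MIMO detection can be illustrated with a minimal successive-interference-cancellation sketch (this is not the proposed LRA-SW-IIC detector or its Newton iteration; it is a simpler, classical QR-based detector, with dimensions matching the 128 × 8 setting):

```python
import numpy as np

def qr_sic_detect(H, y, constellation):
    """QR-based successive interference cancellation: with H = QR,
    back-substitute through R, slicing each layer to the nearest
    constellation point before cancelling its contribution."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y                   # rotate: z = R x + rotated noise
    nt = H.shape[1]
    x_hat = np.zeros(nt, dtype=complex)
    for i in range(nt - 1, -1, -1):
        s = (z[i] - R[i, i + 1:] @ x_hat[i + 1:]) / R[i, i]
        x_hat[i] = constellation[np.argmin(np.abs(constellation - s))]
    return x_hat

rng = np.random.default_rng(7)
nr, nt = 128, 8                          # tall massive-MIMO system
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((nr, nt)) +
     1j * rng.standard_normal((nr, nt))) / np.sqrt(2 * nr)
x = qpsk[rng.integers(0, 4, nt)]
noise = 0.01 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + noise
x_hat = qr_sic_detect(H, y, qpsk)
```

With many more receive than transmit antennas, the triangularized system is well conditioned and the sketch recovers the transmitted symbols exactly at this noise level, which illustrates why tall (e.g. 128 × 8) configurations ease detection.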
APA, Harvard, Vancouver, ISO, and other styles
29

"Observability Methods in Sensor Scheduling." Doctoral diss., 2015. http://hdl.handle.net/2286/R.I.34856.

Full text
Abstract:
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. To employ observability for sensor scheduling, however, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
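The proposed scheme can be sketched as follows (a hedged toy illustration, not the dissertation's algorithm: sensor "signatures" over a horizon are stacked into columns, and a column-pivoted, i.e. rank-revealing, QR picks the k most independent ones):

```python
import numpy as np
from scipy.linalg import qr

def obs_matrix(A, C, horizon):
    """Finite-horizon observability matrix: C, CA, ..., CA^(h-1) stacked."""
    return np.vstack([C @ np.linalg.matrix_power(A, t) for t in range(horizon)])

def choose_sensors(A, C_cand, k, horizon=None):
    """Column subset selection via rank-revealing (column-pivoted) QR:
    each candidate sensor contributes one column (its measurement rows
    over the horizon, stacked); the first k pivots are the chosen sensors."""
    h = horizon or A.shape[0]
    O = obs_matrix(A, C_cand, h)                   # sensor-interleaved rows
    m = C_cand.shape[0]
    S = np.column_stack([O[j::m].ravel() for j in range(m)])
    _, _, piv = qr(S, pivoting=True)               # rank-revealing pivot order
    return sorted(piv[:k])

# Toy example: 3-state integrator chain with 4 candidate sensors,
# where sensor 3 duplicates sensor 0
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
C_cand = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 0.0, 0.0]])
chosen = choose_sensors(A, C_cand, k=2)
O_sub = obs_matrix(A, C_cand[chosen], horizon=3)   # chosen pair's observability
```

Because pivoted QR never selects a column that is (numerically) dependent on already-chosen ones, the redundant duplicate sensor is skipped and the chosen pair yields a full-rank observability matrix.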
Dissertation/Thesis
Doctoral Dissertation Applied Mathematics 2015
APA, Harvard, Vancouver, ISO, and other styles
30

"Vom Innovationsimpuls zum Markteintritt. Theorie, Praxis, Methoden." facultas WUV, 2014. http://epub.wu.ac.at/4414/1/Finale_Version_Vom_Innovationsimpuls_zum_Markteintritt_vom_verlag.pdf.

Full text
Abstract:
The border regions around the centers of Bratislava and Vienna are among the fastest-growing regions in Europe, particularly with regard to the high-tech industry (www.contor-analyse.de). One success factor for commercially successful high-tech (start-up) companies is the early identification of user requirements and selling points for innovations. Interdisciplinary teams that combine technically and commercially trained staff form the basis for entrepreneurial innovation success stories. In August 2011, a team of researchers from the Vienna University of Technology, the Vienna University of Economics and Business, the University of Economics in Bratislava, and the incubator INITS set out to support high-tech companies in their market entry and to adapt university education for those interested in innovation in the B2B high-tech sector. The project Grenzüberschreitendes HiTECH Center was launched (project duration 08/2011 to 12/2013, funding programme ETC, creating the future: programme for cross-border cooperation SLOVAKIA - AUSTRIA 2007-2013, www.hitechcentrum.eu). The objective was to develop a methodology for successful market entry in B2B high-tech markets. The project was designed with seven work packages; work package six concerns a publication of the most important learning outcomes. The present work constitutes this result and was made possible only by an extension of the project until November 2014. The preparatory work for the project and the first analysis phase within the project period reveal a gap in research results on the topic of "marketing testbeds" and a lack of comparable interdisciplinary courses at Austrian universities. Existing marketing and innovation courses predominantly deal with B2C topics and are not interdisciplinary.
Despite the geographical proximity of Austria and Slovakia, the insufficient transparency of the markets - and of the opportunities associated with them - is currently a barrier to faster development of this cross-border region. Furthermore, beyond the borders there is a shortage of interdisciplinarily trained personnel who can efficiently handle the marketing tasks of high-tech providers. The project team therefore faced, among others, the following questions: With what methodology can high-tech start-up companies be supported in early innovation phases in order to achieve a successful market entry? How strongly does the topic of "multidisciplinary communication" influence the process from innovation impulse to market entry? How can the requirements of innovating high-tech firms be integrated into university teaching? How can interdisciplinary teaching formats be implemented, including across borders? Within the project period, the project team was able to develop a first set of rules for marketing testbeds and has already applied this knowledge in scientific works and first implementations. In total, eight student theses were completed at the Institute for Marketing Management in Vienna (two of them doctoral dissertations). At WU Bratislava, 17 student theses were completed and six interdisciplinary projects were implemented. There was an intensive exchange of knowledge with three synergy projects (INNOVMAT, DUO STARS, SMARTNET), and the interim results of the HiTECH Centrum project formed the basis for a further European project (project REALITY, programme ERASMUS MUNDUS). The main result of the project lies in confirming the importance of multidisciplinary communication in all areas from innovation impulse to market entry.
A lasting impact of the project results will be ensured by the founding of a HiTECH Center association, which will pursue the research topics that have been initiated and support high-tech start-ups in their early market-entry phases. (authors' abstract)
APA, Harvard, Vancouver, ISO, and other styles
