
Dissertations / Theses on the topic 'Management of multiple sources'



Consult the top 50 dissertations / theses for your research on the topic 'Management of multiple sources.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Roberson, Daniel Richard. "Application of multiple information sources to prediction of engine time on-wing." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99041.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2015; and Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015; both in conjunction with the Leaders for Global Operations Program at MIT. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 85-88). The maintenance and operation of commercial turbofan engines relies upon an understanding of the factors which contribute to engine degradation from the operational mission, environment and maintenance procedures. A multiple information source system is developed using the Pratt & Whitney engine to combine predictive engineering simulations with socio-technical effects and environmental factors for an improved predictive system for engine time on-wing. The system establishes an airport severity factor for all operating airports based upon mission and environmental parameters. The final system involves three hierarchical layers: a 1-D engineering simulation, a parametric survival study, and a logistic regression study. The layers are combined so that the output of each becomes the input of the next model. The combined system demonstrates an improvement over current practices at a fleet level, from an R² of 0.526 to 0.7966, and provides an indication of the relationship between suspended particulate matter and engine degradation. The potential effects on the airline industry from city-based severity in maintenance contracts are explored. Application of multiple information sources requires both knowledge of the system and access to the data. The organizational structure of a data analytics organization is described, and an architecture for integration of this team within an existing corporate environment is proposed.
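The abstract describes a layered pipeline in which each model's output becomes the next model's input. The Python sketch below illustrates only that layering idea; the severity formula, the exponential survival fit and the "early removal" label are placeholders invented for the example, not Roberson's actual models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Layer 1: a toy "engineering" severity score per airport from environmental
# inputs (dust level, mean takeoff temperature); the weights are arbitrary.
airports = {"AAA": (0.8, 38.0), "BBB": (0.2, 15.0), "CCC": (0.5, 25.0)}
def severity(dust, temp_c):
    return 1.0 + 2.0 * dust + 0.02 * max(temp_c - 15.0, 0.0)

# Synthetic engine histories: operating airport and observed time on-wing (hours).
n = 300
codes = rng.choice(list(airports), size=n)
sev = np.array([severity(*airports[c]) for c in codes])
time_on_wing = rng.exponential(20000.0 / sev)          # harsher airports -> shorter life

# Layer 2: parametric (exponential) survival fit with rate proportional to severity;
# the single scale parameter is the maximum-likelihood estimate (no censoring here).
scale_hat = np.mean(time_on_wing * sev)
predicted_life = scale_hat / sev                       # layer-2 output

# Layer 3: logistic regression for "early removal", taking the layer-2 prediction
# as an input feature, mimicking the output-of-one-layer-feeds-the-next structure.
early_removal = (time_on_wing < 0.5 * predicted_life).astype(int)
X = np.column_stack([sev, predicted_life])
clf = LogisticRegression().fit(X, early_removal)
print("P(early removal) at severity 1.5:",
      clf.predict_proba([[1.5, scale_hat / 1.5]])[0, 1].round(3))
```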
APA, Harvard, Vancouver, ISO, and other styles
2

Kok, Theodorus Antonius Hendrik. "Development of a strategy for the management and control of multiple energy sources within series hybrid electric vehicles." Thesis, University of Sunderland, 2015. http://sure.sunderland.ac.uk/6580/.

Full text
Abstract:
The battery in an EV is designed according to a power-to-energy ratio and represents a trade-off in the design of the pack. It also suffers from effects such as the rate capacity effect, ripple effects and inefficiency under charging. These effects result in losses through which the capacity and life span of the batteries are compromised, affecting range and drivability. In this thesis a novel development path resulting in a novel Power and Energy Management Strategy (PEMS) is presented. The effects of (dis)charging a battery are researched and converted to an energy optimisation formula, resulting in a reduced power demand for the converter, which reduces weight. The resulting Power Management Strategy (PMS) aims to recover energy more efficiently into the UC while responding quickly to a change in demand. The effects of converters on the battery current ripple are researched and discussed, resulting in an optimal topology layout, improved battery life and reduced losses. Through the use of Markov Chain analysis and a newly derived Bias function, a predictive Energy Management Strategy (EMS) is developed which is practical to use in EVs. This results in a PEMS which, because of the fast PMS, has a fast response time. The use of Markov Chains yields a predictive EMS, improves the efficiency of the energy sources and allows the design to be reduced in size. Through the design methodology used, the parallel topology (the battery converter in parallel with the UC Module) was rated the preferred choice over battery only and battery with UC Module. The rating was based on capacity, ripple control, weight, 10-year cost, potential for motor controller efficiency improvement, range and efficiency. The combination of the method and the PEMS resulted in an improved life expectancy of the pack to over 10 years (up from 7) while increasing range and without sacrificing drivability.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Ping. "Learning from Multiple Knowledge Sources." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214795.

Full text
Abstract:
Computer and Information Science; Ph.D. In supervised learning, it is usually assumed that true labels are readily available from a single annotator or source. However, recent advances in corroborative technology have given rise to situations where the true label of the target is unknown. In such problems, multiple sources or annotators are often available that provide noisy labels of the targets. In these multi-annotator problems, building a classifier in the traditional single-annotator manner, without regard for the annotator properties, may not be effective in general. In recent years, how to make the best use of the labeling information provided by multiple annotators to approximate the hidden true concept has drawn the attention of researchers in machine learning and data mining. In our previous work, a probabilistic method (the MAP-ML algorithm) was developed for iteratively evaluating the different annotators and estimating the hidden true labels. However, the method assumes the error rate of each annotator is consistent across all the input data. This is an impractical assumption in many cases, since annotator knowledge can fluctuate considerably depending on the groups of input instances. In this dissertation, one of our proposed methods, the GMM-MAPML algorithm, follows MAP-ML but relaxes the data-independent assumption, i.e., we assume an annotator may not be consistently accurate across the entire feature space. GMM-MAPML uses a Gaussian mixture model (GMM) and the Bayesian information criterion (BIC) to find the fittest model to approximate the distribution of the instances. Then the maximum a posteriori (MAP) estimation of the hidden true labels and the maximum-likelihood (ML) estimation of the quality of multiple annotators at each Gaussian component are provided alternately. Recent studies show that employing more annotators regardless of their expertise does not necessarily improve the aggregated performance. In this dissertation, we also propose a novel algorithm to integrate multiple annotators by Aggregating Experts and Filtering Novices, which we call AEFN. AEFN iteratively evaluates annotators, filters out the low-quality annotators, and re-estimates the labels based only on information obtained from the good annotators. The noisy annotations we integrate can come from any combination of humans and previously existing machine-based classifiers, so AEFN can be applied to many real-world problems. Emotional speech classification, CASP9 protein disorder prediction, and biomedical text annotation experiments show a significant performance improvement of the proposed methods (GMM-MAPML and AEFN) as compared to the majority voting baseline and the previous data-independent MAP-ML method. Recent experiments include predicting novel drug indications (i.e., drug repositioning) for both approved drugs and new molecules by integrating multiple chemical, biological or phenotypic data sources. (Temple University--Theses)
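As a rough illustration of the GMM-MAPML idea summarised above (group instances with a BIC-selected Gaussian mixture, then alternate label estimation and per-component annotator-quality estimation), here is a simplified sketch on synthetic data. It is not the author's implementation: the alternating MAP/ML updates are reduced to accuracy-weighted voting, and all numbers are made up.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic data: two clusters, binary true labels, three annotators whose
# accuracy differs by cluster (the data-dependent behaviour GMM-MAPML targets).
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y_true = np.r_[np.zeros(100, int), np.ones(100, int)]
comp_true = np.r_[np.zeros(100, int), np.ones(100, int)]
acc = np.array([[0.9, 0.6], [0.6, 0.9], [0.8, 0.8]])     # annotator x cluster accuracy
labels = np.array([[yi if rng.random() < acc[a, c] else 1 - yi
                    for a in range(3)]
                   for yi, c in zip(y_true, comp_true)])

# Step 1: choose the number of mixture components by BIC.
gmms = [GaussianMixture(k, random_state=0).fit(X) for k in (1, 2, 3)]
gmm = min(gmms, key=lambda m: m.bic(X))
comp = gmm.predict(X)

# Step 2: alternate (a) per-component annotator accuracy estimation and
# (b) label re-estimation by accuracy-weighted voting within each component.
y_hat = (labels.mean(axis=1) > 0.5).astype(int)           # start from majority vote
for _ in range(10):
    acc_hat = np.array([[np.mean(labels[comp == c, a] == y_hat[comp == c])
                         if np.any(comp == c) else 0.5
                         for c in range(gmm.n_components)]
                        for a in range(3)])
    w = np.log(np.clip(acc_hat, 1e-3, 1 - 1e-3) /
               np.clip(1 - acc_hat, 1e-3, 1 - 1e-3))       # log-odds weight per annotator/component
    score = np.array([sum(w[a, comp[i]] * (2 * labels[i, a] - 1) for a in range(3))
                      for i in range(len(X))])
    y_hat = (score > 0).astype(int)

print("agreement with hidden truth:", (y_hat == y_true).mean())
```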
APA, Harvard, Vancouver, ISO, and other styles
4

Stevenson, Robert Mark. "Multiple knowledge sources for word sense disambiguation." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dench, M. "Structural vibration control using multiple synchronous sources." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/349006/.

Full text
Abstract:
The advantages of isolating vibrating machinery from its supporting structure are that the chances of vibration induced fatigue failure of structural components are reduced, the structure becomes more inhabitable for people due to less vibration exposure and the sound radiated by the structure into the environment is reduced. This last point is especially important for machinery operating in a marine environment because low frequency sound propagates very well underwater, and the machinery induced sound radiated from a ship or submarine is a primary detection and classification mechanism for passive sonar systems. This thesis investigates the control of vibration from an elastic support structure upon which multiple vibrating systems are passively mounted. The excitations are assumed to occur at discrete frequencies with a finite number of harmonic components and the machines are all assumed to be supplied with power from the same electrical supply. Active vibration control may be achieved by adjusting the phase of the voltage supplied to one or more of the machines, so that a minimum value of a measurable cost function is obtained. Adjusting the phase of a machine with respect to a reference machine is known as synchrophasing and is a well established technique for controlling the sound in aircraft cabins and in ducts containing axial fans. However, the use of the technique for reducing the vibration of machinery mounted on elastic structures seems to have received very little attention in the literature and would appear to be a gap in the current knowledge. This thesis aims to address that gap by investigating theoretically and experimentally how synchrophasing can be implemented as an active structural vibration control technique.
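The synchrophasing idea in this abstract, shifting the supply phase of one machine relative to a reference so that a measured cost function is minimised, can be illustrated with a toy two-source phasor model; the transfer gains and the RMS cost below are invented for the example and are not taken from the thesis.

```python
import numpy as np

# Two machines excite the structure at the same supply frequency; the response
# at a sensor is the phasor sum of their contributions through toy transfer gains.
H1 = 1.0 * np.exp(1j * 0.3)
H2 = 0.8 * np.exp(1j * 1.1)

def sensor_rms(phase_offset):
    """RMS vibration at the sensor when machine 2 is shifted by phase_offset (rad)."""
    resp = H1 + H2 * np.exp(1j * phase_offset)
    return np.abs(resp) / np.sqrt(2)

# Synchrophasing control: sweep the relative phase and keep the minimiser.
phases = np.linspace(0, 2 * np.pi, 361)
costs = np.array([sensor_rms(p) for p in phases])
best = phases[np.argmin(costs)]
print(f"best relative phase: {np.degrees(best):.1f} deg, "
      f"RMS reduced from {sensor_rms(0):.3f} to {costs.min():.3f}")
```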
APA, Harvard, Vancouver, ISO, and other styles
6

MEDEIROS, Ícaro Rafael da Silva. "Tag suggestion using multiple sources of knowledge." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2275.

Full text
Abstract:
In social tagging systems, users assign tags (keywords) to resources (web pages, photos, publications, etc.), creating a structure known as a folksonomy, which improves navigation, organisation and information retrieval. These systems are currently very popular on the Web, so improving their quality and automating the tag assignment process is an important task. In this work we propose a system that automatically assigns tags to pages, drawing on multiple sources of knowledge such as textual content, hyperlink structure and knowledge bases. From these sources, several features are extracted to build a classifier that decides which terms should be suggested as tags. Experiments using a dataset of tags and pages extracted from Delicious, a major social tagging system, show that our methods achieve good precision and recall when compared with user-suggested tags. In addition, a comparison with related work shows that our system's suggestion quality is comparable to state-of-the-art approaches in the area. Finally, a user evaluation was carried out to simulate a real environment, which also produced good results.
APA, Harvard, Vancouver, ISO, and other styles
7

Rice, Michael, and Erik Perrins. "Maximum Likelihood Detection from Multiple Bit Sources." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596443.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV. This paper deals with the problem of producing the best bit stream from a number of input bit streams with varying degrees of reliability. The best source selector and smart source selector are recast as detectors, and the maximum likelihood bit detector (MLBD) is derived from basic principles under the assumption that each bit value is accompanied by a quality measure proportional to its probability of error. We show that both the majority voter and the best source selector are special cases of the MLBD and define the conditions under which these special cases occur. We give a mathematical proof that the MLBD is the same as or better than the best source selector.
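A minimal sketch of that kind of detector is shown below: each stream's hard decision is weighted by the log-likelihood ratio implied by its error probability, which reduces to majority voting when all streams have equal quality and to best-source selection when one stream is far more reliable. It illustrates the principle only and is not the paper's algorithm verbatim.

```python
import numpy as np

def ml_bit_detect(bits, p_err):
    """Combine bit decisions from several streams into one ML decision per bit.

    bits  : (n_streams, n_bits) array of hard decisions in {0, 1}
    p_err : (n_streams,) per-stream bit error probabilities (the quality measure)
    """
    bits = np.asarray(bits)
    p = np.clip(np.asarray(p_err, dtype=float), 1e-9, 1 - 1e-9)
    w = np.log((1 - p) / p)[:, None]             # log-likelihood weight per stream
    score = np.sum(w * (2 * bits - 1), axis=0)   # >0 favours bit 1, <0 favours bit 0
    return (score > 0).astype(int)

# Example: three streams, the third one is very unreliable.
streams = np.array([[1, 0, 1, 1, 0],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 1]])
print(ml_bit_detect(streams, p_err=[0.01, 0.05, 0.4]))   # dominated by the reliable streams
```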
APA, Harvard, Vancouver, ISO, and other styles
8

Kabzinska, Ewa Joanna. "Empirical likelihood approach for estimation from multiple sources." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/422166/.

Full text
Abstract:
Empirical likelihood is a non-parametric, likelihood-based inference approach. In the design-based empirical likelihood approach introduced by Berger and De La Riva Torres (2016), the parameter of interest is expressed as a solution to an estimating equation. The maximum empirical likelihood point estimator is obtained by maximising the empirical likelihood function under a system of constraints. A single vector of weights, which can be used to estimate various parameters, is created. Design-based empirical likelihood confidence intervals are based on the χ² approximation of the empirical likelihood ratio function. The confidence intervals are range-preserving and asymmetric, with the shape driven by the distribution of the data. In this thesis we focus on the extension and application of design-based empirical likelihood methods to various problems occurring in survey inference. First, a design-based empirical likelihood methodology for parameter estimation in a two-survey context, in the presence of alignment and benchmark constraints, is developed. Second, a design-based empirical likelihood multiplicity-adjusted estimator for multiple frame surveys is proposed. Third, design-based empirical likelihood is applied to a practical problem of census coverage estimation. The main contribution of this thesis is defining the empirical likelihood methodology for the studied problems and showing that the aligned and multiplicity-adjusted empirical likelihood estimators are √n-design-consistent. We also discuss how the original proofs presented by Berger and De La Riva Torres (2016) can be adjusted to show that the empirical likelihood ratio statistic is pivotal and follows a χ² distribution under alignment constraints and when the multiplicity adjustments are used. We evaluate the asymptotic performance of the empirical likelihood estimators in a series of simulations on real and artificial data. We also discuss the computational aspects of the calculations necessary to obtain empirical likelihood point estimates and confidence intervals and propose a practical way to obtain empirical likelihood confidence intervals in situations when they might be difficult to obtain using standard approaches.
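To make the empirical likelihood machinery concrete, here is a small sketch of the classic one-sample profile empirical likelihood for a mean: maximise the product of the npᵢ subject to Σpᵢ = 1 and Σpᵢ(xᵢ − θ) = 0, solving for the Lagrange multiplier numerically, and invert the χ² calibration for a confidence interval. This is the textbook i.i.d. setting, not the design-based, survey-weighted estimators developed in the thesis.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, theta):
    """-2 log empirical likelihood ratio for the mean at the value theta."""
    x = np.asarray(x, dtype=float)
    z = x - theta
    if z.max() <= 0 or z.min() >= 0:
        return np.inf                       # theta at or outside the convex hull of the data
    # Solve sum_i z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier lam.
    lo = (-1.0 / z.max()) + 1e-10
    hi = (-1.0 / z.min()) - 1e-10
    g = lambda lam: np.sum(z / (1.0 + lam * z))
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))   # asymptotically chi-square with 1 df

rng = np.random.default_rng(0)
x = rng.exponential(2.0, size=80)
# 95% confidence interval: the set of theta with -2 log R(theta) <= 3.84
grid = np.linspace(x.min(), x.max(), 2000)
inside = [t for t in grid if el_log_ratio(x, t) <= 3.84]
print("EL 95%% CI for the mean: (%.3f, %.3f)" % (min(inside), max(inside)))
```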
APA, Harvard, Vancouver, ISO, and other styles
9

Brizzi, Francesco. "Estimating HIV incidence from multiple sources of data." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273803.

Full text
Abstract:
This thesis develops novel statistical methodology for estimating the incidence and the prevalence of Human Immunodeficiency Virus (HIV) using routinely collected surveillance data. The robust estimation of HIV incidence and prevalence is crucial to correctly evaluate the effectiveness of targeted public health interventions and to accurately predict the HIV-related burden imposed on healthcare services. Bayesian CD4-based multi-state back-calculation methods are a key tool for monitoring the HIV epidemic, providing estimates of HIV incidence and diagnosis rates by disentangling their competing contribution to the observed surveillance data. Improving the effectiveness of public health interventions requires targeting specific age-groups at high risk of infection; however, existing methods are limited in that they do not allow for such subgroups to be identified. Therefore the methodological focus of this thesis lies in developing a rigorous statistical framework for age-dependent back-calculation in order to achieve the joint estimation of age- and time-dependent HIV incidence and diagnosis rates. Key challenges we specifically addressed include ensuring the computational feasibility of the proposed methods, an issue that has previously hindered extensions of back-calculation, and achieving the joint modelling of time- and age-specific incidence. The suitability of non-parametric bivariate smoothing methods for modelling the age- and time-specific incidence has been investigated in detail within comprehensive simulation studies. Furthermore, in order to enhance the generalisability of the proposed model, we developed back-calculation methods that can admit surveillance data less rich in detail; these handle surveillance data collected from an intermediate point of the epidemic, or only available on a coarse scale, and concern both age-dependent and age-independent back-calculation. The applicability of the proposed methods is illustrated using routinely collected surveillance data from England and Wales, for the HIV epidemic among men who have sex with men (MSM).
APA, Harvard, Vancouver, ISO, and other styles
10

Farooq, Umar. "Product Reputation Evaluation based on Multiple Web Sources." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2013.

Full text
Abstract:
The extraction of unstructured data from the Web, and its analysis to determine useful information that customers and manufacturers can use to make decisions about a product, is a challenging task. Some techniques exist to evaluate products based on the ratings and product reviews posted on the Web. However, these techniques have inherent issues and limitations and are therefore not able to fulfil the needs and requirements of both customers and manufacturers. For instance, existing sentiment analysis methods (which classify the opinions in customer reviews about a product as positive or negative) are not able to determine the context of a word in a sentence accurately. In addition, the negation handling methods adopted while determining sentiment are not able to deal with all types of negation, nor do they consider all the exceptions where negations behave differently. Similarly, existing product reputation models are based on a single source, are not robust to false and biased ratings, are not able to reflect recent opinions, do not allow users to evaluate a product on different criteria, and do not provide good estimation accuracy. Furthermore, existing product reputation systems are centralized, which brings issues such as a single point of failure and easily falsified evaluation information, and is not a suitable approach for solving such a complex problem. This thesis proposes methods and techniques for evaluating product reputation based on data available on the Web and for providing valuable information to customers and manufacturers for decision making. These methods perform the following tasks: 1) extract product evaluation data from multiple Web sources; 2) analyse product reviews in order to determine whether opinions about product features in customer reviews are positive or negative; 3) compute different product reputation values while considering different evaluation criteria; and 4) provide the results to customers and manufacturers in order to make decisions. The thesis contributes to three main research areas: 1) feature-level sentiment analysis, 2) product reputation modelling and 3) multi-agent architecture. First, word sense disambiguation and negation handling methods are proposed in order to improve the performance of feature-level sentiment analysis. Second, a novel mathematical model is proposed which computes several reputation values in order to evaluate a product on different criteria. Finally, a multi-agent architecture for review analysis and product evaluation is proposed. A huge amount of the product evaluation data on the Web is in textual form (i.e. product reviews). In order to analyse product reviews, we propose a feature-level sentiment analysis method which determines the opinions about different features of a product. A word sense disambiguation method is introduced which identifies the sense of words according to the context while determining the polarity.
In addition, a negation handling method is proposed which determines the sequence of words affected by different types of negation. The results show that both the word sense disambiguation and negation handling methods improve the overall accuracy of feature-level sentiment analysis. A multi-source product reputation model is proposed in which informative, robust and strategy-proof aggregation methods are introduced to compute different reputation values. Sources from which reviews are extracted may not be credible, hence a source credibility measuring method is proposed in order to avoid malicious web sources. In addition, suitable decay principles for product reputation are introduced in order to quickly reflect the newest opinions about a product. The model also considers several parameters, such as reviewer expertise, rating trustworthiness, time span of ratings, and reviewer age, sex and location, in order to evaluate a product in different ways.
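A toy sketch of the multi-source aggregation idea described above, combining a source-credibility weight with a time-decay factor so that recent opinions from credible sources dominate, is given below. The weighting scheme, half-life and numbers are illustrative assumptions, not the reputation model proposed in the thesis.

```python
import numpy as np

def reputation(ratings, source_credibility, ages_days, half_life_days=90.0):
    """Aggregate ratings (scaled to [0, 1]) gathered from several web sources.

    ratings            : (n,) ratings, one per review
    source_credibility : (n,) credibility weight of the source each review came from
    ages_days          : (n,) age of each review in days (newer reviews count more)
    """
    ratings = np.asarray(ratings, dtype=float)
    decay = 0.5 ** (np.asarray(ages_days, dtype=float) / half_life_days)  # exponential time decay
    w = np.asarray(source_credibility, dtype=float) * decay
    return float(np.sum(w * ratings) / np.sum(w))

# Example: an old glowing review from a low-credibility source barely moves the score.
print(reputation(ratings=[0.9, 0.4, 0.5],
                 source_credibility=[0.2, 0.9, 0.8],
                 ages_days=[400, 10, 30]))
```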
APA, Harvard, Vancouver, ISO, and other styles
11

Yankova-Doseva, Milena. "TERMS - Text Extraction from Redundant and Multiple Sources." Thesis, University of Sheffield, 2010. http://etheses.whiterose.ac.uk/933/.

Full text
Abstract:
In this work we present our approach to the identity resolution problem: discovering references to one and the same object that come from different sources. Solving this problem is important for a number of different communities (e.g. Database, NLP and Semantic Web) that process heterogeneous data where variations of the same objects are referenced in different formats (e.g. textual documents, web pages, database records, ontologies etc.). Identity resolution aims at creating a single view into the data where different facts are interlinked and incompleteness is remedied. We propose a four-step approach that starts with schema alignment of incoming data sources. As a second step - candidate selection - we discard those entities that are totally different from those that they are compared to. Next the main evidence for identity of two entities comes from applying similarity measures comparing their attribute values. The last step in the identity resolution process is data fusion or merging entities found to be identical into a single object. The principal novel contribution of our solution is the use of a rich semantic knowledge representation that allows for flexible and unified interpretation during the resolution process. Thus we are not restricted in the type of information that can be processed (although we have focussed our work on problems relating to information extracted from text). We report the implementation of these four steps in an IDentity Resolution Framework (IDRF) and their application to two use-cases. We propose a rule based approach for customisation in each step and introduce logical operators and their interpretation during the process. Our final evaluation shows that this approach facilitates high accuracy in resolving identity.
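The four-step pipeline summarised above (schema alignment, candidate selection, attribute similarity, data fusion) can be sketched on toy, already schema-aligned records as follows; the similarity measure, weights and threshold are placeholder choices for illustration, not those used in IDRF.

```python
from difflib import SequenceMatcher

# Toy records from two sources, already schema-aligned to (name, city, year).
source_a = [{"id": "a1", "name": "J. Smith",  "city": "Sheffield", "year": 2009},
            {"id": "a2", "name": "A. Jones",  "city": "Leeds",     "year": 2010}]
source_b = [{"id": "b1", "name": "John Smith", "city": "Sheffield", "year": 2009},
            {"id": "b2", "name": "Ann Hall",   "city": "York",      "year": 2011}]

def text_sim(x, y):
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

def candidates(a, b):
    """Candidate selection: discard pairs that are obviously different."""
    return abs(a["year"] - b["year"]) <= 1

def same_entity(a, b, threshold=0.65):
    """Attribute-level similarity evidence, weighted over name and city."""
    score = 0.7 * text_sim(a["name"], b["name"]) + 0.3 * text_sim(a["city"], b["city"])
    return score >= threshold

def fuse(a, b):
    """Data fusion: merge two matched records, preferring the longer name string."""
    merged = dict(a)
    merged["name"] = max(a["name"], b["name"], key=len)
    merged["ids"] = [a["id"], b["id"]]
    return merged

resolved = [fuse(a, b) for a in source_a for b in source_b
            if candidates(a, b) and same_entity(a, b)]
print(resolved)   # expect a1/b1 merged into a single record
```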
APA, Harvard, Vancouver, ISO, and other styles
12

Sitek, Arkadiusz. "The development of multiple line transmission sources for SPECT." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0019/NQ27248.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Selcon, Stephen Jonathan. "Multiple information sources : the effect of redundancy on performance." Thesis, University of Reading, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Yeang, Chen-Hsiang 1969. "Inferring regulatory networks from multiple sources of genomic data." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28731.

Full text
Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 279-299). This thesis addresses the problems of modeling the gene regulatory system from multiple sources of large-scale datasets. In the first part, we develop a computational framework of building and validating simple, mechanistic models of gene regulation from multiple sources of data. These models, which we call physical network models, annotate the network of molecular interactions with several types of attributes (variables). We associate model attributes with physical interaction and knock-out gene expression data according to the confidence measures of data and the hypothesis that gene regulation is achieved via molecular interaction cascades. By applying standard model inference algorithms, we are able to obtain the configurations of model attributes which optimally fit the data. Because existing datasets do not provide sufficient constraints to the models, there are many optimal configurations which fit the data equally well. In the second part, we develop an information theoretic score to measure the expected capacity of new knock-out experiments in terms of reducing the model uncertainty. We collaborate with biologists to perform suggested knock-out experiments and analyze the data. The results indicate that we can reduce model uncertainty by incorporating new data. The first two parts focus on the regulatory effects along single pathways. In the third part, we consider the combinatorial effects of multiple transcription factors on transcription control. We simplify the problem by characterizing a combinatorial function of multiple regulators in terms of the properties of single regulators: the function of a regulator and its direction of effectiveness. With this characterization, we develop an incremental algorithm to identify the regulatory models from protein-DNA binding and gene expression data. These models to a large extent agree with the knowledge of gene regulation pertaining to the corresponding regulators. The three works in this thesis provide a framework of modeling gene regulatory networks.
APA, Harvard, Vancouver, ISO, and other styles
15

Daniels, Reza Che. "The income distribution with multiple sources of survey error." Doctoral thesis, University of Cape Town, 2013. http://hdl.handle.net/11427/5777.

Full text
Abstract:
Includes abstract and bibliographical references. Estimating parameters of the income distribution in public-use micro datasets is frequently complicated by multiple sources of survey error. This dissertation consists of three main chapters that, taken together, provide insight into several important econometric concerns that arise when analysing income from household surveys. The country of interest is South Africa, but despite this geographical specificity, the discussion in each chapter is generalisable to any household survey concerned with measuring any component of income.
APA, Harvard, Vancouver, ISO, and other styles
16

Crawhall, Robert J. H. "EMI potential of multiple sources within a shielded enclosure." Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6750.

Full text
Abstract:
An analytic model is developed for the prediction of electromagnetic emissions potential due to multiple integrated circuits (ICs) within a shielded enclosure. Detailed analysis of radiated emissions from an IC leads to dipole representations of the sources. These dipole sources are then applied in the determination of the fields and currents induced on the inside of the enclosure. The magnitude of these disturbances is taken as a metric of electromagnetic emissions potential. The power spectral density and the radiation efficiency of the ICs are investigated. ICs are represented by magnetic and electric dipoles, the magnitude and polarization of which are determined through measurement or calculation. Green's functions are derived that relate the dipole sources to the electromagnetic disturbances induced on the walls of the enclosure. Mapping matrices are proposed that relate multiple sources to multiple points on the wall. The role of source diversity in the summing problem is discussed. A stochastic analysis of the multiple source problem determines the distribution of disturbances due to known probability distributions of significant source factors. The shielding effectiveness of enclosures is determined using perturbation models of the leakage paths driven by the disturbances calculated through the application of the mapping matrices.
APA, Harvard, Vancouver, ISO, and other styles
17

Wu, Zhenyu, Ali Bilgin, and Michael W. Marcellin. "JOINT SOURCE/CHANNEL CODING FOR TRANSMISSION OF MULTIPLE SOURCES." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604932.

Full text
Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada. A practical joint source/channel coding algorithm is proposed for the transmission of multiple images and videos to reduce the overall reconstructed source distortion at the receiver within a given total bit rate. It is demonstrated that by joint coding of multiple sources with such an objective, both improved distortion performance and reduced quality variation can be achieved at the same time. Experimental results based on multiple images and video sequences justify our conclusion.
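The central idea, spending a shared bit budget across several sources so that overall distortion is minimised, is often realised as a greedy marginal-return allocation over per-source operational rate-distortion curves. The sketch below uses made-up R-D points and a generic greedy allocator; it is a stand-in for the idea, not the algorithm from the paper.

```python
import numpy as np

# Operational distortion-vs-rate tables (one entry per allocation step) for three
# sources; the values are made up but decreasing and roughly convex, as for images/video.
distortion = [np.array([100.0, 60.0, 38.0, 25.0, 18.0, 14.0]),
              np.array([80.0, 45.0, 30.0, 22.0, 17.0, 14.0]),
              np.array([120.0, 70.0, 40.0, 28.0, 20.0, 16.0])]
step_bits = 1000          # bits added to a source per allocation step
budget = 7                # total steps available across all sources

alloc = [0, 0, 0]
for _ in range(budget):
    # Give the next increment of rate to the source with the largest distortion drop.
    gains = [d[a + 1] - d[a] if a + 1 < len(d) else 0.0
             for d, a in zip(distortion, alloc)]
    best = int(np.argmin(gains))              # most negative gain = biggest improvement
    alloc[best] += 1

total_d = sum(d[a] for d, a in zip(distortion, alloc))
print("steps per source:", alloc, "-> total rate:", step_bits * sum(alloc),
      "bits, total distortion:", total_d)
```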
APA, Harvard, Vancouver, ISO, and other styles
18

Coogle, Richard A. "Using multiple agents in uncertainty minimization of ablating target sources." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53036.

Full text
Abstract:
The objective of this research effort is to provide an efficient methodology for a multi-agent robotic system to observe moving targets that are generated from an ablation process. An ablation process is a process where a larger mass is reduced in volume as a result of erosion; this erosion results in smaller, independent masses. An example of such a process is the natural process that gives rise to icebergs, which are generated through an ablation process referred to as ice calving. Ships that operate in polar regions continue to face the threat of floating ice sheets and icebergs generated from the ice ablation process. Although systems have been implemented to track these threats with varying degrees of success, many of these techniques require that the operations are conducted outside of some boundary where the icebergs are known not to drift. Since instances where polar operations must be conducted within such a boundary line do exist (e.g., resource exploration), methods for situational awareness of icebergs for these operations are necessary. In this research, efficacy of these methods is correlated to the initial acquisition time of observing newly ablated targets, as it provides for the ability to enact early countermeasures. To address the research objective, the iceberg tracking problem is defined such that it is re-cast within a class of robotic, multiagent target-observation problems. From this new definition, the primary contributions of this research are obtained: 1) A definition of the iceberg observation problem that extends an existing robotic observation problem to the requirements for the observation of floating ice masses; 2) A method for modeling the activity regions on an ablating source to extract ideal search regions to quickly acquire newly ablated targets; 3) A method for extracting metrics for this model that can be used to assess performance of observation algorithms and perform resource allocation. A robot controller is developed that implements the algorithms that result from these contributions and comparisons are made to existing target acquisition techniques.
APA, Harvard, Vancouver, ISO, and other styles
19

Zahrn, Frederick Craig. "Studies of inventory control and capacity planning with multiple sources." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29736.

Full text
Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2010. Committee Co-Chair: John H. Vande Vate; Committee Co-Chair: Shi-Jie Deng; Committee Member: Anton J. Kleywegt; Committee Member: Hayriye Ayhan; Committee Member: Mark E. Ferguson. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
20

Grobe, Gerrit. "Real options analysis of investments under multiple sources of uncertainty." Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Zhang, Baoping. "Intelligent Fusion of Evidence from Multiple Sources for Text Classification." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28198.

Full text
Abstract:
Automatic text classification using current approaches is known to perform poorly when documents are noisy or when only limited amounts of textual content are available. Yet, many users need access to such documents, which are found in large numbers in digital libraries and on the WWW. If documents are not classified, they are difficult to find when browsing. Further, searching precision suffers when categories cannot be checked, since many documents may be retrieved that fail to meet category constraints. In this work, we study how different types of evidence from multiple sources can be intelligently fused to improve classification of text documents into predefined categories. We present a classification framework based on an inductive learning method -- Genetic Programming (GP) -- to fuse evidence from multiple sources. We show that good classification is possible with documents which are noisy or which have small amounts of text (e.g., short metadata records) -- if multiple sources of evidence are fused in an intelligent way. The framework is validated through experiments performed on documents in two testbeds. One is the ACM Digital Library (using a subset available in connection with CITIDEL, part of NSF's National Science Digital Library). The other is Web data, in particular the portion associated with the Cadê Web directory. Our studies have shown that improvement can be achieved relative to other machine learning approaches if genetic programming methods are combined with classifiers such as kNN. Extensive analysis was performed to study the results generated through the GP-based fusion approach and to understand key factors that promote good classification.
APA, Harvard, Vancouver, ISO, and other styles
22

Benanzer, Todd W. "System Design of Undersea Vehicles with Multiple Sources of Uncertainty." Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1215046954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Meyer, Ryan. "Multiple potential well structure in inertial electrostatic confinement devices." Diss., Columbia, Mo. : University of Missouri-Columbia, 2004. http://hdl.handle.net/10355/4098.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2004. The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on June 30, 2006). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
24

Edwards, Paul Martin. "Intelligent monitoring & management of light sources." Thesis, Cardiff University, 2007. http://orca.cf.ac.uk/54622/.

Full text
Abstract:
A new method for the monitoring of filament lamps and low pressure discharge lamps has been developed. The new technique monitors the electrical characteristics of the lamp to provide real time analysis of the lamp's condition, without the need for additional wires or expensive light sensors. The advent of low-cost microcontrollers developed for electrical metering applications means that not only is this technique technologically practical, it is also financially viable. The deployment of this technology, particularly in the case of UV water sterilizers, would improve safety and save the significant expense and environmental impact of unnecessary replacement lamps.
APA, Harvard, Vancouver, ISO, and other styles
25

Lombard, Anthony [Verfasser]. "Localization of Multiple Independent Sound Sources in Adverse Environments / Anthony Lombard." München : Verlag Dr. Hut, 2012. http://d-nb.info/1029400148/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Rezazadeh, Arezou. "Error exponent analysis for the multiple access channel with correlated sources." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/667611.

Full text
Abstract:
Due to the delay constraints of modern communication systems, studying reliable communication with finite-length codewords is much needed. Error exponents are one approach to studying the finite-length regime from the information-theoretic point of view. In this thesis, we study achievable exponents for single-user communication and for the multiple-access channel with both independent and correlated sources. By studying different coding schemes, including independent and identically distributed, independent and conditionally distributed, message-dependent, generalized constant-composition and conditional constant-composition ensembles, we derive and analyze a number of achievable exponents for both single-user and multi-user communication.
APA, Harvard, Vancouver, ISO, and other styles
27

Ma, J. "Merging and revision of uncertain knowledge and information from multiple sources." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Bruneaux, Luke Julien. "Multiple Unnecessary Protein Sources and Cost to Growth Rate in E.coli." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11041.

Full text
Abstract:
The fitness and macromolecular composition of the gram-negative bacterium E. coli are governed by a seemingly insurmountable level of complexity. However, simple phenomenological measures may be found that describe its systems-level response to a variety of inputs. This thesis explores phenomenological approaches providing accurate quantitative descriptions of complex systems in E. coli. Chapter 1 examines the relationship between unnecessary protein production and growth rate in E. coli. It was previously unknown whether the negative effects on growth rate due to multiple unnecessary protein fractions would add linearly or combine to produce a nonlinear response. Within the regime of this thesis, it appears that the interplay between growth rate and protein is consistent with a non-interacting model: we do not need to account for complex interactions between system components. Appendix A describes a novel technique for real-time measurement of messenger RNA in single living E. coli cells. Using this technique, one may accurately describe the transcriptional response of gene networks in single cells.
APA, Harvard, Vancouver, ISO, and other styles
29

Hall, Kimberlee K., and Phillip R. Scheuerman. "Development of Multiple Regression Models to Predict Sources of Fecal Pollution." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etsu-works/2880.

Full text
Abstract:
This study assessed the usefulness of multivariate statistical tools to characterize watershed dynamics and prioritize streams for remediation. Three multiple regression models were developed using water quality data collected from Sinking Creek in the Watauga River watershed in Northeast Tennessee. Model 1 included all water quality parameters, model 2 included parameters identified by stepwise regression, and model 3 was developed using canonical discriminant analysis. Models were evaluated in seven creeks to determine if they correctly classified land use and level of fecal pollution. At the watershed level, the models were statistically significant (p < 0.001) but with low R² values (Model 1 R² = 0.02, Model 2 R² = 0.01, Model 3 R² = 0.35). Model 3 correctly classified land use in five of seven creeks. These results suggest this approach can be used to set priorities and identify pollution sources, but may be limited when applied across entire watersheds.
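As a rough illustration of the modelling workflow described above (a full multiple regression versus one restricted to stepwise-selected parameters), here is a small sketch on synthetic water-quality data; the variable names, the synthetic response and the use of forward sequential selection as a stand-in for stepwise regression are assumptions for the example, not the study's actual models or data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
n = 120
# Synthetic water-quality predictors (arbitrary names) and a fecal-indicator response.
X = rng.normal(size=(n, 5))
names = ["turbidity", "conductivity", "pH", "temperature", "nitrate"]
y = 2.0 * X[:, 0] + 1.0 * X[:, 4] + rng.normal(scale=1.0, size=n)

# Model 1: all parameters.
m1 = LinearRegression().fit(X, y)
print("full model R^2:", round(m1.score(X, y), 3))

# Model 2: forward selection of two predictors, then refit on the kept columns.
sel = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                direction="forward").fit(X, y)
kept = [names[i] for i in np.where(sel.get_support())[0]]
m2 = LinearRegression().fit(X[:, sel.get_support()], y)
print("reduced model keeps", kept,
      "R^2:", round(m2.score(X[:, sel.get_support()], y), 3))
```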
APA, Harvard, Vancouver, ISO, and other styles
30

Daly, Nancy Ann. "Recognition of words from their spellings : integration of multiple knowledge sources." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14791.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1987. Microfiche copy available in Archives and Engineering. This research was supported by the National Science Foundation and the Defense Advanced Research Projects Agency. Bibliography: leaves 112-114.
APA, Harvard, Vancouver, ISO, and other styles
31

Vickery, Kathryn J. "Southern African dust sources as identified by multiple space borne sensors." Master's thesis, University of Cape Town, 2010. http://hdl.handle.net/11427/4814.

Full text
Abstract:
Includes abstract and bibliographical references (leaves 132-145). Mineral aerosols emitted from arid and semi-arid regions affect global radiation, contribute to regional nutrient dynamics and impact local soil and water quality. Satellite imagery has been central to identifying source areas, determining their distribution, and tracing the trajectories of dust around the globe. This study focuses on the dryland regions of Botswana, Namibia and South Africa. It uses the capabilities of the ultraviolet channels provided by the older Total Ozone Mapping Spectrometer (TOMS), the Ozone Monitoring Instrument (OMI) (a TOMS follow-up), the visible bands of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). The study compares various dust detection products but also focuses on the application of thermal infrared bands from MSG through the new "Pink Dust" visualisation technique using channels 7 (8.7 µm), 9 (10.8 µm), and 10 (12.0 µm). This multisensor approach resulted in regional maps highlighting the distribution of source points and establishing some of the prevalent transport pathways and likely deposition zones. Southern African dust sources include a few large and many small pans, subtle inland depressions and ephemeral river systems, which are subject to a range of climatic conditions as part of the Kalahari and Namib region. This work in particular examines whether source points are productive due to favourable climatic conditions. The debate around transport limitation versus supply limitation can only be resolved at the local scale, which requires observation at the higher spatial and temporal resolution provided by the latest dust detection products. MSG and MODIS in particular have shown distinct source point clusters in the Etosha and Makgadikgadi Pans which, based on the coarser resolution of the older TOMS, have so far been treated as homogeneous sources. Data analyses reveal 327 individual dust plumes over the 2005-2008 study period, some of which are more than 300 km in length. These are integrated into existing climate and weather records provided by National Centers for Environmental Prediction (NCEP) data. The results identified a set of dust drivers such as the Continental High Pressure, Bergwinds, Tropical Temperate and West Coast Troughs, and Westerly and Easterly Wave lows. This enhances our ability to predict such events, in particular if transport acts as the limiting driver. Some of these findings also have the potential to enhance our knowledge of the aerosol generation process elsewhere. The quality of the findings is still limited by problems associated with dust plume substrates and clearly requires significant surface validation relating to hydrological and climatic controls at the micro-scale. It is furthermore evident that no current instrument fully meets the requirements of the mineral aerosol research community.
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Sen. "Disease, Drug, and Target Association Predictions by Integrating Multiple Heterogeneous Sources." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1342194249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Olsen, M. Rolf. "Rating congruence between various management appraisal sources : a study of the relative value of various management appraisal sources and factors affecting rating congruence between these sources." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Argudo, Medrano Oscar. "Realistic reconstruction and rendering of detailed 3D scenarios from multiple data sources." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/620733.

Full text
Abstract:
During the last years, we have witnessed significant improvements in digital terrain modeling, mainly through photogrammetric techniques based on satellite and aerial photography, as well as laser scanning. These techniques allow the creation of Digital Elevation Models (DEM) and Digital Surface Models (DSM) that can be streamed over the network and explored through virtual globe applications like Google Earth or NASA WorldWind. The resolution of these 3D scenes has improved noticeably in the last years, reaching in some urban areas resolutions up to 1 m or less for DEM and buildings, and less than 10 cm per pixel in the associated aerial imagery. However, in rural, forest or mountainous areas, the typical resolution for elevation datasets ranges between 5 and 30 meters, and the typical resolution of the corresponding aerial photographs ranges between 25 cm and 1 m. This level of detail is only sufficient for aerial points of view, but as the viewpoint approaches the surface the terrain loses its realistic appearance. One approach to augment the detail on top of currently available datasets is adding synthetic details in a plausible manner, i.e. including elements that match the features perceived in the aerial view. By combining the real dataset with the instancing of models on the terrain and other procedural detail techniques, the effective resolution can potentially become arbitrary. There are several applications that do not need an exact reproduction of the real elements but would greatly benefit from plausibly enhanced terrain models: videogames and entertainment applications, visual impact assessment (e.g. how a new ski resort would look), virtual tourism, simulations, etc. In this thesis we propose new methods and tools to help the reconstruction and synthesis of high-resolution terrain scenes from currently available data sources, in order to achieve realistic ground-level views. In particular, we focus on rural scenarios, mountains and forest areas. Our main goal is the combination of plausible synthetic elements and procedural detail with publicly available real data to create detailed 3D scenes of existing locations. Our research has focused on the following contributions: an efficient pipeline for aerial imagery segmentation; plausible terrain enhancement from high-resolution examples; super-resolution of DEMs by transferring details from the aerial photograph; synthesis of arbitrary tree picture variations from a reduced set of photographs; reconstruction of 3D tree models from a single image; and a compact and efficient tree representation for real-time rendering of forest landscapes.
APA, Harvard, Vancouver, ISO, and other styles
35

Tremblay, Monica Chiarini. "Uncertainty in the information supply chain : integrating multiple health care data sources." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Endress, William. "Merging Multiple Telemetry Files from Widely Separated Sources for Improved Data Integrity." International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581824.

Full text
Abstract:
Merging telemetry data from multiple data sources into a single file provides the ability to fill in gaps in the data and reduce noise by taking advantage of the multiple sources. This is desirable when analyzing the data, as there is only one file to work from. Also, analysts will spend less time trying to explain away gaps and spikes in the data that are attributable to dropped and noisy telemetry frames, leading to more accurate reports. This paper discusses the issues involved in performing the merge and their solutions.
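The merging idea described in the abstract can be illustrated with a minimal sketch that is not the paper's implementation: frames from several receiving sites are keyed by frame time and, where two sites captured the same frame, the copy with the better quality indicator wins, so gaps and noisy frames in one stream are filled from another. The record fields (`time`, `quality`, `payload`) are assumptions made for the example.

```python
# Minimal sketch: merge telemetry streams by frame time, preferring the best-quality copy.
def merge_streams(*streams):
    best = {}
    for stream in streams:
        for frame in stream:
            t = frame["time"]
            if t not in best or frame["quality"] > best[t]["quality"]:
                best[t] = frame
    return [best[t] for t in sorted(best)]

site_a = [{"time": 0, "quality": 0.9, "payload": b"\x01"},
          {"time": 2, "quality": 0.4, "payload": b"\x03"}]   # frame at t=1 was dropped here
site_b = [{"time": 1, "quality": 0.8, "payload": b"\x02"},
          {"time": 2, "quality": 0.9, "payload": b"\x03"}]
merged = merge_streams(site_a, site_b)
print([f["time"] for f in merged])   # [0, 1, 2] -> gap filled, best t=2 frame kept
```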
APA, Harvard, Vancouver, ISO, and other styles
37

Liao, Zhining. "Query processing for data integration from multiple data sources over the Internet." Thesis, University of Ulster, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Huff, Amy K. "Multiple stable oxygen isotope analysis of atmospheric carbon monoxide and its sources /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9835376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tecle, Ghebremuse Emehatsion. "Investigation and explanation of major factors affecting academic writing : using multiple sources." Thesis, University College London (University of London), 1998. http://discovery.ucl.ac.uk/10007488/.

Full text
Abstract:
This study investigates whether teaching writing using a multiple-sources approach (TWUMSA) is more effective than the current traditional approach to teaching writing for academic purposes. The main research questions are: Can teaching writing using multiple sources (on a topic) lead to improved academic writing? And what is the nature of the intertextual links made by the subjects (students) in the study? 112 subjects (56 control and 56 experimental) took part in the study. The experimental groups received instruction based on a teaching approach using multiple sources, which involves understanding and organizing texts; selecting, generating and connecting ideas; paraphrasing; and integrating citations and documenting sources. The control groups received instruction based on the current traditional approach for 16 weeks. An independent-samples t-test comparison of means was applied, and the results of the post-tests (phases I and II) show statistically significant differences between the multiple-sources approach and the current traditional approach. The relationship between prior knowledge of subject matter and post-test performance is modestly positive. The analysis of the subjects' essays reveals that more subjects in the control groups composed their essays using information from, for instance, the second text, then moved to the first or the third text one after the other, but did not return to any text they had already drawn on; more subjects in the experimental groups, by contrast, composed some content units from one text, moved to another, and returned now and then to texts they had already drawn on. The intertextual links made by the experimental groups thus appear more interconnected and interwoven than those of the control groups. Three major categories of composed content units (CUs) are established: (1) directly copied CUs, (2) paraphrased CUs, and (3) generated CUs. On the basis of the content units they exhibited, the subjects are classified into five kinds of writers: 'compilers', 'harmonizers', 'constructivists', 'dualists', and 'paraphrasers'. Thirty-eight lecturers teaching at sophomore level at the University of Asmara, Eritrea, and 200 sophomores completed questionnaires accompanied by verbal rating scales (a. very high, b. high, c. moderate, d. little). The lecturers' ratings of (some similar) statements in the questionnaires are lower than the sophomores' ratings. The students' responses indicate their unfamiliarity with writing using sources, positive attitudinal changes toward writing using sources, and a mostly moderate perception of their ability to write using other texts. The students also linked the benefits of writing using sources to other courses. The experimental groups appear to be better users of strategies and activities in the process of writing using sources. Analysis of the interview data revealed some causes of the problems the interviewees faced when writing using sources. The interviewees stressed the importance of prior knowledge to writing, and also reported positive attitudinal changes toward learning through TWUMSA.
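The statistical comparison mentioned in the abstract, an independent-samples t-test on post-test means, can be sketched as follows. The scores are invented; only the test itself reflects the procedure described.

```python
# Sketch of an independent-samples t-test on post-test writing scores (made-up data).
from scipy import stats

experimental = [72, 68, 75, 80, 71, 77, 74, 69, 73, 78]
control      = [65, 60, 70, 66, 62, 68, 64, 63, 67, 61]

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> statistically significant difference
```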
APA, Harvard, Vancouver, ISO, and other styles
40

Lim, Young Shin. "Effects of Likability of Multiple Layers of Sources on Social Network Sites." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461252155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Sama, Sanjana. "An Empirical Study Investigating Source Code Summarization Using Multiple Sources of Information." Youngstown State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1527673352984124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Cooke, Payton. "Comparative Analysis of Multiple Data Sources for Travel Time and Delay Measurement." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/622847.

Full text
Abstract:
Arterial performance measurement is an essential tool for both researchers and practitioners, guiding decisions on traffic management, future improvements, and public information. Link travel time and intersection control delay are two primary performance measures that are used to evaluate arterial level of service. Despite recent technological advancements, collecting travel time and intersection delay data can be a time-consuming and complicated process. Limited budgets, numerous available technologies, a rapidly changing field, and other challenges make performance measurement and comparison of data sources difficult. Three common data collection sources (probe vehicles, Bluetooth media access control readers, and manual queue length counts) are often used for performance measurement and validation of new data methods. Comparing these and other data sources is important as agencies and researchers collect arterial performance data. This study provides a methodology for comparing data sources, using statistical tests and linear correlation to compare methods and identify strengths and weaknesses. Additionally, this study examines data normality as an issue that is seldom considered, yet can affect the performance of statistical tests. These comparisons can provide insight into the selection of a particular data source for use in the field or for research. Data collected along Grant Road in Tucson, Arizona, was used as a case study to evaluate the methodology and the data sources. For evaluating travel time, GPS probe vehicle and Bluetooth sources produced similar results. Bluetooth can provide a greater volume of data more easily in addition to samples large enough for more rigorous statistical evaluation, but probe vehicles are more versatile and provide higher resolution data. For evaluating intersection delay, probe vehicle and queue count methods did not always produce similar results.
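A hedged sketch of the kind of comparison workflow the abstract describes: check each source's travel-time sample for normality (since normality affects which statistical tests are appropriate) and measure the linear correlation between matched runs from two sources. The travel times below are invented, and the sketch is not the study's actual analysis.

```python
# Illustrative comparison of two travel-time data sources (invented values, in seconds).
from scipy import stats

probe_times     = [182, 195, 176, 201, 188, 179, 210, 185]   # GPS probe vehicle runs
bluetooth_times = [180, 198, 174, 205, 190, 181, 207, 183]   # matched Bluetooth MAC-reader runs

for name, sample in [("probe", probe_times), ("bluetooth", bluetooth_times)]:
    w, p = stats.shapiro(sample)                               # normality check
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")                 # low p -> prefer nonparametric tests

r, p = stats.pearsonr(probe_times, bluetooth_times)            # linear correlation between sources
print(f"Pearson r = {r:.3f} (p = {p:.4f})")
```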
APA, Harvard, Vancouver, ISO, and other styles
43

Pather, Direshin. "A model for context awareness for mobile applications using multiple-input sources." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/2969.

Full text
Abstract:
Context-aware computing enables mobile applications to discover and benefit from valuable context information, such as user location, time of day and current activity. However, determining the users’ context throughout their daily activities is one of the main challenges of context-aware computing. With the increasing number of built-in mobile sensors and other input sources, existing context models do not effectively handle context information related to personal user context. The objective of this research was to develop an improved context-aware model to support the context awareness needs of mobile applications. An existing context-aware model was selected as the most complete model to use as a basis for the proposed model. The existing model was modified to address the shortcomings of existing models in dealing with context information related to personal user context. The proposed model supports four different context dimensions, namely Physical, User Activity, Health and User Preferences. A prototype, called CoPro, was developed based on the proposed model to demonstrate the effectiveness of the model. Several experiments were designed and conducted to determine whether CoPro was effective, reliable and capable. CoPro was considered effective as it produced low-level context as well as inferred context. The reliability of the model was confirmed by evaluating CoPro using Quality of Context (QoC) metrics such as Accuracy, Freshness, Certainty and Completeness. CoPro was also found to be capable of dealing with the limitations of the mobile computing platform, such as limited processing power. The research determined that the proposed context-aware model can be used to successfully support context awareness in mobile applications. Design recommendations were proposed, and future work will involve converting the CoPro prototype into middleware in the form of an API to provide easier access to context awareness support in mobile applications.
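As an illustration only (not the CoPro code), a context observation could carry the four Quality-of-Context metrics named above alongside its value; the field names and the linear freshness decay are assumptions made for the example.

```python
# Hypothetical context record carrying QoC metrics (Accuracy, Freshness, Certainty, Completeness).
from dataclasses import dataclass
import time

@dataclass
class ContextObservation:
    dimension: str        # e.g. "Physical", "User Activity", "Health", "User Preferences"
    value: object         # e.g. "walking", a location tuple, a heart rate
    accuracy: float       # 0..1, how close the value is believed to be to reality
    certainty: float      # 0..1, confidence of the inference that produced it
    completeness: float   # 0..1, fraction of required inputs that were available
    timestamp: float      # seconds since epoch, used to derive freshness

    def freshness(self, max_age_s: float = 60.0) -> float:
        """1.0 for a brand-new observation, decaying linearly to 0 after max_age_s."""
        age = time.time() - self.timestamp
        return max(0.0, 1.0 - age / max_age_s)

obs = ContextObservation("User Activity", "walking", 0.8, 0.7, 1.0, time.time())
print(obs.freshness())
```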
APA, Harvard, Vancouver, ISO, and other styles
44

Page, Scott F. "Multiple objective sensor management and optimisation." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/66600/.

Full text
Abstract:
One of the key challenges associated with exploiting modern Autonomous Vehicle technology for military surveillance tasks is the development of Sensor Management strategies which maximise the performance of the on-board Data-Fusion systems. The focus of this thesis is the development of Sensor Management algorithms which aim to optimise target tracking processes. Three principal theoretical and analytical contributions are presented which are related to the manner in which such problems are formulated and subsequently solved. Firstly, the trade-offs between optimising target tracking and other system-level objectives relating to expected operating lifetime are explored in an autonomous ground sensor scenario. This is achieved by modelling the observer trajectory control design as a probabilistic, information--theoretic, multiple-objective optimisation problem. This novel approach explores the relationships between the changes in sensor-target geometry that are induced by tracking performance measures and those relating to power consumption. This culminates in a novel observer trajectory control algorithm based on the minimax approach. The second contribution is an analysis of the propagation of error through a limited-lookahead sensor control feedback loop. In the last decade, it has been shown that the use of such non-myopic (multiple-step) planning strategies can lead to superior performance in many Sensor Management scenarios. However, relatively little is known about the performance of strategies which use different horizon lengths. It is shown that, in the general case, planning performance is a function of the length of the horizon over which the optimisation is performed. While increasing the horizon maximises the chances of achieving global optimality, by revealing information about the substructure of the decision space, it also increases the impact of any prediction error, approximations, or unforeseen risk present within the scenario. These competing mechanisms are demonstrated using an example tracking problem. This provides the motivation for a novel sensor control methodology that employs an adaptive length optimisation horizon. A route to selecting the optimal horizon size is proposed, based on a new non-myopic risk equilibrium which identifies the point where the two competing mechanisms are balanced. The third area of contribution concerns the development of a number of novel optimisation algorithms aimed at solving the resulting sequential decision making problems. These problems are typically solved using stochastic search methods such as Genetic Algorithms or Simulated Annealing. The techniques presented in this thesis are extensions of the recently proposed Repeated Weighted Boosting Search algorithm. In its original form, it is only applicable to continuous, single-objective, optimisation problems. The extensions facilitate application to mixed search spaces and Pareto multiple-objective problems. The resulting algorithms have performance comparable with Genetic Algorithm variants, and offer a number of advantages such as ease of implementation and limited tuning requirements.
APA, Harvard, Vancouver, ISO, and other styles
45

Pinto, Jonathan Hunder Dutra Gherard. "Conversor modular multinível aplicado a sistema híbrido de armazenamento de energia." Universidade Federal de Juiz de Fora (UFJF), 2018. https://repositorio.ufjf.br/jspui/handle/ufjf/6501.

Full text
Abstract:
This work contributes the development of a voltage equalization strategy for a modular multilevel converter, as an integral part of a hybrid energy storage system. The modular multilevel converter connects supercapacitor modules in series, which makes it possible to increase the voltage without compromising fast energy transfer. Compared with other topologies, this work makes it possible to reduce the quantity, volume and mass of the magnetic element of the converter structure. A lithium-ion battery bank is also integrated into the system through a voltage boost converter. As the source with the highest energy density, it supplies the average power required by the load. Its association with a source capable of fast energy transfer improves dynamic performance, energy efficiency and battery service life. The result is a hybrid energy storage system that requires management strategies for multiple supply sources. The simulation results, obtained from the estimated power demand of an electric vehicle prototype, are adequate and provide the foundations needed to build a prototype.
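The division of labour described above, with the battery supplying the average power while the supercapacitor modules absorb fast energy transfers, is often realized with a low-pass filter on the demanded power. The sketch below shows that standard technique with invented numbers; it is not the control strategy developed in the thesis.

```python
# Generic power split for a hybrid storage system: low-pass filtered demand -> battery,
# residual fast transients -> supercapacitors. Time constant and demand profile are invented.
def split_power(demand, dt=0.1, tau=5.0):
    battery, supercap, p_batt = [], [], 0.0
    alpha = dt / (tau + dt)                    # first-order low-pass filter coefficient
    for p in demand:
        p_batt += alpha * (p - p_batt)         # slowly varying share for the battery
        battery.append(p_batt)
        supercap.append(p - p_batt)            # fast residual for the supercapacitors
    return battery, supercap

# Step in demand: 2 kW cruise, brief 8 kW acceleration, back to cruise
demand = [2000.0] * 50 + [8000.0] * 20 + [2000.0] * 50
batt, sc = split_power(demand)
print(round(batt[55]), round(sc[55]))          # battery ramps slowly, supercap covers the step
```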
APA, Harvard, Vancouver, ISO, and other styles
46

Sivertsson, Yulia. "Management accountants´ participation in strategic management processes: multiple-case study." Thesis, Högskolan Dalarna, Institutionen för kultur och samhälle, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-37445.

Full text
Abstract:
Aim – The aim of this study is to explore how Management accountants (MAs) participate in strategic management processes today and to explain the reasons for potential differences in the involvement of MAs in strategic management processes between different organizations. Method – The study is based on a multiple-case study approach conducted among three independent companies in Sweden. Information from semi-structured interviews with MAs and archival data in the form of job announcements for senior MA positions is used to analyze and cross-check the relationship. The time horizon is cross-sectional. Findings – The study shows that the involvement of MAs in strategic management processes varies considerably within organizations, influenced by the following factors: personal traits, business knowledge, relationship with management and established role. Some major variations at the cross-company level are identified between subsidiary and HQ, and between representatives of different capital ownership forms. Conclusions – The study suggests that power imbalance in organizations hinders MAs from applying critical thinking and expressing objective opinions, which makes it difficult to claim a fully explicit business-partner role. The process of MAs’ involvement in strategic management decision making is a product of the interrelation between two strategies for legitimizing truth claims proposed by Heizmann and Olsson (2015): executing power of authority and executing power of expertise.
APA, Harvard, Vancouver, ISO, and other styles
47

Habool, Al-Shamery Maitham. "Reconstruction of multiple point sources by employing a modified Gerchberg-Saxton iterative algorithm." Thesis, University of Sussex, 2018. http://sro.sussex.ac.uk/id/eprint/79826/.

Full text
Abstract:
Digital holograms have been developed and used in many applications. They are a technique by which a wavefront can be recorded and then reconstructed, often even in the absence of the original object. In this project, we use digital holography methods in which the original object amplitude and phase are recorded numerically, which allows these data to be downloaded to a spatial light modulator (SLM). This provides digital holography with capabilities that are not available using optical holographic methods. The digitally reconstructed holographic image can be refocused to different depths depending on the reconstruction distance. This remarkable aspect of digital holography can be useful in many applications, and one of the most beneficial is biological cell studies. In this research, point-source digital in-line and off-axis holography with numerical reconstruction has been studied. The point-source hologram can be used in many biological applications. As the original object we use the binary amplitude Fresnel zone plate, which is made of rings with alternating opaque and transparent transmittance. The in-line hologram of a spherical wave of wavelength λ emanating from the point source is initially employed in the project. We subsequently employ an off-axis point source in which the original point-source object is translated away from its original on-axis location. Firstly, we create the binary amplitude Fresnel zone plate (FZP), which is considered the hologram of the point source. We determine a phase-only digital hologram calculation technique for the single point-source object. We have used a modified Gerchberg-Saxton algorithm (MGSA) instead of the non-iterative algorithm employed in classical analogue holography. The first complex amplitude distribution, i(x, y), is the result of the Fourier transform of the point-source phase combined with a random phase. This complex field distribution is the input of the iteration process. Secondly, we propagate this light field by using the Fourier transform method. Next, we apply the first constraint by modifying the amplitude distribution, that is, by replacing it with the measured modulus and keeping the phase distribution unchanged. We use the root mean square error (RMSE) criterion between the reconstructed field and the target field to control the iteration process. The RMSE decreases at each iteration, giving rise to an error reduction in the reconstructed wavefront. We then extend this method to the reconstruction of multiple point sources. Thus, the overall aim of this thesis has been to create an algorithm that is able to reconstruct multi-point-source objects from only their modulus. The method could then be used for biological microscopy applications in which it is necessary to determine the position of a fluorescing source from within a volume of biological tissue.
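The iteration outlined above can be sketched with a classic Gerchberg-Saxton loop in NumPy: propagate with an FFT, re-impose the measured modulus while keeping the phase, propagate back, and monitor the RMSE. This is a generic illustration, not the thesis' modified algorithm; the grid size and toy point-source pattern are invented.

```python
# Classic Gerchberg-Saxton-style phase retrieval loop (generic sketch, not the MGSA itself).
import numpy as np

def gerchberg_saxton(source_modulus, target_modulus, iterations=100, seed=0):
    """Find a phase so that |field| = source_modulus in the input plane while
    |FFT(field)| approximates target_modulus in the output plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_modulus.shape)   # random starting phase
    field = source_modulus * np.exp(1j * phase)
    for _ in range(iterations):
        far = np.fft.fft2(field)                                  # propagate forward
        far = target_modulus * np.exp(1j * np.angle(far))         # impose measured modulus, keep phase
        near = np.fft.ifft2(far)                                  # propagate back
        field = source_modulus * np.exp(1j * np.angle(near))      # re-impose input modulus
        rmse = np.sqrt(np.mean((np.abs(np.fft.fft2(field)) - target_modulus) ** 2))
    return np.angle(field), rmse

# Toy example: two bright 'point sources' as the desired output modulus
target = np.zeros((64, 64)); target[20, 20] = target[40, 45] = 1.0
source = np.ones((64, 64))                                        # uniform illumination
phase, rmse = gerchberg_saxton(source, target)
print(f"final RMSE: {rmse:.4f}")                                  # decreases as the loop runs
```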
APA, Harvard, Vancouver, ISO, and other styles
48

Swartling, Mikael. "Direction of Arrival Estimation and Localization of Multiple Speech Sources in Enclosed Environments." Doctoral thesis, Blekinge Tekniska Högskola, Avdelningen för elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00520.

Full text
Abstract:
Speech communication is gaining in popularity in many different contexts as technology evolves. With the introduction of mobile electronic devices such as cell phones and laptops, and fixed electronic devices such as video and teleconferencing systems, more people are communicating which leads to an increasing demand for new services and better speech quality. Methods to enhance speech recorded by microphones often operate blindly without prior knowledge of the signals. With the addition of multiple microphones to allow for spatial filtering, many blind speech enhancement methods have to operate blindly also in the spatial domain. When attempting to improve the quality of spoken communication it is often necessary to be able to reliably determine the location of the speakers. A dedicated source localization method on top of the speech enhancement methods can assist the speech enhancement method by providing the spatial information about the sources. This thesis addresses the problem of speech-source localization, with a focus on the problem of localization in the presence of multiple concurrent speech sources. The primary work consists of methods to estimate the direction of arrival of multiple concurrent speech sources from an array of sensors and a method to correct the ambiguities when estimating the spatial locations of multiple speech sources from multiple arrays of sensors. The thesis also improves the well-known SRP-based methods with higher-order statistics, and presents an analysis of how the SRP-PHAT performs when the sensor array geometry is not fully calibrated. The thesis is concluded by two envelope-domain-based methods for tonal pattern detection and tonal disturbance detection and cancelation which can be useful to further increase the usability of the proposed localization methods. The main contribution of the thesis is a complete methodology to spatially locate multiple speech sources in enclosed environments. New methods and improvements to the combined solution are presented for the direction-of-arrival estimation, the location estimation and the location ambiguity correction, as well as a sensor array calibration sensitivity analysis.
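A building block behind the SRP-PHAT methods mentioned above is GCC-PHAT time-delay estimation between a pair of microphones, sketched below with synthetic signals and a known delay of 5 samples; this is a textbook illustration, not code from the thesis.

```python
# GCC-PHAT time-delay estimation between two microphone signals (synthetic example).
import numpy as np

def gcc_phat(x, y):
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                        # PHAT weighting: keep only phase information
    cc = np.fft.irfft(R, n)
    shift = int(np.argmax(np.abs(cc)))
    return shift if shift < n // 2 else shift - n # wrap circular lag to a signed delay

rng = np.random.default_rng(1)
s = rng.standard_normal(1024)
mic1 = s + 0.05 * rng.standard_normal(1024)
mic2 = np.roll(s, 5) + 0.05 * rng.standard_normal(1024)   # delayed, noisy copy
print(gcc_phat(mic2, mic1))                                # ~5 samples
```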
APA, Harvard, Vancouver, ISO, and other styles
49

Yang, Shih-Feng, and 楊士鋒. "Multiple Source Data Management for Gadget Creation on Web Portals." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/63943530406973328048.

Full text
Abstract:
Master's thesis, National Central University, Institute of Computer Science and Information Engineering, academic year 96 (ROC calendar). The Web 2.0 trend has brought more and more users onto the World Wide Web and allowed them to create and share their information on the Web. Personal information integration has therefore become an important research domain. Personal portals, such as MyYahoo, iGoogle and Netvibes, provide end-users with a convenient platform to manage their desired web information. All the applications in a personal portal can be easily added by users, and there is no application update problem. Users can add any kind of gadget, such as news, stocks, calendar and e-mail, into their personal portal, and manage all the information through an interactive interface. Although a personal portal makes it easy for people to manage their information, it is hard for a non-expert user to design a gadget. In this paper, we present an online gadget creation service to help non-expert users who want to create a gadget to manage their desired web information. We propose a simple process to create a gadget which can monitor updates to web pages and present the gadget in several different forms. Additionally, users can extract web information from multiple data sources through the Page Fetch Plan process and accomplish personal information integration. Finally, users can easily share their gadgets with friends through the personal portal platform.
APA, Harvard, Vancouver, ISO, and other styles
50

Chiang, Ming-Sung, and 江明松. "Design and Implementation of Multiple Source Charging Management System for Portable Devices." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/79534830314453503068.

Full text
Abstract:
Master's thesis, National Dong Hwa University, Department of Electrical Engineering, academic year 97 (ROC calendar). This thesis presents a multiple-source battery charging system for portable devices. It integrates energy from a photovoltaic (PV) module, a wind turbine generator (WTG) and an AC adapter to charge a Li-ion battery, in order to increase power quality, reduce grid power consumption and prolong the battery utilization period. The final objective is to replace the conventional battery charger IC realized with an LDO. The proposed system has the following features: (1) multiple PWM converters for converting the PV and wind power and controlling the charging current from the adapter; (2) an effective energy management strategy for utilizing the renewable power, reducing the grid power and prolonging the battery life cycle; (3) intelligent maximum power point tracking (MPPT) control of the PV module and WTG to increase the conversion efficiency; and (4) three-stage charging of the battery to ensure the highest state of charge (SOC). This thesis covers the multiple-source power management configuration for portable devices, the characteristics of the PV module and micro WTG, the design of power converters and MPPT controllers for the PV module and micro WTG, the design of an input voltage controller with a current-source input, the design of a power balance controller for the multiple sources, the design of the battery charger, the construction of a LabVIEW-based supervisory system, and the simulation and circuit implementation of the proposed system.
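The intelligent MPPT control named above is commonly realized with a perturb-and-observe loop; the sketch below shows that generic technique, not necessarily the controller designed in the thesis. The quadratic P-V curve and the step size are invented.

```python
# Generic perturb-and-observe MPPT loop on a toy P-V curve (illustration only).
def pv_power(v):
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)   # invented curve, peak near 17 V

def perturb_and_observe(v=12.0, step=0.2, iterations=100):
    p_prev = pv_power(v)
    direction = +1.0
    for _ in range(iterations):
        v += direction * step                # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                       # power dropped -> reverse perturbation direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~ {v_mpp:.1f} V, {p_mpp:.1f} W")   # settles near the 17 V peak
```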
APA, Harvard, Vancouver, ISO, and other styles
